Saturday, December 6, 2008

Save Time While Building a Website

To increase productivity and save time, we can use the tools described below.
Drop Down Menus :: All Web Menus (http://www.likno.com/allwebmenusinfo.html)
Whenever I need to add a complex drop-down menu to a site, I turn to All Web Menus. This program styles a complex multi-level menu in less time than it takes to type the text. All websites need menus, and for detailed menus you should try All Web Menus.
Time saved: 1 Hour per menu.
Image Capture :: Gadwin PrintScreen (http://www.gadwin.com/printscreen/)
The Gadwin PrintScreen utility replaces the standard Windows Print Screen key with more options and flexibility. It loads on Windows startup and runs silently in the background without consuming system resources, but it's always there when you need it. Instead of just capturing the entire screen, Gadwin lets you select a rectangular area very precisely with the help of a built-in magnifying window. This tool is invaluable to my work; I often use it to grab different pieces to mock up a design.
Time saved: 2 Minutes per screen capture.
Batch Image Resizing :: Multiple Image Resizer .NET (http://www.multipleimageresizer.net/)
I tried 10-15 different batch resizing programs before settling on Multiple Image Resizer .NET. This program's ease of use is unparalleled, and it has just about any function you could desire. Simply select a folder of images, choose whether to resize, add a border, add text, overlay images, crop, rotate or flip them as a batch, and save the time of editing the images one by one. I often use it to add small watermarks to a group of images I'm uploading to a client's website.
Time saved: 20 Minutes for a group of 20 images to an hour or more for a group of 100+.
Web Forms :: My Contact Form (http://www.mycontactform.com)
Web forms take time to build, time to style and time to test. I use My Contact Form because it handles all of these steps in one place, with more options than I can easily code. The forms are highly customizable, including attachments, multiple recipients, recipient selection and complete control over the look of the form.
Time saved: 30 Minutes to 1 hour per form.
CSS Text Boxes :: Rounded Cornr (http://www.roundedcornr.com/)
Once you have your rough draft laid out in CSS and you need to start adding style, Rounded Cornr can save you a lot of time. It quickly and easily creates the images and CSS code for different box styles in an easy-to-use interface. It also offers an option to use a single image for all four corners, saving a small amount of bandwidth.
Time Saved: 10 Minutes per style.
Vectorizing Images :: Vector Magic (http://vectormagic.com/)
A lot of the time, I'm designing images in Photoshop. I do all of my logo work in Illustrator, but I often need to convert another one of my designs to a vector so I can resize it without a loss in quality. Vector Magic is a free online service that does this amazingly well. I was skeptical at first, but after answering a few short questions about my image, it created a high-quality vector image from my JPEG. Doing it by hand for complex images would take a lot of time and not turn out quite as precise.
Time Saved: 1-3 Hours per complex vector.
Selecting Color Schemes :: Adobe Kuler (http://kuler.adobe.com/)
This is one site you've probably heard of before, but I find it easier to use, and with more advanced features, than the other color scheme utilities available online. Add Adobe AIR and you can use this tool even when you're not connected to the internet. Kuler also offers many premade color schemes that are an excellent source of inspiration for your website.
Time Saved: 5-10 Minutes per scheme over self-selection.
Creating a Patterned Background :: Stripe Generator (http://www.stripegenerator.com/)
A simple patterned background can add a much-needed professional finish to your websites. Sometimes, if a site of mine is looking amateur despite a clean design, all it needs is a touch of design in the background. Stripe Generator, as its name implies, generates a striped background based on your input criteria. You can change all the colors and create a wide variety of non-distracting backgrounds.
Time Saved: 15 Minutes over creating it manually.
Building a Quick Photo Gallery :: Flickr Slidr (http://flickrslidr.com/)
First off, I'm partial to Thickbox, but it isn't always the easiest to configure exactly as you like, and it can require other programs like Multiple Image Resizer. Flickr Slidr creates a user-controlled photo slideshow that pulls the pictures directly from your Flickr account. You can create a separate account for each project and spend much less time tweaking the look of the already well-designed gallery.
Time Saved: 25 Minutes over Thickbox.
Testing Your Website in Multiple Browsers :: Browser Shots (http://browsershots.org/)
It's extremely important that your websites look good in all browsers (or at a minimum, all widely used ones). Most people don't have many browsers installed, and even if they did, the time it would take to open a site in each one would be huge. That's where Browser Shots comes in. Enter your URL and choose all the browsers you'd like to test. Over 50 browsers are currently supported, and you can even specify different screen sizes. Browser Shots then shows you a screenshot of what your website looks like in each browser.
Time Saved: 1-2 Hours if you choose all the browsers.

Friday, December 5, 2008

2007 Internet Quiz

I scored 96% in the 2007 internet quiz.

You can also try it at http://www.justsayhi.com/bb/internet

Monday, March 17, 2008

Important and advanced concepts in SQL Server 2005

Introduction
This article discusses some of the most important and advanced concepts in SQL Server and how you can improve the performance of SQL Server by following best practices.

CLR integration
Any code that runs under the hood of the Common Language Runtime (CLR) is managed code. The CLR is the core of the .NET environment, providing all the services necessary for executing managed code. SQL Server 2005 is tightly integrated with the CLR, which enables developers to create stored procedures, triggers, user-defined functions, aggregates and user-defined types in managed code.

T-SQL is the best fit when you need to perform a little procedural logic and access data on the server. If your data must undergo complex logic, the better option is managed code. For data-intensive operations T-SQL is the easier approach, but it lacks ease of programming: you can end up writing many lines of code to simulate operations on characters and strings, arrays, collections, bit shifting and so forth. For mathematical operations and regular expressions you want a language that provides an easy, clean, yet powerful way of handling them; implementing such operations in pure T-SQL quickly becomes annoying.
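
As a minimal sketch of the idea, a regular-expression match can be exposed to T-SQL as a CLR user-defined function (the class and method names here are illustrative, assuming a SQL Server 2005 CLR project):

using System.Data.SqlTypes;
using System.Text.RegularExpressions;
using Microsoft.SqlServer.Server;

public class RegexFunctions
{
    // Becomes a scalar UDF once the assembly is catalogued with
    // CREATE ASSEMBLY / CREATE FUNCTION.
    [SqlFunction(IsDeterministic = true, IsPrecise = true)]
    public static SqlBoolean RegexIsMatch(SqlString input, SqlString pattern)
    {
        if (input.IsNull || pattern.IsNull)
            return SqlBoolean.Null;
        return new SqlBoolean(Regex.IsMatch(input.Value, pattern.Value));
    }
}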

Integrating DML operations with managed code also helps you factor logic into classes and namespaces, somewhat similar to the way schemas organize objects in the database. Having said this, it should be understood that integrating the CLR into SQL Server does not replace the business tier of your application. The benefits of integrating the CLR with SQL Server include:

The T-SQL statements that you execute always run on the server. When you want to distribute load between the client and the server, you can use managed code: critical logic can run on the client side, so the server stays busy only with data-intensive operations.
SQL Server does provide extended stored procedures for reaching certain system-level functions from T-SQL, but using them can compromise the integrity of the server. Managed code, by contrast, provides type safety, effective memory management and better synchronization of services, tightly integrated with the CLR and hence with SQL Server 2005. Integrating the CLR with SQL Server therefore provides a scalable and safer means of accomplishing tasks that are tough or nearly impossible in T-SQL.
The .NET Framework provides rich support for XML-based operations from managed code; although SQL Server supports XML operations itself, you can often perform them in .NET with less effort than in T-SQL scripts.
Nested transactions in T-SQL have limitations when dealing with loopback connections, whereas this can be handled better in managed code by setting "enlist=false" in the connection string.
In T-SQL you cannot return rows to the caller from the middle of an operation; the result set becomes available only when execution finishes. This streaming of partial results, termed pipelining, can be achieved with CLR integration (see the sketch below).
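
As a rough sketch of pipelining, a CLR stored procedure can stream rows to the caller through SqlContext.Pipe as they are read (the procedure name and query are illustrative):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class StreamingProcs
{
    [SqlProcedure]
    public static void StreamEmployees()
    {
        // The context connection reuses the caller's own connection.
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT LoginID FROM HumanResources.Employee", conn);
            // ExecuteAndSend forwards each row to the client as it is
            // produced instead of buffering the whole result set.
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}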
If you check your database configuration, you will notice that CLR integration is turned off by default. It can be enabled or disabled by setting the "clr enabled" option to 1 or 0. When CLR integration is disabled, all executing CLR routines are unloaded across all application domains. To turn it on, use the following.

Listing 1
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;

The RECONFIGURE statement ensures that you need not restart the server for the change in configuration to take effect. However, if one among several configuration options fails, none of the configured values takes effect.

Configuring this option requires the ALTER SETTINGS server-level permission; to have it, you need to be a member of the serveradmin or sysadmin role.

Best Practices to Improve Performance
Performance is a measure of the response time you get for any operation you perform against the server. Modern databases are designed so that they do not halt the business as load increases, but the performance of the database in an enterprise project is usually given a low priority in the initial stages of design. Poor database design can lead to slow-running transactions, excessive blocking, poor resource balancing and so forth, which can cost a great deal of time and money to fix.

So why do we care about performance at all? Better performance provides faster transactions and good scalability: more batch processing jobs get done in less time with less downtime, users see better response times, and services stay fast even under increased load. Performance should be considered from the day we start designing the database; as the complexity of the design grows, it becomes harder and harder to pull out design issues in order to regain performance.

There are many techniques for monitoring and improving performance, but we shall limit ourselves here to certain tips that help fine-tune the database.

Row versioning-based isolation levels
SQL Server 2005 introduces a feature called row-level versioning (RLV), which allows effective management of concurrent access to data while maintaining its consistency. An isolation level determines the extent to which modified data is isolated from other transactions. Row versioning benefits data access across isolation levels because it eliminates locks on read operations, improving read concurrency: under row-versioning isolation levels, read operations take no shared locks on the data, so they do not block other requests accessing the same data, and locking resources are minimized. Write operations, on the other hand, still cannot modify the same data at the same time.

Triggers that fire on INSERT and DELETE operations work against row versions, so triggers that modify data benefit from RLV. The rows of a result set are versioned when an INSERT, UPDATE or DELETE statement has modified the data before it is accessed with a SELECT statement.

Transactions affect data most when you perform CRUD operations, and they may be executed in batches, with many requests operating on a single row or a row set. When a transaction modifies a row value, the previously committed row value is stored as a version in tempdb.

With the READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION options set to ON, logical copies are kept of the data modified by transactions, and a transaction sequence number is assigned to every transaction that operates on data using row-level versioning. The transaction sequence number is incremented each time a BEGIN TRANSACTION statement is executed.

Changes to a row are therefore marked with transaction sequence numbers (TSNs), and the TSNs are linked to the newer rows residing in the current database. The TSNs are monitored periodically, and the least-used numbers are deleted from time to time; it is up to the database to decide how long the row versions are kept in tempdb.

The READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION options must be turned on for the READ COMMITTED and SNAPSHOT isolation levels to make use of the RLV system. The read committed isolation level supports distributed transactions, unlike snapshot, which does not. The temporary database tempdb is used extensively by SQL Server to store its temporary result sets, and all the versions are stored there; once tempdb has exceeded its maximum space, update operations stop generating versions. Applications that use the read committed level do not need to be refactored to enable RLV, and it consumes less tempdb storage space; for these reasons, the read committed isolation level is preferred over snapshot isolation.

Row-level versioning helps in situations where an application performs a lot of insert and update operations while a bunch of reports read the same data in parallel. It can also prove beneficial if your server experiences relatively many deadlocks. Also, systems performing mathematical computations that require precision benefit from the consistent view of committed data that RLV provides.
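
To make this concrete, here is a minimal sketch of using snapshot isolation from ADO.NET (the database name and connection string are illustrative, and the option is assumed to have been enabled once with ALTER DATABASE):

using System;
using System.Data;
using System.Data.SqlClient;

class SnapshotDemo
{
    static void Main()
    {
        // Assumes this was run once beforehand:
        //   ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=SSPI"))
        {
            conn.Open();
            // Reads under SNAPSHOT isolation see the last committed row
            // versions from tempdb and take no shared locks.
            using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Snapshot))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM HumanResources.Employee", conn, tx))
            {
                Console.WriteLine(cmd.ExecuteScalar());
                tx.Commit();
            }
        }
    }
}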

Error handling
Error handling was a pretty tough job in the earlier versions of SQL Server. Developers had to perform a lot of conditional checks on the error code returned after each INSERT, UPDATE or DELETE operation, testing @@ERROR everywhere an error might occur. Error messages can be generated by SQL Server or thrown explicitly by the user. Let us first see how developers usually performed error handling in SQL Server 2000, using a stored procedure against the AdventureWorks database for demonstration.

Listing 2
CREATE PROCEDURE ErrorHandlerDemo
AS
BEGIN
  DECLARE @EmpID AS BIGINT
  DECLARE @Err AS INT
  BEGIN TRANSACTION
  INSERT INTO [HumanResources].[Employee] ([NationalIDNumber], [ContactID],
    [LoginID], [ManagerID], [Title], [BirthDate], [MaritalStatus], [Gender],
    [HireDate], [SalariedFlag], [VacationHours], [SickLeaveHours], [CurrentFlag],
    [rowguid], [ModifiedDate])
  VALUES ('1441784507', 120439, 'adventure-works\guy43', 13456,
    'Production Technician - WC60', '19720515', 'M', 'M', '19960731', 0, 21, 30, 1,
    'AAE1D04A-C237-4974-B4D5-735247737718', '20040731')
  SET @Err = @@ERROR
  IF @Err != 0 GOTO ERROR_HANDLER

  SET @EmpID = @@IDENTITY

  INSERT INTO [HumanResources].[EmployeeAddress] ([EmployeeID], [AddressID],
    [rowguid], [ModifiedDate])
  VALUES (@EmpID, 61, '77253AEF-8883-4E76-97AA-7B7DAC21A2CD',
    '20041013 11:15:06.967')
  SET @Err = @@ERROR
  IF @Err != 0 GOTO ERROR_HANDLER

  COMMIT TRANSACTION
  RETURN 0  -- without this, execution falls through into the error handler

ERROR_HANDLER:
  ROLLBACK TRANSACTION
  RETURN @Err  -- @@ERROR itself is already reset to 0 by this point
END
GO

When we try to execute this procedure, we get the following output:

Listing 3
Msg 547, Level 16, State 0, Procedure ErrorHandlerDemo, Line 11
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_Employee_Contact_ContactID". The conflict occurred in database "AdventureWorks", table "Person.Contact", column "ContactID".

The statement has been terminated.

As you can see, the error message has a Msg number, a severity Level, a State and a Line. "Msg" holds the error number generated for the message, in this case 547. All the error messages are defined in sys.messages. If you need custom error messages, you can add them with the sp_addmessage system procedure.

Next is the severity "Level" of the message. Severity codes lie in the range 0 to 25. Any error above severity 20 terminates the connection; severities 17 to 20 indicate a resource problem; 11 to 16 indicate errors in the T-SQL scripts; and severities below 11 are warnings.

Next is the "State" of the error message, an arbitrary integer between 0 and 127 that provides information on the source that issued the error. There is not much documentation disclosed by Microsoft about it.

Next is the "Line" number, which tells us where the error occurred in the procedure or T-SQL batch. The last part is the message text itself.

Error handling in versions before SQL Server 2005 worked well enough, but it came with a lot of housekeeping. SQL Server 2005 provides a more flexible error handling mechanism: TRY and CATCH blocks.

The syntax looks like:

Listing 4
BEGIN TRY
  BEGIN TRANSACTION
  -- perform insert, update and delete statements
  COMMIT TRANSACTION
END TRY
BEGIN CATCH
  ROLLBACK TRANSACTION
  PRINT ERROR_NUMBER()
  PRINT ERROR_SEVERITY()
  PRINT ERROR_STATE()
  PRINT ERROR_PROCEDURE()
  PRINT ERROR_LINE()
  PRINT ERROR_MESSAGE()
END CATCH

As you can see from the script above, the error handling mechanism is much simpler. When an error occurs, execution leaves the current statement and enters the CATCH block. The functions after the PRINT statements are built-in functions that return information about the error message. You can also wrap the CATCH-block code in a stored procedure and call it wherever you need it, and you can log the error messages to a table for debugging purposes. The AdventureWorks database handles errors in a similar manner; see its uspLogError and uspPrintError procedures.

You can also use RAISERROR to define your own custom error messages. RAISERROR takes a system error code or a user-defined error code and fires it from the server to the connected application, or into the surrounding TRY..CATCH block. An example of using RAISERROR:

Listing 5
RAISERROR (
-- Message text.
'A serious error has terminated the program. Error message is %s, Error code is %d.',
-- Severity
10,
-- State
1,
-- Argument 1.
'...statement conflicted...',
-- Argument 2.
52000);

The output of this RAISERROR looks like:

A serious error has terminated the program. Error message is ...statement conflicted..., Error code is 52000.

The next time you work with T-SQL code, you need not implement numerous checks for errors. The TRY..CATCH feature offers a better approach to error handling that minimizes the size of your code and improves its readability.
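
On the client side, errors raised by the server surface in ADO.NET as SqlException, whose properties mirror the Msg/Level/State/Line fields described above. A minimal sketch (the connection string and command are illustrative):

using System;
using System.Data.SqlClient;

class ErrorDemo
{
    static void Main()
    {
        try
        {
            using (SqlConnection conn = new SqlConnection(
                "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=SSPI"))
            {
                conn.Open();
                new SqlCommand("EXEC ErrorHandlerDemo", conn).ExecuteNonQuery();
            }
        }
        catch (SqlException ex)
        {
            // Number/Class/State/LineNumber correspond to
            // Msg / Level / State / Line in the server message.
            Console.WriteLine("Msg {0}, Level {1}, State {2}, Line {3}: {4}",
                ex.Number, ex.Class, ex.State, ex.LineNumber, ex.Message);
        }
    }
}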

Efficient Concurrency Control
Concurrency can be defined as the ability of multiple sessions to access shared data at the same time. Concurrency comes into the picture when a request trying to read data prevents other requests from changing the same data, or vice versa.

The row-level versioning discussed above enables this kind of concurrent access automatically, with no additional application logic required. Any relational database supports multiple simultaneous connections, and the job of arbitrating concurrent requests is usually handled by the server: SQL Server internally takes care of blocking issues between two or more processes. But sometimes it is necessary to take over part of the control of concurrent access to keep the balance between data consistency and concurrency.

There are two kinds of concurrency control: optimistic and pessimistic. SQL Server uses a pessimistic concurrency model by default, so out of the box other transactions cannot read data until the current session commits; the writer blocks them. Locking is a good choice for many of today's database systems, but it can also introduce blocking issues: if results must be based only on committed data, the only option is to wait until changes are committed.

Put simply, under pessimistic concurrency control the system is pessimistic: it assumes that a conflict will arise whenever a read is requested against data another user is modifying, so locks are imposed to block access to data in use by another session.

Optimistic concurrency, by contrast, works on the assumption that a request can safely read data that another request is currently modifying; this is where row-level versioning comes in, checking the row's version before handing out the modified data.

Best Practices to handle Queries
One common mistake developers make is executing T-SQL statements directly from the application; worse still, performance can degrade further when operator combinations such as LIKE and NOT LIKE are used in those statements. It is always good practice to use stored procedures rather than stuffing queries into your application or web page. Stored procedures help performance because their plans are compiled once and reused, as sketched below.
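
A minimal sketch of calling a stored procedure with parameters from ADO.NET instead of building an ad-hoc SQL string (the procedure name, parameter and connection string are illustrative):

using System;
using System.Data;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand("dbo.GetEmployeeByID", conn))
        {
            // Call the procedure instead of concatenating SQL text.
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@EmployeeID", SqlDbType.Int).Value = 42;
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]);
            }
        }
    }
}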

String operations are often costly, so use them minimally and never in a JOIN condition. Using implicit or explicit functions in the WHERE clause can likewise hurt, since a function applied to a column typically prevents the server from seeking on an index. Putting complex business logic in triggers is yet another performance issue. When you work with transactions, always choose isolation levels deliberately; proper use of isolation levels helps reduce locking and avoids dirty reads and writes.

If possible, avoid CURSORs. One alternative is to use temporary tables with WHILE loops, breaking complex queries into several temporary tables and joining them later. Also, when working with large tables, select only the rows and columns needed in the result set; unnecessary columns and rows congest the network, which is, again, a performance bottleneck.

Index Considerations
Create indexes only when they are really required, because SQL Server needs to arrange and maintain records for each index you define. To make sure you are creating them for the right purpose, create indexes on columns used in WHERE conditions and in ORDER BY, GROUP BY and DISTINCT clauses; indexes that are never used just add overhead. It is also recommended to keep clustered index keys small and to define a data range for the clustered indexes you maintain. Once you define a column as a foreign key, it is good practice to create an index on it. You can use the Index Tuning Wizard to check index performance, and be sure to remove unused indexes.

Best Practices in handling Transactions
It is advisable not to have transactions that run for a long time. If a transaction must return a large amount of data to the client, do that at the end of the transaction. Transactions that wait on user input to commit are another drag on performance; ensure that explicit transactions are either committed or rolled back at some point. You will also see a performance boost if resources are always accessed in the same order, since that helps avoid deadlocks. Proper use of isolation levels helps minimize locking.

Efficient Design Considerations
The way you design your database greatly impacts SQL Server performance. When designing tables, always use the proper data types for columns; if your data holds very large chunks of characters, you can go with a text data type. Check that proper primary and foreign key relationships are defined across tables. Make a practice of normalizing your database first and denormalizing only afterwards, where it measurably improves performance; indexed views can serve this denormalization purpose. Analysis jobs usually consume a lot of system resources, so it is recommended to use separate servers for analysis and transaction processing.

Best practices in using Stored Procedures
Do not prefix your stored procedures with sp_. Microsoft ships system procedures prefixed with sp_, so if you use that prefix, SQL Server first searches the master database for the procedure and only then your application database. Again, this is a bottleneck.

Always use exception handling if you are working with transaction based procedures. Proper error handling ensures security and provides a better approach of what to do when an unexpected error occurs.

If your client application does not need the rows-affected count for an operation, use SET NOCOUNT ON in your stored procedures. Without it, the number of rows affected is sent to the client (ADO/ADO.NET), which then surfaces it through the command or connection objects; this causes extra overhead on both client and server.

Efficient Execution Considerations
When examining how your T-SQL executes, avoid index and table scans; aim for index seeks instead. Also evaluate the hash joins, filters, sorts and bookmark lookups that appear in the plan; observing the execution plan lets you make better decisions about execution strategy. When working with dynamic SQL, things may not behave as anticipated at execution time. Although building queries dynamically is generally not a great idea, in some cases it reduces code when the SQL expression depends on many decisions; in such cases, always use sp_executesql. Also, when writing stored procedures, it is advised not to mix DML and DDL statements. A related point for client code is sketched below.
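
Incidentally, a parameterized text query sent from ADO.NET is wrapped by the SqlClient provider in a call to sp_executesql, which lets the server cache and reuse one plan across parameter values. A minimal sketch (the query, column names and connection string are illustrative):

using System;
using System.Data;
using System.Data.SqlClient;

class DynamicSqlDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Title FROM HumanResources.Employee WHERE ManagerID = @ManagerID",
            conn))
        {
            // The parameter keeps the SQL text constant, so the server
            // reuses one cached plan instead of compiling per value.
            cmd.Parameters.Add("@ManagerID", SqlDbType.Int).Value = 13456;
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}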

Best practices in deployment
Set your database size initially instead of letting it grow automatically. To minimize disk reads and writes, you can place the log file and tempdb on devices separate from the data. Consider a RAID configuration with multiple disk controllers if the database performs large data warehouse operations. Give the server adequate memory and defragment or rebuild indexes as needed. You can use the automatic database shrink option to manage unwanted space, but in general it is recommended that you keep the default server configuration for your application.

Some of the common mistakes that are usually noticed and should be avoided include:

Using GUIDs where they are not necessary, and using GUIDs as primary keys. A clustered-index GUID primary key makes every row bigger, and it degrades performance further by making every non-clustered index bigger too.
Not being rational about data type usage, for example using larger data types where smaller ones would suffice.
Not paying attention to missing or unused indexes, and poor decisions about where clustered versus non-clustered indexes should be used.
Carrying unused columns and rows, which causes excessive data density.
Ignoring the execution plan while writing queries; each unexamined query adds extra processing time.
Conclusion
This article looked at some of the advanced concepts of SQL Server 2005, such as CLR integration, row versioning-based isolation levels, TRY...CATCH error handling and concurrency control, along with techniques for improving database performance.

    Introduction
    Object Pooling is nothing new. It is a concept implying that we can keep a pool of objects in memory for later reuse and hence greatly reduce the load of object creation. An Object Pool, also known as a Resource Pool, is a list or set of ready-to-use, reusable objects that reduce the overhead of creating each object from scratch whenever a request for object creation comes in. This is somewhat similar to how a Connection Pool functions, but with some distinct differences. This article throws light on this concept (Object Pooling) and discusses how we can implement a simple generic Object Pool in .NET.

    What is an Object Pool?
    An Object Pool may be defined as a container of objects that are ready for use. Lists of ready-to-be-used objects are contained in this pool. Whenever a new request for an object creation comes in, the request is served by allocating an object from the pool. Therefore, it reduces the overhead of creating and re-creating objects each time an object creation is required. "An object pool is an object that holds a list of other objects, ready to make them available for use (to yet another object, probably). It does the management work involved, like keeping track of which objects are currently in use, how many objects the pool holds, whether this number should be increased."

    Why is an Object Pool required?
    The biggest advantage of using Object Pooling is that it minimizes the consumption of memory and the system's resources by recycling and re-using objects as and when it is needed and serving the request for new objects from the pool of ready-to-be-used objects. The objects that the application is done with (the objects are no longer needed) are sent back to the pool rather than destroying them from the memory. According to MSDN, "Once an application is up and running, memory utilization is affected by the number and size of objects the system requires. Object pooling reduces the number of allocations, and therefore the number of garbage collections, required by an application. Pooling is quite simple: an object is reused instead of allowing it to be reclaimed by the garbage collector. Objects are stored in some type of list or array called the pool, and handed out to the client on request. This is especially useful when an instance of an object is repeatedly used, or if the object has an expensive initialization aspect to its construction such that it's better to reuse an existing instance than to dispose of an existing one and to create a completely new one from scratch."

    How does an Object Pool work?
    When an object is requested, it is served from the pool; when the object is disposed, it is placed back into the pool to await the next request that might come in at a later point in time. The pool initially consists of a number of objects. When a request for the creation of an object comes in, it is served from the pool, and the number of available objects in the pool decreases by one. This process continues until the pool runs out of objects. The pool remains in memory as long as there is at least one object in it. The pool facilitates reuse and eliminates the overhead involved in creating objects each time they are requested. The following section discusses how an Object Pool (though somewhat similar to a Connection Pool) differs from a Connection Pool. You can find my article on Connection Pooling here.

    How do Object Pooling and Connection Pooling differ?
    There are distinct differences between Object pooling and Connection Pooling. Object Pooling is great in the sense that it can optimize access to expensive resources (like file handles or network connections) by pooling them in memory and reusing them as and when they are needed. According to MSDN, "Object pooling lets you control the number of connections you use, as opposed to connection pooling, where you control the maximum number reached."

    Implementing an Object Pool in C#
    We will design the object pool around some predefined goals and objectives, stated below.

    Ease of use and reusable
    Thread Safe
    Type Safe
    Scalable
    Configurable
    These are the basic objectives the pool should adhere to. With these in mind, we will now implement a simple Object Pool in C# and use it for the creation, usage and destruction of objects.

    The Pool Manager Class

    The following code example illustrates how an object pool can be created. I have provided enough comments to help the reader understand how the Pool Manager class works. The class is based on the Singleton pattern, i.e., at any point in time there can be only one instance of it.

    Listing 1
    using System;
    using System.ComponentModel;
    using System.Collections;
    using System.Threading;

    namespace ObjectPooling
    {
        /// <summary>
        /// A class to manage objects in a pool.
        /// The class is sealed to prevent further inheritance
        /// and is based on the Singleton design.
        /// </summary>
        public sealed class PoolManager
        {
            private Queue poolQueue = new Queue();
            private Hashtable objPool = new Hashtable();
            private static readonly object objLock = new object();
            private const int POOL_SIZE = 10;
            private int objCount = 0;
            private static PoolManager poolInstance = null;

            /// <summary>
            /// Private constructor to prevent instantiation.
            /// </summary>
            private PoolManager()
            {
            }

            /// <summary>
            /// Static constructor that gets called only once
            /// during the application's lifetime.
            /// </summary>
            static PoolManager()
            {
                poolInstance = new PoolManager();
            }

            /// <summary>
            /// Static property to retrieve the instance of the Pool Manager.
            /// </summary>
            public static PoolManager Instance
            {
                get
                {
                    return poolInstance;
                }
            }

            /// <summary>
            /// Creates POOL_SIZE objects of the same type as the supplied
            /// object and adds them to the pool. (The body of this method
            /// was garbled in the original posting; this is a reconstruction
            /// of its evident intent.)
            /// </summary>
            /// <param name="obj">An instance of the object type to pool</param>
            public void CreateObjects(object obj)
            {
                objCount = 0;
                poolQueue.Clear();
                objPool.Clear();

                for (int objCtr = 0; objCtr < POOL_SIZE; objCtr++)
                {
                    object newObj = Activator.CreateInstance(obj.GetType());
                    AddObject(newObj);
                }
            }

            /// <summary>
            /// Adds a single object to the pool, keyed by its hash code.
            /// (Also reconstructed.)
            /// </summary>
            /// <param name="obj">The object to add to the pool</param>
            public void AddObject(object obj)
            {
                lock (objLock)
                {
                    objPool.Add(obj.GetHashCode(), obj);
                    poolQueue.Enqueue(obj);
                    objCount++;
                }
            }

            /// <summary>
            /// Retrieves the next available object from the pool,
            /// or null if the pool is empty. (Also reconstructed.)
            /// </summary>
            public object ReleaseObject()
            {
                lock (objLock)
                {
                    if (poolQueue.Count > 0)
                    {
                        object obj = poolQueue.Dequeue();
                        objPool.Remove(obj.GetHashCode());
                        objCount--;
                        return obj;
                    }
                }
                return null;
            }

            /// <summary>
            /// Releases an object from the pool.
            /// </summary>
            /// <param name="obj">Object to remove from the pool</param>
            /// <returns>The object if success, null otherwise</returns>
            public object ReleaseObject(object obj)
            {
                if (objCount == 0)
                    return null;

                lock (objLock)
                {
                    objPool.Remove(obj.GetHashCode());
                    objCount--;
                    RePopulate();
                    return obj;
                }
            }

            /// <summary>
            /// Repopulates the queue after an object has been removed from
            /// the pool, keeping the queue in sync with the objects in the
            /// hash table.
            /// </summary>
            private void RePopulate()
            {
                if (poolQueue.Count > 0)
                    poolQueue.Clear();

                foreach (int key in objPool.Keys)
                {
                    poolQueue.Enqueue(objPool[key]);
                }
            }

            /// <summary>
            /// The current number of objects in the pool.
            /// </summary>
            public int CurrentObjectsInPool
            {
                get
                {
                    return objCount;
                }
            }

            /// <summary>
            /// The maximum number of objects the pool can hold.
            /// </summary>
            public int MaxObjectsInPool
            {
                get
                {
                    return POOL_SIZE;
                }
            }
        }
    }

    Using the Pool Manager Class

    The following code snippets in Listings 2, 3 and 4 show how we can use the PoolManager created in Listing 1.

    Listing 2
    object obj1 = new object();
    object obj2 = new object();
    PoolManager poolManager = PoolManager.Instance;
    poolManager.AddObject(obj1);
    poolManager.AddObject(obj2);
    MessageBox.Show(poolManager.CurrentObjectsInPool.ToString());
    poolManager.ReleaseObject(obj1);
    MessageBox.Show(poolManager.CurrentObjectsInPool.ToString());

    Listing 3
    // poolManager and the remaining pooled object carry over from Listing 2
    object obj = null;
    for (;;)
    {
        obj = poolManager.ReleaseObject();
        if (obj != null)
            MessageBox.Show(obj.GetHashCode().ToString());
        else
        {
            MessageBox.Show("No more objects in the pool");
            break;
        }
    }

    Listing 4
    PoolManager poolManager = PoolManager.Instance;
    ArrayList arr = new ArrayList();
    poolManager.CreateObjects(arr);

    object obj = poolManager.ReleaseObject();
    MessageBox.Show("No of objects in Pool is: " +
        poolManager.CurrentObjectsInPool.ToString(),
        "The hash code of the released object is: " +
        obj.GetHashCode().ToString());

    obj = poolManager.ReleaseObject();
    MessageBox.Show("No of objects in Pool is: " +
        poolManager.CurrentObjectsInPool.ToString(),
        "The hash code of the released object is: " +
        obj.GetHashCode().ToString());

    Note how we have used the PoolManager in the code listings above. The CreateObjects method is used in Listing 4 to create a specified number of objects of the ArrayList type and store them in the pool. However, the major drawback of this design is the boxing, unboxing and casting overhead involved in storing objects in the pool and retrieving them from it. To eliminate this, I would recommend using Generics. Further, the size of the pool (the maximum number of objects it can contain) is fixed and not configurable.
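
    As a rough illustration of that suggestion, here is a minimal generic reworking of the same idea (a sketch, not a drop-in replacement for Listing 1; the class name and API are illustrative). Generics remove the casting and boxing overhead of the object-based pool, and the pool size becomes a constructor argument:

    using System.Collections.Generic;

    // A generic, thread-safe object pool. T must have a public
    // parameterless constructor so the pool can pre-create instances.
    public sealed class Pool<T> where T : new()
    {
        private readonly Queue<T> items = new Queue<T>();
        private readonly object sync = new object();
        private readonly int maxSize;

        public Pool(int maxSize)
        {
            this.maxSize = maxSize;
            // Pre-populate the pool, as PoolManager.CreateObjects does.
            for (int i = 0; i < maxSize; i++)
                items.Enqueue(new T());
        }

        // Takes an item from the pool, or creates a fresh one if empty.
        public T Acquire()
        {
            lock (sync)
            {
                return items.Count > 0 ? items.Dequeue() : new T();
            }
        }

        // Returns an item to the pool unless the pool is already full.
        public void Release(T item)
        {
            lock (sync)
            {
                if (items.Count < maxSize)
                    items.Enqueue(item);
            }
        }
    }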

Monday, March 3, 2008

LINQ sample from C#.NET 3.0

LINQ to Objects allows you to use LINQ queries with any object that supports IEnumerable or IEnumerable<T>. This means you can use LINQ to access data in arrays, lists, dictionaries and other collections. You can also use LINQ with any of your own classes that implement or inherit IEnumerable. Let me start with a very simple introduction using an array of integers. The following example uses LINQ to find all of the even integers in an array:

// Array of data
int[] Data = { 17, 32, 51, 98, 87, 4, 63, 26, 75, 40 };

// Create the variable to store the query
var EvenNumbers = from Num in Data
where Num % 2 == 0
select Num;

// Run the query and output the results
foreach (int i in EvenNumbers)
Debug.Print(i.ToString());

Data is simply an integer array. The only important thing about it is that arrays implement IEnumerable, so LINQ can be used with them. The statement of interest in this example is the definition of EvenNumbers. The from clause defines a subset of the elements in Data; Num is called the range variable and represents each element from the data source. The where clause includes only those numbers that are evenly divisible by 2, eliminating all odd numbers. The select clause indicates what information about each matching element should be included; in this case, the number itself is selected.

One point to note is that at this point the query has not been executed: EvenNumbers is simply a definition of the query. The query is not actually processed until it is used in the foreach loop that follows, which iterates over the EvenNumbers query and prints each included element. The result is the following:

32
98
4
26
40
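
Because EvenNumbers is only a definition, changes made to the source data before the foreach are reflected in the output. A quick way to see this deferred execution in action (a small variation on the example above, assuming the same usings for System.Linq and System.Diagnostics):

// Array of data
int[] Data = { 17, 32, 51, 98 };

// Define the query -- nothing runs yet
var EvenNumbers = from Num in Data
                  where Num % 2 == 0
                  select Num;

// Modify the source *after* defining the query
Data[0] = 10;

// The query executes here, so the new value shows up: 10, 32, 98
foreach (int i in EvenNumbers)
    Debug.Print(i.ToString());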

I hope this whets your appetite for what LINQ can do and will help you to think about where you can use LINQ in your application. LINQ allows you to use a SQL-like syntax for working with your data outside of the database.

Thursday, February 28, 2008

ODBC Secrets

What is ODBC?

ODBC stands for Open Database Connectivity and is a Microsoft specification. ODBC provides a standard way of defining data sources and their data access methods. ODBC is designed around SQL and relational databases, but nothing prevents translating the incoming SQL to another language.

The ODBC specification defines low-level API calls that any application can make for database queries. By writing calls to the API, a report writer or other tool can portably access heterogeneous data sources with one set of source code.
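
A minimal sketch of that portability from .NET, where the same code runs against any configured data source (the DSN name "ProgressDB" and the table are illustrative):

using System;
using System.Data.Odbc;

class OdbcDemo
{
    static void Main()
    {
        // Only the connection string ties this code to a particular
        // driver; swapping the DSN swaps the underlying database.
        using (OdbcConnection conn = new OdbcConnection("DSN=ProgressDB"))
        {
            conn.Open();
            OdbcCommand cmd = new OdbcCommand(
                "SELECT CustName FROM Customer", conn);
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}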

Architectures

There are two basic architectures employed by driver makers: single tier and multiple tier. Intersolv's DataDirect ODBC, OpenLink Lite, and the Progress ODBC Driver are single tier drivers, while the rest are all multiple tier.
Single Tier
Single tier architectures use the driver itself to process the SQL query, implying PC-side resolution. The driver connects to the database, sends SQL to it, does any additional record selection or joining, and then passes the result to the application. Driver connections for Progress require a client product such as Client Networking, 4GL Client or ProVision to connect to the database, or use its own network protocol for a remote database. The Progress client is responsible for getting the records to the driver, which does the rest of the work. Starting in Version 8, executables separate from the full Client Networking product are shipped for establishing this connection; this smaller client is referred to as the Open Interface Driver and is combined with the Open Interface Broker for multi-user situations.
Multiple Tier
In multiple tier architectures, queries are offloaded by the driver to another application. This secondary layer is generally a networking program that talks to a server-side component. The server receives SQL requests from multiple network connections, resolves each request through interaction with the database, and returns the data to the PC's secondary layer, which must still pass the final results to the driver. While it is not required, almost all multiple tier implementations make direct connections from the server to the database. Server-side execution generally provides better performance, since only selected records get passed to the client PC. Under Progress Client Networking, records are sometimes passed to the PC for selection, increasing network traffic; the specific circumstances are version specific, but joins, for example, are resolved by the client PC under all current versions of Progress.
JDBC and ODBC to JDBC Bridges

If you're programming in Java, there is a standard similar to ODBC for Java. JDBC drivers are also available for Progress. They come in two different types: true JDBC drivers and ODBC-to-JDBC bridges.

With true JDBC, your Java program makes a connection to a database over a URL. ODBC bridges work the same way from the program's side but connect locally through the PC's ODBC driver; the database can still be remote, only the ODBC side must be set up locally.

23 IE 5 Secrets

1.) Do animated graphics and banner ads distract you from your surfing experience? Once the page loads, just press Esc and presto, everything faintly flashing comes to a grinding halt. This might not work for Java applets and Flash animations.

2.) Finding your 14/15-inch monitor too cluttered, with toolbars, banner ads and taskbars overshadowing the precious real estate of the webpage? Simple. Press F11 (for full screen), then go to Start - Settings - Taskbar and click on Autohide. Presto, your webpage blows up, surfing becomes more fun, and your 14-inch monitor turns into a 20-inch one, well, almost. You'll find reading the webpage a much more pleasurable experience. Plus: right-click on the IE 5 toolbar and choose Customize. In the dropdown menus, remove the text labels and select small icons instead of the default large ones. Do all this and your browser window opens right up; surfing suddenly becomes a pleasure.

3.) Just want content and not the gimmicks? Go to Tools - Internet Options - Advanced and turn off pictures, sounds, animations and videos, and couple it with powerful ad blocking/filtering software like Naviscope, where you can block out backgrounds, JavaScript, just about anything. Ah, clean, pure content at last.

4.) Use keyboard shortcuts; they are faster, simpler and make you look like a surfing pro. Our favorites: F5 - Refresh; Alt+D - Address bar; F4 - Show typed addresses; Ctrl+W - Close current window; Esc - Stop loading a page; F1 - Help; Alt+Back arrow - Previous webpage; F11 - Full screen.

5.) Go to Tools - Internet Options and increase the Temporary Internet Files setting to as much as you can (don't worry if you have a 4.5 GB hard disk).

6.) Removing MS IE as the default Internet browser. Whenever you install a Web browser, it begs to become your default browser, and it repeats this plea until you make a decision. What if, after using one browser for a while, you want to switch? You can change your mind, and it's much easier to do in Internet Explorer than in any version of Navigator. In IE 3.x and 4.x, select View, Internet Options, click the Programs tab, check the box labeled "Internet Explorer should check to see whether it is the default browser," and then click OK.

7.) Install the IE Web tools, wallpaper and lots of fun things (http://www.microsoft.com/windows/ie/): Microsoft Web Developer Accessories, toolbar wallpaper, Explorer bars and lots more. Why doesn't Microsoft ship these goodies along with the browser, or integrate them into it?

8.) Change the boring toolbar wallpaper (www.hotbar.com). Customize the toolbar the way you want: change the look and feel with toolbar skins, or right-click on the toolbar and choose Customize. Add or remove buttons, shortcuts and separators; do just about anything, and keep only the buttons you need on the toolbar.

9.) Microsoft makes Windows 98 crawl with its browser integration. Though it gives you more features, you trade off performance. Don't believe Microsoft when they say you can't separate IE 5 from Windows; even we couldn't at first. To the rescue comes a nifty 113 KB ripper: download it and break away from IE. IE-off.exe is what we recommend.

10.) Taking printouts of webpages is a big pain, with all the banner ads, links, frames, tables and so on, right? Just select the text (with images) you want on the webpage by highlighting it, right-click on the selected text, choose Print, tick "Selection" in the print-range options, and print. You get just what you want; it saves paper and your precious printer cartridge.

11.) A good way to keep track of your printed Web pages is to include date and time information in the header and footer. To do this, choose File, Page Setup. When the Page Setup dialog box opens, you can enter the information codes that you want to use in the Header and Footer boxes. Let's say you want the date and time in the footer: click the Footer entry box and type &t &d into the field. We suggest using at least two spaces between &t and &d to separate the time and date on the printout. After you enter the codes, click OK to close the dialog box and save your changes. These changes remain in effect until you change them again; other codes can be added to your Header and Footer entries for better control over your printouts.

12.) IE 5 comes with some default Favorites folders, which we don't think are much good. Before you start surfing and piling up hundreds of favorites, create favorites folders based on your interests: say Webmail, Fun, Personal, Business, Movies, Shopping and maybe an Etc folder. Depending on the webpage you're viewing, add it to the corresponding folder, so you don't have to organize them all later.

13.) Do you use several machines for surfing, or have a notebook, and have a problem trying to use the same bookmarks and cookies everywhere? We have a solution at hand. Get a floppy, insert it into your drive and go to File - Import/Export; follow the wizard and copy (Export) all your favorites and cookies, or only a selected few, onto the floppy (they all fit, don't worry), and carry the floppy around. No matter which machine you surf from, just insert the floppy, pick up (Import) the cookies/favorites you need and get your work done. Also check out www.backflip.com or www.blink.com, neat new services for storing your favorites online. Whether you surf from a cyber cafe, your office PC, home PC or notebook, all your IE treasures stay intact.

14.) Dozens of bugs, some of them potent, are reported in Internet Explorer every month. Click on Tools - Windows Update and install the necessary ActiveX updating controls, update your browser at regular intervals, and check out sites like bugnet.com and ZDNet's updates for the latest patches.

15.) Tools - Internet Options - Content - AutoComplete takes care of filling in those boring forms. Alternatively, check out Gator (www.gator.com), software that does about the same job with more features; also look at the Profile Assistant and Microsoft Wallet, which can help while shopping. We recommend you log into www.passport.com (a Microsoft venture) and help yourself.

16.) Surfing through an interesting page and want to share it with friends? You don't need to save or copy the page and then open Outlook Express to send it; that's time consuming. Instead use File - Send - Page by E-mail (or Send Link) and IE 5 will take care of sending it. Just make sure your email client is properly configured. Or if you want to save the webpage but the PC isn't yours and you don't have a floppy, just email the page to your webmail account and pick up the webpage or URL from there, or send it to one of those free online hard disk providers like I-drive.com or driveway.com.

17.) Install Netscape 4.7; it's pretty good and a nice alternative. Or try the lightweight Opera 3.6, or better still Neoplanet 5.1, which runs on the IE 5 engine, looks cool and stylish with its skins, and does all the things IE 5 does without looking sober and boring. You can also check out NetCaptor, which also runs on the IE engine and offers additional neat features. Wake up, there are lots of good, if not better, alternatives out there.

18.) Any browser crash can make Windows unstable. The severity ranges from an app going belly-up to Microsoft's dreaded Blue Screen of Death. If the culprit is Navigator, you can bring up the Close Program dialog box by pressing Ctrl-Alt-Delete, selecting Netscape, and clicking End Task. But doing this can cause other open programs to fall like dominoes, so you might want to bite the bullet, close all running apps, and restart. IE 4 offers another option. Do it before you hit your next crash, and you'll save yourself serious headaches. Click View, Internet Options, click the Advanced tab and check "Browse in a new process." This makes IE handle Web browsing as a task separate from other system functions, so the next time IE 4 hits an iceberg, it shouldn't take Windows down with it.

20.) Don't want others to know where you've been surfing? Remove Favorites from the Start menu using a tweaking tool like Xteq Systems, and use a browser washer to clear the cookies, history and other traces of your nefarious surfing activities. Every time you visit a website, Internet Explorer stores history and cache information in files with the .dat extension; the more data these files have to store, the bigger they get. Though Microsoft won't cop to it, clearing the Cache or History folders in IE 3.x and 4.x doesn't always return these files to their original default size of 8KB, 16KB, or 32KB. You can see for yourself by opening a DOS prompt (select Start, Programs, MS-DOS Prompt), navigating to the directory where your cache or history resides (c:\windows\tempor~1 or c:\windows\history), and looking for the .dat files. If you open them, you'll see all of your "deleted" URLs. The problem? Aside from the fact that these .dat index files let snoops track where you've been surfing, IE begins to slow down when the files reach approximately 200KB, and once they reach 500KB the program starts crashing. One solution is to delete both files, but you have to do it in DOS, not Windows. Select Start, Shut Down, Restart in MS-DOS mode. At the C:\> prompt, type deltree c:\windows\history and press Enter. (In IE 4.x, this path could be c:\windows\profiles\yourname\history.) Then type deltree c:\windows\tempor~1 and press Enter. (This can take 15 minutes if the .dat files are large.) The next time you fire up your browser, both files will be rebuilt as empty .dat files. Still feeling insecure online? IE 5 alone might not do a good job: wipe out all your private data, manage file size and kill all "that" data in cache, history and .dat files with the $15 shareware program TweakIE. Its IESweep feature clears your Cache and History folders and resets the .dat files to their empty size, and it alerts you when .dat files have grown big enough to affect your browser's performance.

21.) Internet Explorer serves as an explorer for both the Internet and Windows; you can do just about anything from the IE 5 address bar. For starters:
- Send an e-mail message: type mailto: followed by the address, for example mailto:firstname_lastname@pcworld.com. Even Netscape Navigator does this.
- View your desktop: type desktop.
- Open My Computer: type my computer, or the name you gave to My Computer.
- Start a DOS prompt: type c:\command.com.
- Open a folder: type its path name, for example c:\text. You'll get a directory listing of the folder. You can open some files from here, too; and by clicking the down arrow, you can find and launch commands you've recently typed. This also works in Navigator.
You can also speed your way to Web and FTP sites by clicking the Windows Start button and selecting the Run command. In the dialog box that appears, type the URL or FTP address. (You can even type the subdirectory and name of the file you want to download, such as ftp.microsoft.com/softlib/mslfiles/rptsampl.exe.) Click OK. Windows will dial your Internet service provider, load your browser (either Netscape's or Microsoft's, whichever is the system default) and head to the site.

22.) Wacky eggs, Easter eggs. Whodunit? Want to see who created IE 5.0? Psst, this doesn't work in IE 5.01 but does in 5.0/4.0: 1. Open up Notepad. 2. Type " " (no quotes). 3. Save the file as "test.htm" (no quotes). 4. Open "test.htm" in IE 5, watch the credits roll by, and try counting the number of heads that went into IE 5. PS: the same thing works in Outlook 4.

For Outlook: 1. Click the Compose Message button and make sure that Rich Text (HTML) is ticked. 2. Click in the main body to make the formatting bar come to life. 3. Click in the font selection box, type "athena" and press Enter. 4. Go back into the main Outlook Express program and click "Outlook Express" at the top of the folders list. 5. Click an empty space on the page that appears and type about to see the names of the OE team fly onto your screen!

23.) 1. Open up IE 5. 2. From the menu, select Tools > Internet Options > General (tab) > Languages (button). 3. Press 'Add'. 4. Type "ie-ee" (without the quotes) and click 'OK'. 5. Move "User Defined [ie-ee]" to the TOP of the list. 6. Exit back to where you can browse in IE 5 again. 7. Click the Search icon (to pull up the side search menu). 8. Laugh at the new options. 9. Select 'Previous Searches'.

C#.NET 3.5 - Automatic Properties

Automatic Properties
A number of new features have been introduced in C# to help developers write simple, short code. Properties are best when we want to control or validate the values stored in fields. Often, in order to grant access to the private fields of a class, we declare corresponding public properties, like so:

public class Employee {

private string _EmpName;
private int _Salary;

public string EmpName {

get {
return _EmpName;
}
set {
_EmpName = value;
}
}

public int Salary{

get {
return _Salary;
}
set {
_Salary = value;
}
}
}

With automatic properties we need not provide the full definition of a property; the compiler generates a default implementation that simply assigns and retrieves the value. If you wish to add your own logic, automatic properties are not for you.

public class Employee {
public string EmpName { get; set; }
public int Salary{ get; set; }
}

At compile time, the code above in C#.NET 3.5 expands and generates code (in the target output file) equivalent to the first, more verbose implementation above.

So when all you need is simple assignment and retrieval of values, this feature is quite useful, as it makes the code shorter and cleaner to read. You may also apply attributes to automatic properties, e.g.:

public class Employee
{
[Obsolete("Changed to ...",true)]
public string EmpName { get; set; }
public int Salary { get; set; }
}

To make a property read-only or write-only to outside callers, add a private access modifier next to the set or get. This lets the class modify the member internally while publicly providing "read only" or "write only" access.

public class Employee
{
public string EmpName { get; private set; }
public int Salary { get; private set; }

public void method1()
{
EmpName = "Jignesh"; //allowed
Salary = 100000; //allowed
}
}

private void Form1_Load(object sender, EventArgs e)
{
Employee x = new Employee();
x.Salary = 2000; // Not allowed.
}

Block images on the page from being copied

Recently a client wanted to block the images on his pages from being copied, so we wrote a standard script to trap and block right-clicks on a page. But he did not wish to block the entire page: he wanted users to be able to copy the unique item code and a couple of other unique items on the page for correspondence. So we decided to block right-clicks only for images. This is how we injected a client script to achieve the task.

Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    ' Requires Imports System.Text for StringBuilder
    Dim s As New StringBuilder()

    ' Handler: block right-clicks (button 2) on images in IE
    s.AppendLine("function ProtectImages(e) {var msg = 'Warning: Image is copyrighted.';")
    s.AppendLine("if (navigator.appName == 'Microsoft Internet Explorer' && event.button==2){")
    s.AppendLine("alert(msg);")
    s.AppendLine("return false;}")
    s.AppendLine("else return true;}")

    ' Wire the handler up to every image on the page
    s.AppendLine("if (document.images){")
    s.AppendLine("for(i=0;i<document.images.length;i++){")
    s.AppendLine("document.images[i].onmousedown = ProtectImages;")
    s.AppendLine("document.images[i].onmouseup = ProtectImages; }}")

    ClientScript.RegisterStartupScript(Me.GetType(), "ProtectImages", s.ToString(), True)
End Sub