asp.net Tutorial

jQuery DataTables and ASP.NET Integration for a GridView (jQuery DataTables plug-in in ASP.NET using C#)
///////////////////////////////////////////////////////////////////////////////
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="GridTable.aspx.cs" Inherits="DT_Pagination.GridTable" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <%--<link href="media_ColVis/css/ColVisAlt.css" rel="stylesheet" type="text/css" />--%>
    <link href="media_ColVis/css/ColVis.css" rel="stylesheet" type="text/css" />
    <link href="media/css/TableTools.css" rel="stylesheet" type="text/css" />
    <link href="media/css/TableTools_JUI.css" rel="stylesheet" type="text/css" />
    <link href="Scripts/css/themes/smoothness/jquery-ui.css" rel="stylesheet" type="text/css" />
    <link href="Scripts/css/themes/smoothness/jquery.ui.theme.css" rel="stylesheet" type="text/css" />
    <link href="Scripts/css/jquery.dataTables_themeroller.css" rel="stylesheet" type="text/css" />
    <script src="Scripts/js/jquery.js" type="text/javascript"></script>
    <script src="Scripts/js/jquery.dataTables.min.js" type="text/javascript"></script>
    <script src="media/js/ZeroClipboard.js" type="text/javascript"></script>
    <%--<script src="media/js/TableTools.min.js" type="text/javascript"></script>--%>
    <script src="media/js/TableTools.js" type="text/javascript"></script>
    <script src="Scripts/js/jquery.dataTables.columnFilter.js" type="text/javascript"></script>
    <script src="Scripts/js/jquery-ui-1.9.2.custom.min.js" type="text/javascript"></script>
    <script src="Scripts/js/FixedHeader.js" type="text/javascript"></script>
    <script src="media_ColVis/js/ColVis.js" type="text/javascript"></script>
    <style type="text/css">
        .ui-datepicker-calendar tr, .ui-datepicker-calendar td, .ui-datepicker-calendar td a, .ui-datepicker-calendar th
        {
            font-size: inherit;
        }
        div.ui-datepicker
        {
            font-size: 10px;
        }
        .ui-datepicker-title span
        {
            font-size: 10px;
        }
        .my-style-class input[type=text]
        {
            color: green;
        }
    </style>
    <script type="text/javascript">
        var oTable;

        $(document).ready(function () {
            $.datepicker.regional[""].dateFormat = 'dd/mm/yy';
            $.datepicker.setDefaults($.datepicker.regional['']);

            // TableTools export buttons: Copy / CSV / XLS / PDF / Print plus a "Save" collection
            TableTools.DEFAULTS.aButtons = ["copy", "csv", "xls", "pdf", "print",
                {
                    "sExtends": "collection",
                    "sButtonText": "Save",
                    "aButtons": ["csv", "xls", //"pdf",
                        {
                            "sExtends": "pdf",
                            //"sPdfOrientation": "landscape",
                            "sPdfMessage": "Your custom message would go here."
                        },
                        "print"]
                }];
            //TableTools.DEFAULTS.aButtons = ["copy", "csv", "xls", "pdf"];

            /* Main Functionality */
            $('#GridView1').dataTable({
                //"oLanguage": { "sSearch": "Search the nominees:" },
                "aLengthMenu": [[10, 25, 50, 100, -1], [10, 25, 50, 100, "All"]],
                "iDisplayLength": 10,
                "aaSorting": [[0, "asc"]],
                "bJQueryUI": true,
                "bAutoWidth": false,
                "bProcessing": true,
                "sPaginationType": "full_numbers",
                "bRetrieve": true,
                // Scrolling --------------
                "sScrollY": "250px",
                "sScrollX": "100%",
                "sScrollXInner": "100%",
                "bScrollCollapse": true,
                // --- Print / Export / Copy (TableTools) ---
                "sDom": 'T<"clear"><"H"lfr>t<"F"ip>',
                //"sDom": '<"top"i><"title">lt<"bottom"pf>',
                //"sDom": '<"top"iflp<"clear">>rt<"bottom"iflp<"clear">>',
                // --- Column Visibility (ColVis) ---
                //"sDom": '<"H"Cfr>t<"F"ip>',
                //"oColVis": {
                //    "activate": "mouseover"
                //},
                // --- Dynamic language ---------
                "oLanguage": {
                    "sZeroRecords": "There are no records that match your search criteria",
                    "sLengthMenu": "Display _MENU_ records per page&nbsp;&nbsp;",
                    "sInfo": "Displaying _START_ to _END_ of _TOTAL_ records",
                    "sInfoEmpty": "Showing 0 to 0 of 0 records",
                    "sInfoFiltered": "(filtered from _MAX_ total records)",
                    "sEmptyTable": 'No Rows to Display.....!',
                    "sSearch": "Search all columns:"
                },
                /* Column sorting and searching */
                //"aoColumns": [
                //    { "bSearchable": false },                            // Disable search on column 1
                //    { "bSortable": false },                              // Disable sorting on column 2
                //    { "asSorting": ["asc"] },                            // Allow only "asc" sorting on column 2
                //    null,
                //    { "sSortDataType": "dom-text", "sType": "numeric" },
                //    { "iDataSort": 4 },                                  // Use column 4 to perform sorting
                //    null,
                //    null
                //],
                /* Column visibilities */
                //"aoColumns": [
                //    /* Sno */      { "bSearchable": false, "bVisible": false },
                //    /* Engine */   null,
                //    /* Browser */  null,
                //    /* Platform */ { "bSearchable": false, "bVisible": false },
                //    /* Version */  { "bSearchable": false, "bVisible": false },
                //    /* Grade */    null,
                //    /* Share */    null,
                //    /* Date */     null
                //],
                "oSearch": {
                    "sSearch": "",
                    "bRegex": false,
                    "bSmart": true
                },
                // ------------------------ Total in footer
                "fnFooterCallback": function TotalCalc(nRow, aaData, iStart, iEnd, aiDisplay) {
                    /* Calculate the total market share for all browsers in this table (i.e. including outside the pagination) */
                    var iTotalMarket = 0;
                    for (var i = 0; i < aaData.length; i++) {
                        //alert('Length : ' + aaData.length + ', Row No : ' + i + ', Share : ' + aaData[i][6]);
                        iTotalMarket += parseInt(aaData[i][6]);
                    }
                    /* Calculate the market share for browsers on this page */
                    var iPageMarket = 0;
                    for (var i = iStart; i < iEnd; i++) {
                        iPageMarket += parseInt(aaData[aiDisplay[i]][6]);
                        //alert('Length : ' + iStart + ', Row No : ' + i + ', Share : ' + aaData[aiDisplay[i]][6] + 'Total : ' + iPageMarket);
                    }
                    /* Modify the footer row to match what we want */
                    var nCells = nRow.getElementsByTagName('td');
                    nCells[0].innerHTML = 'Total : ' + parseInt(iPageMarket * 100) / 100 + '% (' + parseInt(iTotalMarket * 100) / 100 + '% Grand Total)';
                } // End of footer callback
            });

            // ----- Header buttons ---------
            $('<a id="btnDelete" style="padding: 0px; display:none;" class="ui-button ui-widget ui-state-default ui-corner-all '
                + 'ui-button-text-only" href="javascript:void(0)"><span style="font-size: small; padding: 2px 5px;" '
                + 'class="ui-button-text"> Delete selected Row</span></a>&nbsp;&nbsp;<button id="refresh">Refresh</button>').appendTo('div.dataTables_length');
            //$('<button id="refresh">Refresh</button>').appendTo('div.dataTables_length'); // Refresh button only

            // Per-column filters
            $("table#GridView1").dataTable().columnFilter({
                //sPlaceHolder: "foot:before",
                "aoColumns": [
                    null,                                        //{ "type": "number-range" },
                    { "type": "text", width: "50px" },
                    { "type": "select" },
                    { "type": "text" },                          //{ "type": "date-range", width: "50px" },
                    { "type": "number-range", width: "50px" },
                    { "type": "select" },
                    { "type": "select" },
                    { "type": "date-range" }                     //{ "type": "date-range", width: "50px" },
                ]
            });

            // ----------- Fixed header -----------
            //oTable = $('#GridView1').dataTable();
            //new FixedHeader(oTable);
            //$('#GridView1 div.title').text("This is a table title");

            /* Add a click handler to the rows - this could be used as a callback */
            $("#GridView1 tbody tr").click(function (e) {
                if ($(this).hasClass('row_selected')) {
                    $(this).removeClass('row_selected');
                    $('#btnDelete').hide();
                }
                else {
                    oTable.$('tr.row_selected').removeClass('row_selected');
                    $(this).addClass('row_selected');
                    $('#btnDelete').show();
                }
            });

            /* Add a click handler for the delete row */
            $('#btnDelete').click(function () {
                var anSelected = fnGetSelected(oTable);
                if (anSelected.length !== 0) {
                    /* Need an Ajax call to perform the delete on the server side */
                    if (confirm('Are you sure you wish to delete this row?')) {
                        /* do the delete */
                        oTable.fnDeleteRow(anSelected[0]);
                    }
                    else {
                        $("#GridView1 tbody tr").removeClass('row_selected');
                        $('#btnDelete').hide();
                    }
                }
            });

            $.fn.dataTableExt.oStdClasses["filterColumn"] = "my-style-class";

            $('#GridView1 tbody td').click(function () {
                /* Get the position of the current data from the node */
                var aPos = oTable.fnGetPosition(this);
                var aData = oTable.fnGetData(aPos[0]);
                //alert(aData[0]);
            });

            /* Init the table */
            oTable = $('#GridView1').dataTable();
        });

        function fnGetSelected(oTableLocal) {
            return oTableLocal.$('tr.row_selected');
        }
        //$("div.tools").html('Organize by: <select id="booking_status"><option value="">All Bookings</option><option value="confirmed">Upcoming</option><option value="arrived">Arrived</option><option value="rejected">Rejected</option></select>');
    </script>
</head>
<body>
    <form id="form1" runat="server">
    <div class="Shadow">
        <asp:GridView ID="GridView1" runat="server" OnPreRender="GridView1_PreRender"
            ShowFooter="true" AutoGenerateColumns="false">
            <Columns>
                <asp:TemplateField HeaderText="S.No">
                    <ItemTemplate><%# Eval("id") %></ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Engine">
                    <ItemTemplate><%# Eval("engine") %></ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Browser">
                    <ItemTemplate><%# Eval("browser") %></ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Platform">
                    <ItemTemplate><%# Eval("platform") %></ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Version">
                    <ItemTemplate><%# Eval("version") %></ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Grade">
                    <ItemTemplate><%# Eval("grade") %></ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Market Share">
                    <ItemTemplate><%# Eval("marketshare") %></ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Date">
                    <ItemTemplate><%# Eval("RDate") %></ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>
    </div>
    </form>
</body>
</html>

Code-behind:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;

namespace DT_Pagination
{
    public partial class GridTable : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string strConnect = "server=.\\MYDATABASE; user id=sa; pwd=***; database=aspdotnetDB;";
            DataSet dataset = new DataSet();
            SqlDataAdapter da = new SqlDataAdapter(
                "select *, convert(varchar(10), released, 103) as RDate from ajax", strConnect);
            da.Fill(dataset, "ajax");
            GridView1.DataSource = dataset;
            GridView1.DataBind();
        }

        protected void GridView1_PreRender(object sender, EventArgs e)
        {
            // DataTables needs a real <thead>/<tfoot>, so promote the GridView header and footer rows.
            GridView1.UseAccessibleHeader = false;
            GridView1.HeaderRow.TableSection = TableRowSection.TableHeader;
            GridView1.FooterRow.TableSection = TableRowSection.TableFooter;

            int CellCount = GridView1.FooterRow.Cells.Count;
            GridView1.FooterRow.Cells.Clear();
            GridView1.FooterRow.Cells.Add(new TableCell());
            GridView1.FooterRow.Cells[0].ColumnSpan = CellCount - 1;
            GridView1.FooterRow.Cells[0].HorizontalAlign = HorizontalAlign.Right;
            GridView1.FooterRow.Cells.Add(new TableCell());

            TableFooterRow tfr = new TableFooterRow();
            for (int i = 0; i < CellCount; i++)
            {
                tfr.Cells.Add(new TableCell());
            }
            GridView1.FooterRow.Controls[1].Controls.Add(tfr);
        }
    }
}
 
Sunday, December 16, 2012
How to solve unexpected logout issues
When you set the Session TimeOut to 20, you would expect the Session to expire after 20 minutes of inactivity. However, you're using Session State mode InProc (the default value), which means that the session state is stored in memory. When the Application Pool recycles, all Sessions stored in memory will be lost. There can be many reasons why the Application Pool recycles.

http://blogs.msdn.com/b/johan/archive/2007/05/16/common-reasons-why-your-application-pool-may-unexpectedly-recycle.aspx

Also, in a shared hosting environment, Application Pools recycle frequently. To overcome both problems, you should consider using another SessionState mode:

http://msdn.microsoft.com/en-us/library/ms178586(v=vs.100).aspx
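For example, a minimal web.config sketch for switching to the StateServer mode (the connection string and timeout values here are only illustrative defaults):

<sessionState mode="StateServer"
              stateConnectionString="tcpip=127.0.0.1:42424"
              timeout="20" />

SQLServer mode is configured the same way, with mode="SQLServer" and a sqlConnectionString pointing at your database server.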

But this has nothing to do with authentication, as already stated! When you set the forms authentication timeout to 20 minutes, it means that the user will be logged out anywhere between 10 and 20 minutes of inactivity. This is because the authentication ticket is only renewed after more than half of the timeout has expired.

http://msdn.microsoft.com/en-us/library/system.web.configuration.formsauthenticationconfiguration.slidingexpiration.aspx

But sometimes the authentication ticket also seems to expire unexpectedly, forcing the user back to the login page. To understand why this happens, you need to understand how authentication works.

When you log in, an authentication ticket is created in a cookie. By default, this authentication ticket is encrypted using the machineKey section in web.config. When this section is not specified in web.config, ASP.NET will generate one for you. If the application pool recycles, ASP.NET will sometimes generate a new machineKey (although MSDN says differently!), especially in a shared hosting environment. With this new key, the authentication ticket can no longer be decrypted, so the user is redirected to the login page. To overcome this, simply add a machineKey section to your web.config, so the same key is used on each and every request:
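A sketch of the fix is shown below; the key values are placeholders, so generate your own (for example with the generator linked below) and use the same keys on every server that hosts the application:

<system.web>
  <machineKey validationKey="PASTE-YOUR-VALIDATION-KEY-HERE"
              decryptionKey="PASTE-YOUR-DECRYPTION-KEY-HERE"
              validation="SHA1"
              decryption="AES" />
</system.web>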

http://www.developmentnow.com/articles/machinekey_generator.aspx
 
How to know which control triggered the postback event


There may be scenarios where we need to know which control triggered the postback event, both on the client side and on the server side.
 
For all controls except Button and ImageButton
ASP.NET adds the following two hidden fields to the page:
<input type="hidden" name="__EVENTTARGET" id="__EVENTTARGET" value="" />
<input type="hidden" name="__EVENTARGUMENT" id="__EVENTARGUMENT" value="" />
"__EVENTTARGET" value is set to the control id who has triggered the postback. This happens for all the controls except button and Imagebutton.
At client side, on body onbeforeunload event, check the value of "__EVENTTARGET" , this will return the control ID of the control which has triggered the postback.

document.getElementById('__EVENTTARGET').value;
On the server side, in Page_Load, check which control caused the postback:
string controlName = Request.Params.Get("__EVENTTARGET");
For Button and ImageButton controls
For a Button or ImageButton, "__EVENTTARGET" is set to an empty string.
In this case, we need to check the data posted to the server (the key/value pairs).
In postbacks triggered by controls other than a Button or ImageButton, the posted data does not contain the button's ID and value.
But if the postback is triggered by a Button or ImageButton, the posted data contains the button's ID with an empty string as its value.
So on the server side, check "__EVENTTARGET"; if it is empty, loop through the posted form keys and check the control type to find which button triggered the postback:
Control control = null;
foreach (string controlID in Request.Form)
{
    Control objControl = Page.FindControl(controlID);
    if (objControl is Button)
    {
        control = objControl;
        break;
    }
}
Explanation:
In ASP.NET, all controls (except Button and ImageButton) use the JavaScript function __doPostBack to trigger a postback.
If the web page has any control whose AutoPostBack property is set to true, then ASP.NET adds the following code snippet to the page:
<script type="text/javascript">
//<![CDATA[
var theForm = document.forms['Form1'];
if (!theForm) {
    theForm = document.Form1;
}
function __doPostBack(eventTarget, eventArgument) {
    if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
        theForm.__EVENTTARGET.value = eventTarget;
        theForm.__EVENTARGUMENT.value = eventArgument;
        theForm.submit();
    }
}
//]]>
</script>
The Button and ImageButton controls do not call the __doPostBack function, which is why __EVENTTARGET is always empty for them.
When the user clicks the button, the content of the form is sent to the server; the form's action attribute defines the URL the content is sent to.
The form data (key/value pairs) sent to the server includes the ID of the button control that triggered the postback.
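Putting both cases together, a helper along these lines can be called from Page_Load. GetPostBackControl is not a built-in method, just an illustrative sketch:

private Control GetPostBackControl()
{
    // Case 1: controls that post back via __doPostBack put their ID in __EVENTTARGET.
    string targetId = Request.Params.Get("__EVENTTARGET");
    if (!string.IsNullOrEmpty(targetId))
    {
        return Page.FindControl(targetId);
    }

    // Case 2: Button/ImageButton controls post their own ID in the form data instead.
    foreach (string key in Request.Form)
    {
        if (key == null)
        {
            continue;
        }
        // An ImageButton posts "id.x" and "id.y" coordinates, so strip the suffix.
        string controlId = key.EndsWith(".x") || key.EndsWith(".y")
            ? key.Substring(0, key.Length - 2)
            : key;

        Control postedControl = Page.FindControl(controlId);
        if (postedControl is Button || postedControl is ImageButton)
        {
            return postedControl;
        }
    }
    return null;
}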

Exploring Session in ASP.NET

Table of Contents
Introduction
What is Session?
Advantages and disadvantages of Session
Storing and retrieving values from Session
Session ID
Session Mode and State Provider
Session State
Session Event
Session Mode
InProc Session Mode
Overview of InProc session mode
When should we use InProc session mode?
Advantages and disadvantages
StateServer Session Mode
Overview of StateServer session mode
Configuration for StateServer session mode
How StateServer session mode works
Example of StateServer session mode
Advantages and Disadvantages
SQL Server Session Mode
How SQLServer session mode works
When should we use SQL Server session mode?
Configuration for SQL Server session mode
Advantages and disadvantages
Custom Session Mode
How Custom session mode works?
When should we use Custom session mode?
Configuration for Custom session mode
Advantages and disadvantages
Overview of production deployment
Application Pool
Identity of Application Pool
Creating and assigning Application Pool
Web Garden
How to create a web garden
How Session depends on a web garden
Web Farm and Load Balancer
Handling Session in web farm and load balancer scenarios
Session and Cookies
What is Cookie-Munging?
How Cookie-Munging works in ASP.NET
Removing Session
Enabling and disabling Session
Summary
Introduction
First of all, I would like to thank all the readers who have read and voted for my articles. In the Beginner's Guide series, I have written some articles on state management. Probably this is my last article on state management.
This article will give you a very good understanding of session. In this article, I have covered the basics of session, different ways of storing session objects, session behavior in web farm scenarios, session on Load Balancer, etc. I have also explained details of session behavior in a live production environment. Hope you will enjoy this article and provide your valuable suggestions and feedback.
What is Session?
The web is stateless, which means a new instance of a web page class is re-created each time the page is posted to the server. As we all know, HTTP is a stateless protocol: it can't hold client information across pages. If the user enters some information and moves to the next page, that data is lost and the user is not able to retrieve it. What do we need here? We need to store information. Session provides a facility to store information in server memory. It can store any type of object, including our own custom objects. For every client, session data is stored separately, which means session data is stored on a per-client basis. Have a look at the following diagram:
Fig: For every client, session data is stored separately
State management using session is one of the best ASP.NET features, because it is secure, transparent to users, and we can store any kind of object in it. Along with these advantages, session can sometimes cause performance issues on high-traffic sites, because it is stored in server memory and clients read the data from the server. Now let's have a look at the advantages and disadvantages of using session in our web applications.
Advantages and disadvantages of Session
Following are the basic advantages and disadvantages of using session. I describe them in more detail for each session type later on.
Advantages:
It helps maintain user state and data all over the application.
It is easy to implement and we can store any kind of object.
Stores client data separately.
Session is secure and transparent from the user.
Disadvantages:
Performance overhead in the case of large volumes of data/users, because session data is stored in server memory.
Overhead involved in serializing and de-serializing session data, because in the case of StateServer and SQLServer session modes, we need to serialize the objects before storing them.
Besides these, there are many advantages and disadvantages of session that are based on the session type. I have discussed all of them in the respective sections below.
Storing and retrieving values from Session
Storing and retrieving values in session is quite similar to doing so in ViewState. We interact with session state through the System.Web.SessionState.HttpSessionState class, which provides the built-in Session object in ASP.NET pages.
The following code is used for storing a value to session:
// Storing UserName in Session
Session["UserName"] = txtUser.Text;
Now, let's see how we can retrieve values from session:
// Check whether the session variable is null or not
if (Session["UserName"] != null)
{
    // Retrieving UserName from Session
    lblWelcome.Text = "Welcome : " + Session["UserName"];
}
else
{
    // Do something else
}
We can also store other objects in session. The following example shows how to store a DataSet in session.
// Storing a DataSet in Session
Session["DataSet"] = _objDataSet;
The following code shows how to retrieve that DataSet from session:
// Check whether the session variable is null or not
if (Session["DataSet"] != null)
{
    // Retrieving the DataSet from Session
    DataSet _MyDs = (DataSet)Session["DataSet"];
}
else
{
    // Do something else
}
References:
MSDN (read the session variable section)
Session ID
ASP.NET uses a 120-bit identifier to track each session. This is secure enough and can't be reverse-engineered. When a client communicates with the server, only the session ID is transmitted between them. When the client requests data, ASP.NET looks up the session ID and retrieves the corresponding data. This is done in the following steps:
The client hits the web site and some information is stored in the session.
The server creates a unique session ID for that client and stores it in the Session State Provider.
The client sends a request for information to the server along with that unique session ID.
The server looks in the Session State Provider, retrieves the serialized data, and type-casts the object.
Take a look at the pictorial flow:
Fig: Communication of client, web server, and State Provider
References:
SessionID in MSDN
Session Mode and State Provider
In ASP.NET, there are the following session modes available:
InProc
StateServer
SQLServer
Custom
For every session state, there is a Session Provider. The following diagram will show you how they are related:
Fig: Session state architecture
We choose the session state provider based on the session mode we select. When ASP.NET requests information based on the session ID, the session mode and its corresponding provider are responsible for returning the proper information. The mapping is: InProc uses the in-process provider (InProcSessionStateStore), StateServer uses the out-of-process provider (OutOfProcSessionStateStore), SQLServer uses the SQL Server provider (SqlSessionStateStore), and Custom uses whatever custom provider is configured.
Apart from these, there is another mode, Off. If we select this option, session will be disabled for the application. But our objective is to use session, so we will look at the above four session state modes.
Session States
Session state essentially means all the settings that you have made for your web application for maintaining the session. Session state is itself a big topic: it covers your whole session configuration, whether in web.config or from the code-behind. In web.config, the <sessionState> element is used for setting the session configuration. Some of its attributes are mode, timeout, stateConnectionString, customProvider, etc. I discuss each of these settings later. Before looking at the session modes, here is a brief overview of session events.
Session Event
There are two types of session events available in ASP.NET:
Session_Start
Session_End
You can handle both of these events in the global.asax file of your web application. The Session_Start event is raised when a new session starts, and the Session_End event is raised when a session is abandoned or expires.
void Session_Start(object sender, EventArgs e)
{
    // Code that runs when a new session is started
}

void Session_End(object sender, EventArgs e)
{
    // Code that runs when a session ends
}
References:
Application and Session Events
Session Mode
I have already mentioned the session modes in ASP.NET. The following session modes are available:
Off
InProc
StateServer
SQLServer
Custom
If we set the session mode to "Off" in web.config, session will be disabled for the application. For this, we need to configure web.config in the following way:
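A minimal sketch of that setting:

<sessionState mode="Off" />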
InProc Session Mode
This is the default session mode in ASP.NET. It stores session information in the current application domain. This is the best session mode for web application performance, but the main disadvantage is that the data will be lost if we restart the server. There are some more advantages and disadvantages of the InProc session mode; I will come to those points later on.
Overview of InProc session mode
As I have already discussed, in InProc mode, session data will be stored on the current application domain. So it is easily and quickly available.
InProc session mode stores its session data in a memory object in the application domain. This is handled by the worker process in the application pool, so if we restart the server, we lose the session data. If the client requests data, the state provider reads it from the in-memory object and returns it to the client. In web.config, we have to set the session mode and the timeout, for example:
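A minimal sketch, using a 30-minute timeout:

<sessionState mode="InProc" timeout="30" />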
The above session timeout setting keeps the session alive for 30 minutes. This is also configurable from the code-behind:
Session.Timeout = 30;
There are two session events available in ASP.NET, Session_Start() and Session_End(), and InProc is the only mode that supports the Session_End() event. This event is called after the session timeout period is over. The general flow for the InProc session state is like this:
When Session_End() is called depends on the session timeout. This is a very fast mechanism because no serialization occurs when storing and retrieving data, and the data stays inside the same application domain.
When should we use the InProc session mode?
InProc is the default session mode. It can be very helpful for a small web site where the number of users is small. We should avoid InProc in web garden scenarios (I will come to this topic later on).
Advantages and disadvantages
Advantages:
It stores session data in a memory object in the current application domain, so accessing the data is very fast and the data is easily available.
There is no serialization requirement for storing data in InProc session mode.
Implementation is very easy, similar to using ViewState.
Disadvantages:
Although InProc session is the fastest, common, and default mechanism, it has a lot of limitations:
If the worker process or application domain is recycled, all session data will be lost.
Though it is the fastest, more session data and more users can affect performance, because of memory usage.
We can't use it in web garden scenarios.
This session mode is not suitable for web farm scenarios.
As per the above discussion, we can conclude that InProc is a very fast session storing mechanism but suitable only for small web applications. InProc session data will get lost if we restart the server, or if the application domain is recycled. It is also not suitable for Web Farm and Web Garden scenarios.
Now we will have a look at the other options available to overcome these problems. First comes the StateServer mode.
StateServer Session Mode
Overview of StateServer session mode
This is also called Out-Proc session mode. StateServer uses a stand-alone Windows service which is independent of IIS and can also run on a separate server. This session state is totally managed by aspnet_state.exe. The service may run on the same system, but it is outside of the main application domain where your web application is running, which means that if you restart your ASP.NET process, your session data is still alive. This approach has several disadvantages due to the overhead of serialization and de-serialization; it also increases the cost of data access, because every time the user retrieves session data, our application hits a different process.
Configuration for StateServer session mode
In StateServer mode, session data is stored in a separate server which is independent of IIS and it is handled by aspnet_state.exe. This process is run as a Windows Service. You can start this service from the Windows MMC or from the command prompt.
By default, the "Startup Type" of the ASP.NET state service is set to Manual; we have to set it to Automatic.
From the command prompt, just type "net start aspnet_state". By default, this service listens on TCP port 42424, but we can change the port from the Registry editor (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters, Port value), as shown in the picture below:
Now have a look at the web.config configuration for the StateServer setting. For StateServer, we need to specify the stateConnectionString, which identifies the system that is running the state server. By default, stateConnectionString uses the IP 127.0.0.1 (localhost) and port 42424.
When we are using StateServer, we can configure the stateNetworkTimeOut attribute to specify the maximum number of seconds to wait for the service to respond before canceling the request. The default timeout value is 10 seconds.
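A sketch of the StateServer configuration, assuming the state service is running locally on the default port:

<sessionState mode="StateServer"
              stateConnectionString="tcpip=127.0.0.1:42424"
              stateNetworkTimeout="10"
              timeout="20" />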
For using StateServer, the object we are going to store should be serializable, and at the time of retrieval we need to de-serialize it back. I have described this below with an example.
How the StateServer Session Mode works
We use the StateServer session mode to avoid unnecessary session data loss when restarting our web server. StateServer is maintained by the aspnet_state.exe process as a Windows service. This process maintains all the session data. But we need to serialize the data before storing it in StateServer session mode.
As shown in the above figure, when the client sends a request to the web server, the web server stores the session data on the state server. The StateServer may be the current system or a different system, but it is totally independent of IIS. The destination of the StateServer depends on the web.config stateConnectionString setting. If we set it to 127.0.0.1:42424, the data is stored on the local system itself. To change the StateServer destination, change the IP and make sure aspnet_state.exe is up and running on that system; otherwise you will get the following exception while trying to store data in session.
When we store an object in session, it should be serialized. The data is stored on the StateServer system through the state provider, and at the time of retrieval the state provider returns it. The complete flow is given in the picture below:
Example of StateServer Session Mode
Here is a simple example of using the StateServer session mode. I have created this sample web application directly on IIS so that we can easily understand its usage.
Step 1: Open Visual Studio > File > New > Web Sites. Choose Location as HTTP and create the web application.
Now if you open IIS, you will see a virtual directory created with the name of your web application, in my case it is StateServer.
Step 2: Create a simple UI that takes the roll number and the name of a student. We will store the name and roll number in a StateServer session. I have also created a class, StudentInfo, which is listed below:
[Serializable]
public class StudentInfo
{
    // Default constructor
    public StudentInfo()
    {
    }

    /// <summary>
    /// Create object of Student class
    /// </summary>
    /// <param name="intRoll">Int RollNumber</param>
    /// <param name="strName">String Name</param>
    public StudentInfo(int intRoll, string strName)
    {
        this.Roll = intRoll;
        this.Name = strName;
    }

    private int intRoll;
    private string strName;

    public int Roll
    {
        get { return intRoll; }
        set { intRoll = value; }
    }

    public string Name
    {
        get { return strName; }
        set { strName = value; }
    }
}
Now have a look at the code-behind. I have added two buttons: one for storing session and another for retrieving session.
protected void btnSubmit_Click(object sender, EventArgs e)
{
    StudentInfo _objStudentInfo = new StudentInfo(Int32.Parse(txtRoll.Text), txtUserName.Text);
    Session["objStudentInfo"] = _objStudentInfo;
    ResetField();
}

protected void btnRestore_Click(object sender, EventArgs e)
{
    StudentInfo _objStudentInfo = (StudentInfo)Session["objStudentInfo"];
    txtRoll.Text = _objStudentInfo.Roll.ToString();
    txtUserName.Text = _objStudentInfo.Name;
}
Step 3: Configure your web.config for state server as I have already explained. And please make sure aspnet_state.exe is up and running on the configured server.
Step 4: Run the application.
Enter the data, click on Submit.
I have made the following tests, which explain how exactly StateServer behaves.
First: Remove the [Serializable] attribute from the StudentInfo class and try to run the application. When you click on the Submit button, you will get the following error:
This clearly says that you have to serialize the object before storing it.
Second: Run the application, store data by clicking on the Submit button. Restart IIS.
In the case of InProc, you would already have lost your session data, but with StateServer, click on Restore Session and you will get your original data back, because StateServer data does not depend on IIS; it is kept separately.
Third: Stop aspnet_state.exe from the Windows Services MMC and submit the data. You will get the following error:
This is because the State Server process is not running. So please keep these three points in mind when using StateServer mode.
Advantages and Disadvantages
Based on the above discussion:
Advantages:
It keeps data separate from IIS so any issues with IIS will not hamper session data.
It is useful in web farm and web garden scenarios.
Disadvantages:
Process is slow due to serialization and de-serialization.
State Server always needs to be up and running.
I am stopping here on StateServer; you will find some more interesting points about it in the Load Balancer, Web Farm, and Web Garden sections.
References:
State Server Session Mode
ASP.NET Session State
SQLServer Session Mode
Overview of SQL Server Session Mode
This session mode provides more secure and reliable session management in ASP.NET. In this mode, session data is serialized and stored in a SQL Server database. The main disadvantage of this storage method is the overhead of serialization and de-serialization. It is the best option for use in web farms, though.
To set up SQL Server, we need these SQL scripts:
For installing: InstallSqlState.sql
For uninstalling: UninstallSQLState.sql
The easiest way to configure SQL Server is using the aspnet_regsql command.
I have explained in detail the use of these files in the configuration section. This is the most useful state management in web farm scenarios.
When should we use SQLServer Session Mode?
SQL Server session mode is a more reliable and secure session state management.
It keeps data in a centralized location (database).
We should use the SQLServer session mode when we need to implement session with more security.
If there happens to be frequent server restarts, this is an ideal choice.
This is the perfect mode for web farm and web garden scenarios (I have explained this in detail later).
We can use SQLServer session mode when we need to share session between two different applications.
Configuration for SQLServer Session Mode
In SQLServer session mode, we store session data in SQL Server, so we first need to provide the database connection string in web.config. The sqlConnectionString attribute is used for this.
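A sketch of this configuration; the server name and credentials are placeholders for your own environment:

<sessionState mode="SQLServer"
              sqlConnectionString="data source=YourSqlServer;user id=YourUser;password=YourPassword"
              timeout="20" />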
After we set up the connection string, we need to configure SQL Server. I will now explain how to configure SQL Server using the aspnet_regsql command.
Step 1: From command prompt, go to your Framework version directory. E.g.: c:\windows\microsoft.net\framework\<version>.
Step 2: Run the aspnet_regsql command with the parameters shown in the example below; their uses are noted alongside.
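A hedged example, assuming Windows authentication and a placeholder server name:

aspnet_regsql.exe -S <YourSqlServer> -E -ssadd -sstype p

Here -S specifies the SQL Server instance, -E connects using Windows authentication (use -U and -P for a SQL login), -ssadd adds session state support by creating the ASPState database, and -sstype p stores the session data persistently in ASPState (t would use tempdb, c a custom database).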
After you run the command, you will get the following message:
That's all.
Step 3: Open SQL Server Management Studio, check if a new database ASPState has been created, and there should be two tables:
ASPStateTempApplications
ASPStateTempSessions
Change the configuration string of the StateServer example and run the same sample application.
Just store the roll number and user name and click on the Submit button. Open the ASPStateTempSessions table in SQL Server Management Studio; here is your session data:
Now do the following test that I have already explained in the StateServer mode section:
Remove the [Serializable] attribute from the StudentInfo class
Reset IIS and click on Restore Session
Stop SQL Server Services
I think I have explained the SQLServer session mode well.
Advantages and Disadvantages
Advantages:
Session data is not affected if we restart IIS.
The most reliable and secure session management.
It keeps data in a central location that is easily accessible from other applications.
Very useful in web farms and web garden scenarios.
Disadvantages:
Processing is very slow in nature.
Object serialization and de-serialization creates overhead for the application.
As the session data is handled by a different server, we have to take care of SQL Server: it should always be up and running.
References:
Read more about SQLServer mode
Custom Session Mode
Overview of Custom Session Mode
Commonly we use the InProc, StateServer, or SQLServer session modes for our applications, but we should also know the fundamentals of the Custom session mode. This mode is quite interesting, because it gives us full control over everything, even the session ID: you can write your own algorithm to generate session IDs.
You can implement custom providers that store session data in other storage mechanisms simply by deriving from the SessionStateStoreProviderBase class. You can also generate a new session ID by implementing ISessionIDManager.
These are the methods called during the implementation of Custom session:
In the Initialize method, we can set a custom provider. This will initialize the connection with that provider. SetItemExpireCallback is used to set SessionTimeOut. We can register a method that will be called at the time of session expiration. InitializeRequest is called on every request and CreateNewStoreData is used to create a new instance of SessionStateStoreData.
When should we use Custom Session Mode?
We can use Custom session mode in the following cases:
We want to store session data in a place other than SQL Server.
When we have to use an existing table to store session data.
When we need to create our own session ID.
What configuration do we need for it?
We need to configure our web.config like this:
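A sketch of that configuration; the provider name and type here are hypothetical placeholders for your own implementation:

<sessionState mode="Custom" customProvider="MyCustomProvider">
  <providers>
    <add name="MyCustomProvider"
         type="MyNamespace.MySessionStateStoreProvider" />
  </providers>
</sessionState>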
If you want to explore this more, please check the References section.
Advantages and Disadvantages
Advantages:
We can use an existing table for storing session data. This is useful when we have to use an existing database.
It's not dependent on IIS, so restarting the web server does not have any effect on session data.
We can create our own algorithm for generating session IDs.
Disadvantages:
Processing of data is very slow.
Creating a custom state provider is a low-level task that needs to be handled carefully to ensure security.
It is always recommended to use a third party provider rather than create your own.
References:
Custom Mode
Overview of production deployment
Production environments are where we deploy our applications on a live production server. Deploying an application on a live server is a big challenge for web developers, because in a big production environment there are a large number of users, and it is hard to handle the load for so many users with a single server. This is where the concepts of Web Farms, Load Balancers, Web Gardens, etc. come in.
Just a few months back, I deployed a web application in a live production environment which is accessed by millions of users; there were more than 10 Active Directory instances, more than 10 web servers behind a Load Balancer, several database servers, an Exchange Server, an LCS Server, etc. The major risk involved with multiple servers is session management. The following picture shows a general diagram of a production environment:
I will try to explain the different scenarios that you need to keep in mind while deploying your application.
Application Pool
This is one of the most important things you should create for your applications in a production environment. Application pools are used to separate sets of IIS worker processes that share the same configuration. Application pools enable us to isolate our web application for better security, reliability, and availability. The worker process serves as the process boundary that separates each application pool so that when one worker process or application has an issue or is recycled, other applications or worker processes are not affected.
Identity of Application Pool
Application pool identity configuration is an important aspect of security in IIS 6.0 and IIS 7.0, because it determines the identity of the worker process when the process is accessing a resource. In IIS 7.0, there are three predefined identities that are the same as in IIS 6.0.
Creating and assigning Application Pool
Open IIS Console, right click on Application Pool Folder > Create New.
Give the Application Pool ID and click OK.
Now, right click on the Virtual Directory (I am using StateServer web sites) and assign StateServerAppPool to the StateServer Virtual Directory.
So this StateServer web site will run independently with StateServerAppPool. Any problem related with other applications will not affect this application. This is the main advantage of creating application pools separately.
Web Garden
By default, each application pool runs with a single worker process (w3wp.exe). We can assign multiple worker processes to a single application pool; an application pool with multiple worker processes is called a Web Garden. Multiple worker processes in the same application pool can sometimes provide better throughput and application response time. Each worker process has its own threads and memory space.
As shown in the picture, in IIS, there may be multiple application pools and each application pool will have at least one worker process. A Web Garden should contain multiple worker processes.
There are certain restrictions on using a Web Garden with your web application. If we use the InProc session mode, our application will not work correctly, because the session will be handled by different worker processes. To avoid this problem, we should use an out-of-process session mode: either the StateServer or the SQLServer session state.
Main advantage: The worker processes in a Web Garden share the requests that arrive for that particular application pool. If a worker process fails, another worker process can continue processing the requests.
How to Create a Web Garden?
Right click on Application Pool > Go to Performance tab > Check Web Garden section (highlighted in picture):
By default, it would be 1. Just change it to more than one.
How does Session depend on a Web Garden?
I have already explained that InProc is handled by the worker process, which keeps the data inside its memory. If we have multiple worker processes, handling the session becomes very difficult, because each worker process has its own memory: if my first request goes to WP1, it keeps my session data there, and if my second request goes to WP2 and tries to retrieve that session data, it will not be available and an error will be thrown. So please avoid Web Gardens with the InProc session mode.
We can use StateServer or SQLServer session modes in Web Gardens because as I have already explained, these two session modes do not depend on worker processes. In my example, I have also explained that if you restart IIS, you are still able to access your session data.
In short: use the StateServer or SQLServer session mode in a Web Garden, and avoid InProc.
Web Farm and Load Balancer
These are the most common terms used in production deployments. They come up when we use multiple web servers to deploy our application; the main reason for doing so is to distribute the load over multiple servers, and a Load Balancer is used to distribute that load.
If we take a look at the above diagram, the client requests a URL and the request hits the Load Balancer, which decides which server to send it to. The load balancer distributes the traffic across all the web servers.
Now how does this affect Session?
Handling Session in web farm and load balancer scenarios
Handling session is one of the most challenging jobs in a web farm.
InProc: In InProc session mode, session data is stored in an in-memory object of the worker process. Each server will have its own worker process and will keep session data inside its memory.
If one server is down, and if the request goes to a different server, the user is not able to get session data. So it is not recommended to use InProc in Web Farms.
StateServer: I have already explained what a state server is, how to configure it, and so on. For web farm scenarios, you can easily see how important this is, because all session data is stored in a single location.
Remember, in a web farm, you have to make sure you have the same <machineKey> in all your web servers. Everything else is the same as I have described earlier: all web.config files will have the same session state configuration (stateConnectionString).
SQL Server: This is another approach, and the best one we can use in a web farm. We need to configure the database first; the required steps have already been covered.
As shown in the above picture, all web servers' session data is stored in a single SQL Server database, and it is easily accessible. Keep one thing in mind: you should serialize objects in both StateServer and SQLServer modes. If one of the web servers goes down, the load balancer distributes the load to the other servers, and the user can still read session data from the server, because the data is stored in a centralized database.
In summary, we can use either StateServer or SQLServer session mode in a web farm. We should avoid InProc.
Session and Cookies
Clients use cookies to work with session, because the client needs to present the appropriate session ID with each request. This can be done in the following ways:
Using cookies
ASP.NET automatically creates a special cookie named ASP.NET_SessionId when the session collection is used. This is the default; the session ID is transmitted in that cookie.
Cookie munging
Some older browsers do not support cookies, or the user may have disabled cookies in the browser. In that case, ASP.NET transmits the session ID in a specially modified (or “munged”) URL.
How Cookie Munging works?
When the user requests a page from the server, the server encodes the session ID and adds it to every HREF link on the page. When the user clicks a link, ASP.NET decodes that session ID and passes it to the page the user is requesting, so the requested page can retrieve session variables. This all happens automatically if ASP.NET detects that the user's browser does not support cookies.
How to implement Cookie Munging?
For this, we have to make our session state cookieless, for example:
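A sketch of a cookieless configuration; UseUri embeds the session ID in the URL, while AutoDetect uses the URL only when cookies are unavailable:

<sessionState mode="InProc" cookieless="UseUri" timeout="20" />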
Removing Session
The following methods are used to remove session data:
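A short sketch of these calls from a page's code-behind:

// Remove a single item from the session collection
Session.Remove("UserName");

// Remove all items but keep the session (and its SessionID) alive
Session.Clear();            // Session.RemoveAll() has the same effect

// Remove an item by its index in the collection
Session.RemoveAt(0);

// Cancel the current session entirely; Session_End fires (in InProc mode)
// and a new session is created on the next request
Session.Abandon();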
Enabling and disabling Session
For performance optimization, we can enable or disable session state, because reading and writing session state on every page involves some performance overhead. So it is better to enable or disable session state based on requirements rather than leaving it always enabled. We can enable and disable session state in two ways:
Page level
Application level
Page level
We can disable session state at page level using the EnableSessionState attribute in the Page directive.
This will disable the session activities for that particular page.
In the same way, we can make it read-only; this permits reading session data but does not allow writing to it. Both settings are shown below.
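Sketches of both page-level settings (the rest of the Page directive stays as it is in your page):

<%@ Page Language="C#" EnableSessionState="False" %>
<%@ Page Language="C#" EnableSessionState="ReadOnly" %>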
Application level
Session state can be disabled for the entire web application using the enableSessionState attribute in web.config:
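A sketch of that web.config setting:

<system.web>
  <pages enableSessionState="false" />
</system.web>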
Generally we use page level because some pages may not require any session data or may only read session data.
References:
How to Disable ASP.NET Session State
Summary
Hope you are now really familiar with Session, its use, how to apply it in web farms, etc. To summarise:
The in-process (InProc) session provider is the fastest because of everything being stored inside memory. Session data will be lost if we restart the web server or if the worker process is recycled. You can use this in small web applications where the number of users is less. Do not use InProc in web farms.
In StateServer session mode, session data is maintained by aspnet_state.exe. It keeps session data outside of the web server, so any issue with the web server does not affect the session data. You need to serialize an object before storing it in a StateServer session. We can use this mode safely in web farms.
SQLServer session mode stores data in SQL Server. We need to provide the connection string, and here we also need to serialize the data before storing it in session. This is very useful in production environments with web farms.
We can use a Custom provider for custom data sources or when we need to use an existing table to store session data. We can also create custom session IDs in Custom mode. But it is not recommended to create your own custom provider. It is recommended to use a third party provider.
Hope you have enjoyed the article. Please give your suggestions and feedback for further improvements. Again thanks for reading.
Further study and references
I have already added some in the various sections. Here I am giving a few more links which will really help you for further study:
ASP.NET Session Overview
ASP.NET Session State Overview
Different Session Modes
Web Farm-Load Balancing in ASP.NET
Enabling and Disabling Session Mode
Configuring Session Modes

Ten Common Database Design Mistakes

No list of mistakes is ever going to be exhaustive. People (myself included) do a lot of really stupid things, at times, in the name of "getting it done." This list simply reflects the database design mistakes that are currently on my mind, or in some cases, constantly on my mind. I have done this topic two times before. If you're interested in hearing the podcast version, visit Greg Low's super-excellent SQL Down Under. I also presented a boiled down, ten-minute version at PASS for the Simple-Talk booth. Originally there were ten, then six, and today back to ten. And these aren't exactly the same ten that I started with; these are ten that stand out to me as of today.
Before I start with the list, let me be honest for a minute. I used to have a preacher who made sure to tell us before some sermons that he was preaching to himself as much as he was to the congregation. When I speak, or when I write an article, I have to listen to that tiny little voice in my head that helps filter out my own bad habits, to make sure that I am teaching only the best practices. Hopefully, after reading this article, the little voice in your head will talk to you when you start to stray from what is right in terms of database design practices.
So, the list:
Poor design/planning
Ignoring normalization
Poor naming standards
Lack of documentation
One table to hold all domain values
Using identity/guid columns as your only key
Not using SQL facilities to protect data integrity
Not using stored procedures to access data
Trying to build generic objects
Lack of testing
Poor design/planning
"If you don't know where you are going, any road will take you there" – George Harrison
Prophetic words for all parts of life and a description of the type of issues that plague many projects these days.
Let me ask you: would you hire a contractor to build a house and then demand that they start pouring a foundation the very next day? Even worse, would you demand that it be done without blueprints or house plans? Hopefully, you answered "no" to both of these. A design is needed to make sure that the house you want gets built, and that the land you are building it on will not sink into some underground cavern. If you answered yes, I am not sure if anything I can say will help you.
Like a house, a good database is built with forethought, and with proper care and attention given to the needs of the data that will inhabit it; it cannot be tossed together in some sort of reverse implosion.
Since the database is the cornerstone of pretty much every business project, if you don't take the time to map out the needs of the project and how the database is going to meet them, then the chances are that the whole project will veer off course and lose direction. Furthermore, if you don't take the time at the start to get the database design right, then you'll find that any substantial changes in the database structures that you need to make further down the line could have a huge impact on the whole project, and greatly increase the likelihood of the project timeline slipping.
Far too often, a proper planning phase is ignored in favor of just "getting it done". The project heads off in a certain direction and when problems inevitably arise – due to the lack of proper designing and planning – there is "no time" to go back and fix them properly, using proper techniques. That's when the "hacking" starts, with the veiled promise to go back and fix things later, something that happens very rarely indeed.
Admittedly it is impossible to predict every need that your design will have to fulfill and every issue that is likely to arise, but it is important to mitigate against potential problems as much as possible, by careful planning.
Ignoring Normalization
Normalization defines a set of methods to break down tables to their constituent parts until each table represents one and only one "thing", and its columns serve to fully describe only the one "thing" that the table represents.
The concept of normalization has been around for 30 years and is the basis on which SQL and relational databases are implemented. In other words, SQL was created to work with normalized data structures. Normalization is not just some plot by database programmers to annoy application programmers (that is merely a satisfying side effect!)
SQL is very additive in nature in that, if you have bits and pieces of data, it is easy to build up a set of values or results. In the FROM clause, you take a set of data (a table) and add (JOIN) it to another table. You can add as many sets of data together as you like, to produce the final set you need.
This additive nature is extremely important, not only for ease of development, but also for performance. Indexes are most effective when they can work with the entire key value. Whenever you have to use SUBSTRING, CHARINDEX, LIKE, and so on, to parse out a value that is combined with other values in a single column (for example, to split the last name of a person out of a full name column), the SQL paradigm starts to break down and data becomes less and less searchable.
So normalizing your data is essential to good performance, and ease of development, but the question always comes up: "How normalized is normalized enough?" If you have read any books about normalization, then you will have heard many times that 3rd Normal Form is essential, but 4th and 5th Normal Forms are really useful and, once you get a handle on them, quite easy to follow and well worth the time required to implement them.
In reality, however, it is quite common that not even the first Normal Form is implemented correctly.
Whenever I see a table with repeating column names appended with numbers, I cringe in horror. And I cringe in horror quite often. Consider the following example Customer table:
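(The original table image is not reproduced in this post; the sketch below is an illustrative reconstruction of the kind of design being criticised, with assumed column names and types.)

CREATE TABLE Customer
(
    CustomerId  int           NOT NULL PRIMARY KEY,
    Name        varchar(100)  NOT NULL,
    Payment1    decimal(10,2) NULL,
    Payment2    decimal(10,2) NULL,
    -- ... and so on ...
    Payment12   decimal(10,2) NULL
);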
Are there always 12 payments? Is the order of payments significant? Does a NULL value for a payment mean UNKNOWN (not filled in yet), or a missed payment? And when was the payment made?!?
A payment does not describe a Customer and should not be stored in the Customer table. Details of payments should be stored in a Payment table, in which you could also record extra information about the payment, like when the payment was made, and what the payment was for:
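(Again, the original diagram is not included here; a plausible normalized version, with assumed names and types, might look like this.)

CREATE TABLE Payment
(
    PaymentId    int           NOT NULL PRIMARY KEY,
    CustomerId   int           NOT NULL REFERENCES Customer (CustomerId),
    Amount       decimal(10,2) NOT NULL,
    PaymentDate  datetime      NOT NULL,
    PaymentType  varchar(30)   NULL  -- what the payment was for
);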
In this second design, each column stores a single unit of information about a single "thing" (a payment), and each row represents a specific instance of a payment.
This second design is going to require a bit more code early in the process but, it is far more likely that you will be able to figure out what is going on in the system without having to hunt down the original programmer and kick their butt…sorry… figure out what they were thinking
Poor naming standards
"That which we call a rose, by any other name would smell as sweet"
This quote from Romeo and Juliet by William Shakespeare sounds nice, and it is true from one angle. If everyone agreed that, from now on, a rose was going to be called dung, then we could get over it and it would smell just as sweet. The problem is that if, when building a database for a florist, the designer calls it dung and the client calls it a rose, then you are going to have some meetings that sound far more like an Abbott and Costello routine than a serious conversation about storing information about horticulture products.
Names, while a personal choice, are the first and most important line of documentation for your application. I will not get into all of the details of how best to name things here– it is a large and messy topic. What I want to stress in this article is the need for consistency. The names you choose are not just to enable you to identify the purpose of an object, but to allow all future programmers, users, and so on to quickly and easily understand how a component part of your database was intended to be used, and what data it stores. No future user of your design should need to wade through a 500 page document to determine the meaning of some wacky name.
Consider, for example, a column named X304_DSCR. What the heck does that mean? You might decide, after some head scratching, that it means "X304 description". Possibly it does, but maybe DSCR means discriminator, or discretizator?
Unless you have established DSCR as a corporate standard abbreviation for description, then X304_DESCRIPTION is a much better name, and one that leaves nothing to the imagination.
That just leaves you to figure out what the X304 part of the name means. On first inspection, to me, X304 sounds more like it should be data in a column rather than a column name. If I subsequently found that, in the organization, there was also an X305 and X306 then I would flag that as an issue with the database design. For maximum flexibility, data is stored in columns, not in column names.
Along these same lines, resist the temptation to include "metadata" in an object's name. A name such as tblCustomer or colVarcharAddress might seem useful from a development perspective, but to the end user it is just confusing. As a developer, you should rely on being able to determine that a table name is a table name by context in the code or tool, and present to the users clear, simple, descriptive names, such as Customer and Address.
A practice I strongly advise against is the use of spaces and quoted identifiers in object names. You should avoid column names such as "Part Number" or, in Microsoft style, [Part Number], which force your users to include these spaces and delimiters in their code. It is annoying and simply unnecessary.
Acceptable alternatives would be part_number, partNumber or PartNumber. Again, consistency is key. If you choose PartNumber then that's fine – as long as the column containing invoice numbers is called InvoiceNumber, and not one of the other possible variations.
Lack of documentation
I hinted in the intro that, in some cases, I am writing for myself as much as you. This is the topic where that is most true. By carefully naming your objects, columns, and so on, you can make it clear to anyone what it is that your database is modeling. However, this is only step one in the documentation battle. The unfortunate reality is, though, that "step one" is all too often the only step.
Not only will a well-designed data model adhere to a solid naming standard, it will also contain definitions on its tables, columns, relationships, and even default and check constraints, so that it is clear to everyone how they are intended to be used. In many cases, you may want to include sample values, where the need arose for the object, and anything else that you may want to know in a year or two when "future you" has to go back and make changes to the code.
NOTE:
Where this documentation is stored is largely a matter of corporate standards and/or convenience to the developer and end users. It could be stored in the database itself, using extended properties. Alternatively, it might be maintained in the data modeling tools. It could even be in a separate data store, such as Excel or another relational database. My company maintains a metadata repository database, which we developed in order to present this data to end users in a searchable, linkable format. Format and usability are important, but the primary battle is to have the information available and up to date.
Your goal should be to provide enough information that when you turn the database over to a support programmer, they can figure out your minor bugs and fix them (yes, we all make bugs in our code!). I know there is an old joke that poorly documented code is a synonym for "job security." While there is a hint of truth to this, it is also a way to be hated by your coworkers and never get a raise. And no good programmer I know of wants to go back and rework their own code years later. It is best if the bugs in the code can be managed by a junior support programmer while you create the next new thing. Job security along with raises is achieved by being the go-to person for new challenges.
One table to hold all domain values
"One Ring to rule them all and in the darkness bind them"
This is all well and good for fantasy lore, but it's not so good when applied to database design, in the form of a "ruling" domain table. Relational databases are based on the fundamental idea that every object represents one and only one thing. There should never be any doubt as to what a piece of data refers to. By tracing through the relationships, from column name, to table name, to primary key, it should be easy to examine the relationships and know exactly what a piece of data means.
The big myth perpetrated by architects who don't really understand relational database architecture (me included early in my career) is that the more tables there are, the more complex the design will be. So, conversely, shouldn't condensing multiple tables into a single "catch-all" table simplify the design? It does sound like a good idea, but at one time giving Pauly Shore the lead in a movie sounded like a good idea too.
For example, consider the following model snippet where I needed domain values for:
Customer CreditStatus
Customer Type
Invoice Status
Invoice Line Item BackOrder Status
Invoice Line Item Ship Via Carrier
On the face of it that would be five domain tables…but why not just use one generic domain table, like this?
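(The model snippet itself is not shown in this post. Judging from the columns used in the query further down, the generic table would look roughly like the sketch below; the Code and Description columns are assumptions, not the author's exact model.)

CREATE TABLE GenericDomain
(
    GenericDomainId  int          NOT NULL PRIMARY KEY,
    RelatedToTable   sysname      NOT NULL,  -- e.g. 'Customer'
    RelatedToColumn  sysname      NOT NULL,  -- e.g. 'CustomerTypeId'
    Code             varchar(20)  NOT NULL,
    Description      varchar(100) NOT NULL
);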
This may seem a very clean and natural way to design a table for all domain values, but the problem is that it is just not very natural to work with in SQL. Say we just want the domain values for the Customer table:
 
SELECT *
FROM Customer
  JOIN GenericDomain as CustomerType
    ON Customer.CustomerTypeId = CustomerType.GenericDomainId
      and CustomerType.RelatedToTable = 'Customer'
      and  CustomerType.RelatedToColumn = 'CustomerTypeId'
  JOIN GenericDomain as CreditStatus
    ON  Customer.CreditStatusId = CreditStatus.GenericDomainId
      and CreditStatus.RelatedToTable = 'Customer'
      and CreditStatus.RelatedToColumn = 'CreditStatusId'
As you can see, this is far from being a natural join. It comes down to the problem of mixing apples with oranges. At first glance, domain tables are just an abstract concept of a container that holds text. And from an implementation centric standpoint, this is quite true, but it is not the correct way to build a database. In a database, the process of normalization, as a means of breaking down and isolating data, takes every table to the point where one row represents one thing. And each domain of values is a distinctly different thing from all of the other domains (unless it is not, in which case the one table will suffice.). So what you do, in essence, is normalize the data on each usage, spreading the work out over time, rather than doing the task once and getting it over with.
So instead of the single table for all domains, you might model it as:
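(The replacement diagram is likewise not reproduced. Two of the five tables might be sketched as below; the column names are illustrative.)

CREATE TABLE CustomerType
(
    CustomerTypeId  int          NOT NULL PRIMARY KEY,
    Code            varchar(20)  NOT NULL UNIQUE,
    Description     varchar(100) NOT NULL
);

CREATE TABLE CreditStatus
(
    CreditStatusId  int          NOT NULL PRIMARY KEY,
    Code            varchar(20)  NOT NULL UNIQUE,
    Description     varchar(100) NOT NULL
);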
Looks harder to do, right? Well, it is initially. Frankly it took me longer to flesh out the example tables. But, there are quite a few tremendous gains to be had:
Using the data in a query is much easier:
SELECT *
FROM Customer
  JOIN CustomerType
    ON Customer.CustomerTypeId = CustomerType.CustomerTypeId
  JOIN CreditStatus
    ON  Customer.CreditStatusId = CreditStatus.CreditStatusId 
Data can be validated using foreign key constraints very naturally, something not feasible for the other solution unless you implement ranges of keys for every table – a terrible mess to maintain.
If it turns out that you need to keep more information about a ShipViaCarrier than just the code, 'UPS', and description, 'United Parcel Service', then it is as simple as adding a column or two. You could even expand the table to be a full blown representation of the businesses that are carriers for the item.
All of the smaller domain tables will fit on a single page of disk. This ensures a single read (and likely a single page in cache). In the other case, your one domain table may be spread across many pages, unless you cluster on the referring table name, in which case using a non-clustered index can become more costly when you have many values.
You can still have one editor for all rows, as most domain tables will likely have the same base structure/usage. And while you would lose the ability to query all domain values in one query easily, why would you want to? (A UNION query over the tables could easily be created if needed, but this seems an unlikely requirement.)
I should probably rebut the thought that might be in your mind: "What if I need to add a new column to all domain tables?" For example, you forgot that the customer wants to be able to do custom sorting on domain values and didn't put anything in the tables to allow this. This is a fair question, especially if you have 1000 of these tables in a very large database. First, this rarely happens, and when it does it is going to be a major change to your database either way.
Second, even if this became a task that was required, SQL has a complete set of commands that you can use to add columns to tables, and using the system tables it is a pretty straightforward task to build a script to add the same column to hundreds of tables all at once. That change will not be quite as easy, but the extra effort does not come close to outweighing the benefits of separate tables.
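As a rough illustration of that kind of metadata-driven script (the new column and the '%Type' naming convention are assumptions for the example), something like this emits one ALTER TABLE statement per matching table:

-- Generate an ALTER TABLE statement for every table whose name ends in 'Type'.
SELECT 'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
       + ' ADD SortOrder int NULL;'
FROM   sys.tables
WHERE  name LIKE '%Type';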
The point of this tip is simply that it is better to do the work upfront, making structures solid and maintainable, rather than attempting to do the least amount of work to start out a project. By keeping each table down to representing one "thing", most changes will only affect one table, which means less rework for you down the road.
Using identity/guid columns as your only key
First Normal Form dictates that all rows in a table must be uniquely identifiable. Hence, every table should have a primary key. SQL Server allows you to define a numeric column as an IDENTITY column, and then automatically generates a unique value for each row. Alternatively, you can use NEWID() (or NEWSEQUENTIALID()) to generate a random, 16 byte unique value for each row. These types of values, when used as keys, are what are known as surrogate keys. The word surrogate means "something that substitutes for" and in this case, a surrogate key should be the stand-in for a natural key.
The problem is that too many designers use a surrogate key column as the only key column on a given table. The surrogate key values have no actual meaning in the real world; they are just there to uniquely identify each row.
Now, consider the following Part table, whereby PartID is an IDENTITY column and is the primary key for the table:
 
PartID    PartNumber    Description
------    ----------    -----------
1         XXXXXXXX      The X part
2         XXXXXXXX      The X part
3         YYYYYYYY      The Y part
How many rows are there in this table? Well, there seem to be three, but are rows with PartIDs 1 and 2 actually the same row, duplicated? Or are they two different rows that should be unique but were keyed in incorrectly?
The rule of thumb I use is simple. If a human being could not pick which row they want from a table without knowledge of the surrogate key, then you need to reconsider your design. This is why there should be a key of some sort on the table to guarantee uniqueness, in this case likely on PartNumber.
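In this example, that means adding an alternate key on PartNumber alongside the surrogate; a minimal sketch, using the table from the example above:

ALTER TABLE Part
    ADD CONSTRAINT AK_Part_PartNumber UNIQUE (PartNumber);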
In summary: as a rule, each of your tables should have a natural key that means something to the user, and can uniquely identify each row in your table. In the very rare event that you cannot find a natural key (perhaps, for example, a table that provides a log of events), then use an artificial/surrogate key.
Not using SQL facilities to protect data integrity
All fundamental, non-changing business rules should be implemented by the relational engine. The base rules of nullability, string length, assignment of foreign keys, and so on, should all be defined in the database.
There are many different ways to import data into SQL Server. Only if your base rules are defined in the database itself can you guarantee that they will never be bypassed, and only then can you write your queries without ever having to worry whether the data you're viewing adheres to the base business rules.
Rules that are optional, on the other hand, are wonderful candidates to go into a business layer of the application. For example, consider a rule such as this: "For the first part of the month, no part can be sold at more than a 20% discount, without a manager's approval".
Taken as a whole, this rule smacks of being rather messy, not very well controlled, and subject to frequent change. For example, what happens when next week the maximum discount is 30%? Or when the definition of "first part of the month" changes from 15 days to 20 days? Most likely you won't want to go through the difficulty of implementing these complex temporal business rules in SQL Server code – the business layer is a great place to implement rules like this.
However, consider the rule a little more closely. There are elements of it that will probably never change, e.g.:
The maximum discount it is ever possible to offer
The fact that the approver must be a manager
These aspects of the business rule very much ought to be enforced by the database design. Even if the substance of the rule is implemented in the business layer, you are still going to have a table in the database that records the size of the discount, the date it was offered, the ID of the person who approved it, and so on. On the Discount column, you should have a CHECK constraint that restricts the values allowed in this column to between 0.00 and 0.90 (or whatever the maximum is). Not only will this implement your "maximum discount" rule, but it will also guard against a user entering a 200% or a negative discount by mistake. On the ManagerID column, you should place a foreign key constraint, which references the Managers table and ensures that the ID entered is that of a real manager (or, alternatively, a trigger that selects only EmployeeIds corresponding to managers).
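A sketch of those two declarations, assuming a DiscountApproval table with Discount and ManagerID columns (the table and constraint names are illustrative):

ALTER TABLE DiscountApproval
    ADD CONSTRAINT CK_DiscountApproval_Discount
        CHECK (Discount BETWEEN 0.00 AND 0.90);  -- or whatever the true maximum is

ALTER TABLE DiscountApproval
    ADD CONSTRAINT FK_DiscountApproval_Managers
        FOREIGN KEY (ManagerID) REFERENCES Managers (ManagerID);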
Now, at the very least we can be sure that the data meets the very basic rules that the data must follow, so we never have to code something like this in order to check that the data is good:
 
SELECT CASE WHEN discount < 0 THEN 0 WHEN discount > 1 THEN 1 …
We can feel safe that data meets the basic criteria, every time.
Not using stored procedures to access data
Stored procedures are your friend. Use them whenever possible as a method to insulate the database layer from the users of the data. Do they take a bit more effort? Sure, initially, but what good thing doesn't take a bit more time? Stored procedures make database development much cleaner, and encourage collaborative development between your database and functional programmers. A few of the other interesting reasons that stored procedures are important include the following.
Maintainability
Stored procedures provide a known interface to the data, and to me, this is probably the largest draw. When code that accesses the database is compiled into a different layer, performance tweaks cannot be made without a functional programmer's involvement. Stored procedures give the database professional the power to change characteristics of the database code without additional resource involvement, making small changes, or large upgrades (for example changes to SQL syntax) easier to do.
Encapsulation
Stored procedures allow you to "encapsulate" any structural changes that you need to make to the database so that the knock on effect on user interfaces is minimized. For example, say you originally modeled one phone number, but now want an unlimited number of phone numbers. You could leave the single phone number in the procedure call, but store it in a different table as a stopgap measure, or even permanently if you have a "primary" number of some sort that you always want to display. Then a stored proc could be built to handle the other phone numbers. In this manner the impact to the user interfaces could be quite small, while the code of stored procedures might change greatly.
Security
Stored procedures can provide specific and granular access to the system. For example, you may have 10 stored procedures that all update table X in some way. If a user needs to be able to update a particular column in a table and you want to make sure they never update any others, then you can simply grant that user permission to execute just the one procedure out of the ten that allows them to perform the required update.
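For example (the procedure and role names here are hypothetical), granting rights to just that one procedure looks like this:

-- The role can run this one targeted update and nothing else against the table.
GRANT EXECUTE ON dbo.TableX_UpdateStatus TO DataEntryRole;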
Performance
There are a couple of reasons that I believe stored procedures enhance performance. First, if a newbie writes ratty code (like using a cursor to go row by row through an entire ten million row table to find one value, instead of using a WHERE clause), the procedure can be rewritten without impact to the system (other than giving back valuable resources). The second reason is plan reuse. Unless you are using dynamic SQL calls in your procedure, SQL Server can store a plan and not need to compile it every time it is executed. It's true that in every version of SQL Server since 7.0 this has become less and less significant, as SQL Server gets better at storing plans for ad hoc SQL calls (see note below). However, stored procedures still make plan reuse and performance tweaks easier. In the case where ad hoc SQL would actually be faster, this can be coded into the stored procedure seamlessly.
In SQL Server 2005, there is a database setting (PARAMETERIZATION FORCED) that, when enabled, will cause all queries to have their plans saved. This does not cover the more complicated situations that procedures would cover, but it can be a big help. There is also a feature known as plan guides, which allows you to override the plan for a known query type. Both of these features are there to help out when stored procedures are not used, but stored procedures do the job with no tricks.
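The setting itself is a one-line change (the database name below is just a placeholder):

ALTER DATABASE SalesDb SET PARAMETERIZATION FORCED;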
And this list could go on and on. There are drawbacks too, because nothing is ever perfect. It can take longer to code stored procedures than it does to just use ad hoc calls. However, the amount of time to design your interface and implement it is well worth it, when all is said and done.
Trying to code generic T-SQL objects
I touched on this subject earlier in the discussion of generic domain tables, but the problem is more prevalent than that. Every new T-SQL programmer, when they first start coding stored procedures, starts to think "I wish I could just pass a table name as a parameter to a procedure." It does sound quite attractive: one generic stored procedure that can perform its operations on any table you choose. However, this should be avoided as it can be very detrimental to performance and will actually make life more difficult in the long run.
T-SQL objects do not do "generic" easily, largely because lots of design considerations in SQL Server have clearly been made to facilitate reuse of plans, not code. SQL Server works best when you minimize the unknowns so it can produce the best plan possible. The more it has to generalize the plan, the less it can optimize that plan.
Note that I am not specifically talking about dynamic SQL procedures. Dynamic SQL is a great tool to use when you have procedures that are not optimizable / manageable otherwise. A good example is a search procedure with many different choices. A precompiled solution with multiple OR conditions might have to take a worst case scenario approach to the plan and yield weak results, especially if parameter usage is sporadic.
However, the main point of this tip is that you should avoid coding very generic objects, such as ones that take a table name and twenty column name/value pairs as parameters and let you update the values in the table. For example, you could write a procedure that started out:
 
CREATE PROCEDURE updateAnyTable
@tableName sysname,
@columnName1 sysname,
@columnName1Value varchar(max),
@columnName2 sysname,
@columnName2Value varchar(max)

The idea would be to dynamically specify the name of a column and the value to pass to a SQL statement. This solution is no better than simply using ad hoc calls with an UPDATE statement. Instead, when building stored procedures, you should build specific, dedicated stored procedures for each task performed on a table (or multiple tables.) This gives you several benefits:
Properly compiled stored procedures can have a single compiled plan attached to them and reused.
Properly compiled stored procedures are more secure than ad-hoc SQL or even dynamic SQL procedures, reducing the surface area for an injection attack greatly because the only parameters to queries are search arguments or output values.
Testing and maintenance of compiled stored procedures is far easier, since you generally only have to validate the search arguments, rather than checking that the tables, columns, and so on exist and handling the cases where they do not.
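By way of contrast with the generic updateAnyTable idea above, a dedicated procedure is a short, plan-friendly sketch along these lines (the table and column names are hypothetical):

CREATE PROCEDURE dbo.Customer_UpdateCreditStatus
    @CustomerId     int,
    @CreditStatusId int
AS
BEGIN
    SET NOCOUNT ON;

    -- One known table and known columns: a single compiled plan can be reused,
    -- and the only parameters are simple search arguments.
    UPDATE dbo.Customer
    SET    CreditStatusId = @CreditStatusId
    WHERE  CustomerId = @CustomerId;
END;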
A nice technique is to build a code generation tool in your favorite programming language (even T-SQL) using SQL metadata to build very specific stored procedures for every table in your system. Generate all of the boring, straightforward objects, including all of the tedious code to perform error handling that is so essential, but painful to write more than once or twice.
In my Apress book, Pro SQL Server 2005 Database Design and Optimization, I provide several such "templates" (mainly for triggers, but also stored procedures) that have all of the error handling built in. I would suggest you consider building your own (possibly based on mine) to use when you need to manually build a trigger, procedure, or whatever.
Lack of testing
When the dial in your car says that your engine is overheating, what is the first thing you blame? The engine. Why don't you immediately assume that the dial is broken? Or something else minor? Two reasons:
The engine is the most important component of the car and it is common to blame the most important part of the system first.
It is all too often true.
As database professionals know, the first thing to get blamed when a business system is running slow is the database. Why? First because it is the central piece of most any business system, and second because it also is all too often true.
We can play our part in dispelling this notion, by gaining deep knowledge of the system we have created and understanding its limits through testing.
But let's face it; testing is the first thing to go in a project plan when time slips a bit. And what suffers the most from the lack of testing? Functionality? Maybe a little, but users will notice and complain if the "Save" button doesn't actually work and they cannot save changes to a row they spent 10 minutes editing. What really gets the shaft in this whole process is deep system testing to make sure that the design you (presumably) worked so hard on at the beginning of the project is actually implemented correctly.
But, you say, the users accepted the system as working, so isn't that good enough? The problem with this statement is that what user acceptance "testing" usually amounts to is the users poking around, trying out the functionality that they understand and giving you the thumbs up if their little bit of the system works. Is this reasonable testing? Not in any other industry would this be vaguely acceptable. Do you want your automobile tested like this? "Well, we drove it slowly around the block once, one sunny afternoon with no problems; it is good!" When that car subsequently "failed" on the first drive along a freeway, or during the first drive through rain or snow, then the driver would have every right to be very upset.
Too many database systems get tested like that car, with just a bit of poking around to see if individual queries and modules work. The first real test is in production, when users attempt to do real work. This is especially true when it is implemented for a single client (even worse when it is a corporate project, with management pushing for completion more than quality).
Initially, major bugs come in thick and fast, especially performance related ones. If the first time you have tried a full production set of users, background processes, workflow processes, system maintenance routines, ETL, etc., is on your system launch day, you are extremely likely to discover that you have not anticipated all of the locking issues that might be caused by users creating data while others are reading it, or hardware issues caused by poorly set up hardware. It can take weeks to live down the cries of "SQL Server can't handle it" even after you have done the proper tuning.
Once the major bugs are squashed, the fringe cases (which are pretty rare cases, like a user entering a negative amount for hours worked) start to raise their ugly heads. What you end up with at this point is software that irregularly fails in what seem like weird places (since large quantities of fringe bugs will show up in ways that aren't very obvious and are really hard to find.)
Now, it is far harder to diagnose and correct because now you have to deal with the fact that users are working with live data and trying to get work done. Plus you probably have a manager or two sitting on your back saying things like "when will it be done?" every 30 seconds, even though it can take days and weeks to discover the kinds of bugs that result in minor (yet important) data aberrations. Had proper testing been done, it would never have taken weeks of testing to find these bugs, because a proper test plan takes into consideration all possible types of failures, codes them into an automated test, and tries them over and over. Good testing won't find all of the bugs, but it will get you to the point where most of the issues that correspond to the original design are ironed out.
If everyone insisted on a strict testing plan as an integral and immutable part of the database development process, then maybe someday the database won't be the first thing to be fingered when there is a system slowdown.
Summary
Database design and implementation is the cornerstone of any data centric project (read 99.9% of business applications) and should be treated as such when you are developing. This article, while probably a bit preachy, is as much a reminder to me as it is to anyone else who reads it. Some of the tips, like planning properly, using proper normalization, using strong naming standards and documenting your work – these are things that even the best DBAs and data architects have to fight to make happen. In the heat of battle, when your manager's manager's manager is being berated for things taking too long to get started, it is not easy to push back and remind them that they pay you now, or they pay you later. These tasks pay dividends that are very difficult to quantify, because to quantify success you must fail first. And even when you succeed in one area, all too often other minor failures crop up in other parts of the project so that some of your successes don't even get noticed.
The tips covered here are ones that I have picked up over the years that have turned me from being mediocre to a good data architect/database programmer. None of them take extraordinary amounts of time (except perhaps design and planning) but they all take more time upfront than doing it the "easy way". Let's face it, if the easy way were that easy in the long run, I for one would abandon the harder way in a second. It is not until you see the end result that you realize that success comes from starting off right as much as finishing right.
 
Total sum in Gridview Footer in Asp.net C#
Using C#

private decimal dPageTotal = 0;  // running page total (class-level field)

protected void grd_RowDataBound(object sender, System.Web.UI.WebControls.GridViewRowEventArgs e)
{
    // GET THE RUNNING TOTAL OF PRICE FOR EACH PAGE.
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        Label lblPgTotal = (Label)e.Row.FindControl("lblTotalPrice");
        dPageTotal += Decimal.Parse(lblPgTotal.Text);
    }
    else if (e.Row.RowType == DataControlRowType.Footer)
    {
        // Display the total in the footer (assumes a footer Label with ID "lblTotal").
        ((Label)e.Row.FindControl("lblTotal")).Text = dPageTotal.ToString();
    }
}
 
Total sum in Gridview Footer in Asp.net C#
Using LINQ

var result = from p in st.prices
             select new { p.id, p.Name, p.price1 };
GridView1.DataSource = result;
GridView1.DataBind();

double sum = 0;
foreach (var item in result)
{
    sum += Convert.ToDouble(item.price1);
}

Label lblsum = (Label)GridView1.FooterRow.FindControl("Lbltotal");
lblsum.Text = sum.ToString();




asp.net Tutorial

Shared publicly  - 
 
Sending HTML formatted mails

Sending rich and colorful emails with your own logos and banners is not a very difficult task if you know how to send HTML formatted mails through .NET.
Follow the steps below:
 
Step 1: Design the format you want to send in HTML. Add the images, banners, etc. that you want to include.
 
Step 2: Import the namespace (using System.Net.Mail;) in your .cs page.
 
Step 3: Build the HTML in a string as shown below, then send the mail with HTML enabled in the body.
 
public string fnProjectRoot(string PATH_INFO)
       {
         string rootProject = "";
         string[] tmpStr = PATH_INFO.Split('/');
         rootProject = tmpStr[1];
         return "/" + rootProject + "/";
        }
public void sbDoMail()
        {
          string currentPath = "http://" + Request.ServerVariables["HTTP_HOST"] + fnProjectRoot(Request.ServerVariables["PATH_INFO"]);
          StringBuilder confirmMail = new StringBuilder();
          confirmMail.Append( "<table style='width: 100%; position: static; height: 100%'>" );
         confirmMail.Append( "<tr>" );
         confirmMail.Append( "<td style='width: 29px; height: 21px'>" );
         confirmMail.Append( " </td>" );
         confirmMail.Append( "<td class='smallbluetext1' style='height: 21px; text-decoration: underline'>" );
         confirmMail.Append( "<p style='background-color: silver'>" );
         confirmMail.Append( "Welcome and email validation email:</p>" );
         confirmMail.Append( "</td>" );
         confirmMail.Append( "<td>" );
         confirmMail.Append( " </td>" );
         confirmMail.Append( "</tr>" );
         confirmMail.Append( "<tr>" );
         confirmMail.Append( "<td>" );
         confirmMail.Append( "</td>" );
        confirmMail.Append( "<td>" );
        confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
        confirmMail.Append( "<img alt='' border='0' src="+ currentPath+"MFlogo[1].gif/>&nbsp;</p>" );
        confirmMail.Append( "</td>" );
        confirmMail.Append( "<td>" );
        confirmMail.Append( "</td>" );
        confirmMail.Append( "</tr>" );
        confirmMail.Append( "<tr>" );
        confirmMail.Append( "<td>" );
        confirmMail.Append( "</td>" );
        confirmMail.Append( "<td>" );
        confirmMail.Append( "<span style='font-size: 12pt;" );
        confirmMail.Append( "" );
        confirmMail.Append( "</span></td>" );
        confirmMail.Append( " <td>" );
        confirmMail.Append( "</td>" );
        onfirmMail.Append( "</tr>" );
        confirmMail.Append( "<tr>" );
       confirmMail.Append( "<td >" );
                  confirmMail.Append( "</td>" );
       confirmMail.Append( "<td class='smallnormalbrowntext'>" );
       confirmMail.Append( "<p>" );
        confirmMail.Append( "Dear&nbsp; <span class='headline' style='color:Blue;'> NAME </span>,</p>" );
        confirmMail.Append( "<p>" );
        confirmMail.Append( "<?xml namespace='' ns='urn:schemas-microsoft-com:office:office' prefix='o' ?><?xml namespace='' prefix='O' ?><o:p></o:p>" );
        confirmMail.Append( "</p>" );
        confirmMail.Append( "<p>" );
        confirmMail.Append( "You are welcome " );
        confirmMail.Append( " We are sending this mail to validate" );
        confirmMail.Append( "your email address given.</p>" );
        confirmMail.Append( "<p>" );
       confirmMail.Append( "Our e-mail validation is intended to confirm that the email entered in your profile" );
        confirmMail.Append( "is authentic. This procedure adds credence to your contact information.</p>" );
        confirmMail.Append( "<p>" );
        confirmMail.Append( "" );
         confirmMail.Append( "</td>" );
        confirmMail.Append( "<td >" );
        confirmMail.Append( "</td>" );
        confirmMail.Append( "</tr>" );
        confirmMail.Append( "<tr>" );
        confirmMail.Append( " <td style='width: 29px; height: 92px'>" );
       confirmMail.Append( "</td>" );
       confirmMail.Append( "<td style='height: 92px'>" );
       confirmMail.Append( " <table class='smallnormalbrowntext'>" );
        confirmMail.Append( "<tr>" );
        confirmMail.Append( "<td style='width: 251px'>" );
        confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
       confirmMail.Append( "<b style='<u>Login information<o:p></o:p></u></b></p>" );
       confirmMail.Append( "</td>" );
         confirmMail.Append( "<td rowspan='3' style='width: 52px'>" );
         confirmMail.Append( "</td>" );
         confirmMail.Append( "</tr>" );
         confirmMail.Append( "<tr>" );
         confirmMail.Append( "<td class='headline'>" );
         confirmMail.Append( "<span class='smallbluetext1' >Username:</span>" );
         confirmMail.Append( "</td>" );
         confirmMail.Append( "</tr>" );
         confirmMail.Append( "<tr>" );
        confirmMail.Append( "<td class='headline'>" );
       confirmMail.Append( "<span class='smallbluetext1'>Password:</span>" );
       confirmMail.Append( "" );
      confirmMail.Append( "</td>" );
     confirmMail.Append( "</tr>" );
     confirmMail.Append( "</table>" );
      confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
      confirmMail.Append( "</td>" );
      confirmMail.Append( "<td style='width: 23px; color: #000000; height: 92px'>" );
      confirmMail.Append( "</td>" );
     confirmMail.Append( "</tr>" );
     confirmMail.Append( "<tr style='color: #000000'>" );
      confirmMail.Append( "<td style='width: 29px'>" );
      confirmMail.Append( "</td>" );
                  confirmMail.Append( "<td class='smallnormalbrowntext'>" );
        confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
       confirmMail.Append( "With regards,</p>" );
       confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
      confirmMail.Append( "<b style='<span style='color: #0000ff '>" );
     confirmMail.Append( "team<o:p></o:p></span></b></p>" );
     confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
   confirmMail.Append( "<o:p></o:p>" );
   confirmMail.Append( "</p>" );
confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
     confirmMail.Append( "<o:p></o:p>" );
      confirmMail.Append( "</p>" );
      confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
       confirmMail.Append( "<span style='/span>Need Help ? Please write to us at" );
     confirmMail.Append( "" );
      confirmMail.Append( "</p>" );
       confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
       confirmMail.Append( "<span style='&nbsp; </span>" );
        confirmMail.Append( "</p>" );
        confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
       confirmMail.Append( "" );
       confirmMail.Append( "" );
       confirmMail.Append( "" );
        confirmMail.Append( "</p>" );
        confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt; text-indent: 0.5in'>" );
       confirmMail.Append( "" );
       confirmMail.Append( "</p>" );
       confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
       confirmMail.Append( "<span style='</span>&nbsp;</p>" );
                  confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt'>" );
      confirmMail.Append( "" );
      confirmMail.Append( "</p>" );
      confirmMail.Append( "<p class='MsoNormal' style='margin: 0in 0in 0pt; text-indent: 0.5in'>" );
      confirmMail.Append( "" );
      confirmMail.Append( "</p>" );
      confirmMail.Append( "</td>" );
      confirmMail.Append( "<td style='width: 23px'>" );
      confirmMail.Append( "</td>" );
      confirmMail.Append( "</tr>" );
     confirmMail.Append( "</table>" );
                 
                  string add = "username@gmail.com";
                  try
                  {
                        System.Net.Mail.MailMessage mMailMessage = new System.Net.Mail.MailMessage();
                        SmtpClient mSmtpClient = new SmtpClient();
                        mSmtpClient.Host = "192.168.10.1";
                        mSmtpClient.DeliveryMethod = SmtpDeliveryMethod.Network;

                        mMailMessage.From = new MailAddress("monalisab@mindfiresolutions.com");
                        mMailMessage.To.Add(new MailAddress(add));
                        mMailMessage.Subject = "Congratulations";
                        mMailMessage.Body = confirmMail.ToString();
                        mMailMessage.IsBodyHtml = true;//----------Line for sending html formatted mail
                        mSmtpClient.Send(mMailMessage);
                  }
                  catch (Exception ex)
                  {
                        Response.Write(ex);
                  }
            }

asp.net Tutorial

Shared publicly  - 
 
Methods in Global.asax
This blog is intended to shed some light on the various methods available in the global.asax file in ASP.NET. It's very important to understand the methods in global.asax so that we, as programmers, can handle application level events efficiently. I say application level events because global.asax is an application level file: the methods in it handle events for the application as a whole and are not specific to any aspx page. Some of the common methods, in the order in which they are executed, are listed below:
Application_Start
Application_BeginRequest
Application_AuthenticateRequest
Session_Start
Application_EndRequest
Session_End
Application_End
Application_Error
Now let's see what the major differences between these methods, or rather events, are. I should mention that these are actually events, not plain methods: the handlers run when a particular application event is triggered. Before we look at them, note that the class behind Global.asax is derived from a class called “HttpApplication”. The methods listed above are only the few that I am going to talk about here; a listing of other events can be found at the end of the blog. Now let's look at the events one by one.

Application_Start
The Application_Start event gets triggered only once during the life cycle of the application. This happens when the first request for any resource in the application arrives. A resource can be a page or an image in the application. When the very first request for a resource, say a web page, is made by a user, “Application_Start” is triggered; after that, this event is not executed again. If by any chance the server where the application is hosted is restarted, then this event is fired once more, i.e. when the very first request for any resource in the application is made after the server restart.

Application_BeginRequest
“Application_BeginRequest” is the second event, fired after “Application_Start”. Unlike “Application_Start”, “Application_BeginRequest” is triggered for each and every request which comes to the application. Since this method is fired for every request made to the application, you can use it to keep track of which resources are being accessed.

Application_AuthenticateRequest
“Application_AuthenticateRequest” is the next event in line which is triggered after “Application_BeginRequest” is triggered. “Application_AuthenticateRequest” is also fired for each and every request. This event can be used to write code in scenarios where you want to do something when the user is getting authenticated.

Session_Start
The next event in line, triggered after “Application_AuthenticateRequest”, is “Session_Start”. The Session_Start event is fired only when a new session for a user starts. Once “Session_Start” has fired for a user, it is not triggered again for that user's subsequent requests to resources within the application. The event is triggered again only when the user's session expires and the user then tries to access a resource in the application once more.
This event can be used when you want to do something when a user visits your site/application for the first time, or when their session starts. This event doesn't get triggered if you are not using sessions, which can be disabled in the web.config.

Application_EndRequest
The next event in line, fired once the request for the user has been processed, is “Application_EndRequest”. This event is the closing event of “Application_BeginRequest”. It is also fired for each and every request which comes to the application.

Session_End
This is the closing event of the “Session_Start” event. Whenever a user's session in the application expires, this event gets fired. So anything you want to do when the user's session expires can be coded here. The session expiration time can be set in the web.config file; by default the session timeout is 20 minutes.

Application_End
The same as “Application_Start”, “Application_End” is executed only once, when the application is unloaded. This event is the end event of “Application_Start”. This event is normally fired when the application is taken offline or when the server is stopped.

Application_Error
Now we come to the last event mentioned in this blog, and that is “Application_Error”. This event gets fired when any unhandled exception/error occurs anywhere in the application. Unhandled here means exceptions which are not caught using a try/catch block. Also, if you have custom errors enabled in your application, i.e. in the web.config file, then the configuration in web.config takes precedence and all errors will be directed to the page mentioned in the <customErrors> tag.
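To make the wiring concrete, a minimal Global.asax code-behind might look like the sketch below (the values stored and the trace call are just placeholders):

using System;
using System.Web;

// Global.asax.cs - a minimal sketch of the handlers discussed above.
public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Runs once, on the first request after the application starts.
        Application["StartedAt"] = DateTime.Now;
    }

    protected void Session_Start(object sender, EventArgs e)
    {
        // Runs once per user session.
        Session["VisitStart"] = DateTime.Now;
    }

    protected void Application_Error(object sender, EventArgs e)
    {
        // Runs for any unhandled exception anywhere in the application.
        Exception ex = Server.GetLastError();
        System.Diagnostics.Trace.WriteLine(ex);
    }

    protected void Session_End(object sender, EventArgs e)
    {
        // Runs when a session expires (only raised for InProc session state).
    }
}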
Let's see with an example how these events get fired.
Suppose “A”, “B” and “C” are users who are going to access a site named “My Site”. “A” is the very first user to visit “My Site” and he/she is accessing the “productlist.aspx” page. At this time the flow of the request is as follows: the “Application_Start” event is triggered, since “A” is the very first user to visit the application; after this “Application_BeginRequest”, then “Application_AuthenticateRequest”, then “Session_Start”; the “productlist.aspx” page level events are processed; and then the “Application_EndRequest” event is triggered. After accessing “productlist.aspx”, if “A” accesses some other page then for those page requests the flow will be first “Application_BeginRequest”, “Application_AuthenticateRequest”, then the page processing (page level events) and then “Application_EndRequest”. For every subsequent request this pattern is followed.

When “B” accesses some resource in the site, say “default.aspx”, then first “Application_BeginRequest”, second “Application_AuthenticateRequest”, third “Session_Start”, then the “default.aspx” page level events are executed, and after that “Application_EndRequest” is executed. If, after accessing “default.aspx”, “B” accesses “productlist.aspx”, then first “Application_BeginRequest”, second “Application_AuthenticateRequest”, then “productlist.aspx”, and then the “Application_EndRequest” event is triggered. If he refreshes the page, the same events are executed in the same order.
The above same process is repeated for “C” also.
Suppose you have an unhandled exception and you don't have custom errors enabled in web.config. Then, when a user accesses a resource, the flow will be first “Application_BeginRequest”, “Application_AuthenticateRequest”, then the page level events; when an error occurs in the page it goes to “Application_Error”, and after that “Application_EndRequest”.
The order mentioned above is how the events are triggered. So with this I hope you would have got a clear idea on how these events are triggered.
Some other events which are part of the HttpApplication class are as follows
PostAuthenticateRequest
AuthorizeRequest
PostAuthorizeRequest
ResolveRequestCache
PostResolveRequestCache
PostMapRequestHandler
AcquireRequestState
PostAcquireRequestState
PreRequestHandlerExecute
PostRequestHandlerExecute
ReleaseRequestState
PostReleaseRequestState
UpdateRequestCache
PostUpdateRequestCache
LogRequest. (Supported in IIS 7.0 only.)
PostLogRequest (Supported in IIS 7.0 only.)

asp.net Tutorial

Shared publicly  - 
 
Instructions

SQL database-driven websites are at risk.
Any web page which passes parameters to a database can be vulnerable to attacks. This includes e-commerce shopping carts or any other website that has a form for login, search, etc. Any SQL database-driven website is at risk of hackers who may be able to enter into the database through a back door. Usually these back doors are present in URL querystrings and form inputs, such as Login forms, Search forms, or other user input textboxes that can communicate with a database.

An overview of hacking.
Generally, a hacker can enter bogus characters into the URL querystring or a textbox. The bogus input is then interpreted as SQL rather than ordinary user data and is executed by the unsuspecting database. As a result, the website may break and display an error, allowing the hacker to glean private information about the database. Even worse, the hacker's hazardous scripts may actually be executed on the database, causing security breaches and/or permanent damage.



How hackers do it.
The first goal of a hacker is to repeatedly try to break a website, causing it to display a variety of valuable errors that give away private database details. In this way, he can gain insight into the structure of the database and ultimately create a map or footprint of all its tables and columns. The second goal of the hacker is to actually manipulate the database by executing scripts in malicious ways. With control over the database, the hacker may possibly steal credit card numbers, erase data or infect it with viruses, among other nasty things. In essence, the URL querystring and textbox are the two backdoors into a database. Getting errors and manipulating the backdoors are the two methods used by hackers to ultimately destroy a database.

Hack your own website.
Let's look at how a hacker might go about breaking into a website. Using the first technique described, he can hack the URL querystring and cause an error to be displayed. You can do a simple test to hack into your own website via the URL querystring. All you have to do is type something else directly into the address bar at the end of your querystring. Type your URL like the following example and press enter:
http://www.mywebsite.com/bookreports.asp?reportID=21
Now simply add a single quote to the end of the querystring and press enter:
http://www.mywebsite.com/bookreports.asp?reportID=21'

Generate an error.
As predicted, you may have successfully broken your website and received an error as follows.
Error Type:
Microsoft OLE DB Provider for ODBC Drivers (0x80040E14)
[Microsoft][ODBC SQL Server Driver][SQL Server]Unclosed quotation mark before the character string ' AND users.userID=reports.reportsID'.
/bookreports.asp, line 20
The single quote causes an unclosed quotation mark error and now the once-secret table names and column names of your database are publicly visible. After generating a series of these kinds of valuable errors, a hacker can piece together private database details which will ultimately help him break into and wreak havoc on the database.

Hide website errors.
The single most effective solution for keeping the private details of your database from getting into the hands of a hacker is to set up a custom error page for your website. This way, a hacker will never see any detailed error messages. If you do nothing else, this is the number one thing that every website must have. Otherwise, you are giving the hacker an open invitation into your database and practically offering him all the information he needs to launch an attack.

Setup custom error pages.
Some hosting services automatically use custom error pages to help protect your security. To set up your own custom error page, you will need to consult your web host for instructions. Generally, you will create a new HTML page that looks the way you please and says something short and sweet, like 'Sorry, the page you have requested is unavailable.' Then save it as error404.htm and upload it to your server. Following the instructions from your host, you will change the website settings to point to the new error page. This will stop many hackers right in their tracks.
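In ASP.NET specifically (the platform this stream focuses on), the same idea is configured with the customErrors element in web.config; the page name below is just the example file from this step:

<configuration>
  <system.web>
    <!-- Show friendly pages to remote visitors; keep detailed errors for local debugging. -->
    <customErrors mode="RemoteOnly" defaultRedirect="error404.htm">
      <error statusCode="404" redirect="error404.htm" />
    </customErrors>
  </system.web>
</configuration>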

Manipulate the URL querystring.
Besides fishing for errors, a hacker can enter even more dangerous code than a simple single quote into the URL querystring. In an effort to execute malicious scripts on a database, a variety of creative coding is employed, such as %20HAVING%201=1 or maybe %20;shutdown with no wait-- or much worse. Once the hacker is able to execute scripts, the vulnerable database is like putty in their hands. The hacker never has to know the database login or connection string because he is using the URL querystring, which already has an open connection.
Warning: Test this on your own website only if you really want to erase a table in your database. Simply enter the following text after the end of your URL querystring and press enter. Be sure to use the real name of one of your tables (preferably a test table!) in place of myTablename.
http://www.mywebsite.com/bookreports.asp?reportID=21'; drop table myTablename--
Your table is permanently deleted.

Manipulate the form input.
The other most common point of entry besides the URL querystring is the form input. A hacker may manipulate any textbox within an HTML form. A search box or a login form with username and password fields are all prime targets. The hacker can enter bogus characters into the textbox and submit the form. The input is then interpreted as SQL rather than ordinary user data and executed by the database. Again, this attack will either cause an error so he can glean private information about your database, or it may actually insert hazardous scripts and wreak havoc on the database.
Warning: Test this on your own website only if you really want to erase a table in your database. Simply enter the following text into your textbox (say, a search box or username box) and then submit the form. Be sure to use the real name of one of your tables (preferably a test table!) in place of myTablename.
fred'; drop table myTablename--
Your table is permanently deleted.

10. Block input containing malicious code.
By now you probably have a good idea of how much damage a hacker can do, and you are ready and willing to do whatever it takes to stop him. The number one way to keep a hacker from manipulating the URL querystring and textboxes is to block his input. But how do you determine who the hackers are, what they will input, and whether or not it is safe? You cannot. So you must assume that all user input is potentially dangerous. A common saying in the programming world is that ALL INPUT IS EVIL, so it must be treated with caution. Everything from everybody should be checked, every time, to ensure dangerous code does not slip in. This is accomplished by checking all input submitted via a querystring or form and rejecting or removing unsafe characters before it ever reaches the database. If this sounds like a lot of trouble, you are right, but it is the price we pay to protect our websites and databases from the wrath of hackers. It is your responsibility as the webmaster to ensure that only clean, safe input is allowed to reach your database.

11. Input validation.
To check whether the input entered into the URL querystring or a textbox is safe, use input validation rules. In other words, ASP code on the page validates the input collected from the querystring or form to make sure it contains only safe characters. Once the input is deemed safe, it can be stored in a new variable, inserted into the SQL string and sent to the database. For more details about validation, see my companion article in the resources section or at http://www.ehow.com/how_4434953_block-hackers-asp-validation.html.
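As a rough illustration of that flow (the field name, the 50-character limit and the bookReports table are assumptions; the companion article has the full validation code):

<%
Dim rawInput, safeInput, sql
rawInput = Request.Form("author")             ' "author" is an illustrative field name

' A deliberately tiny rule for illustration only: non-empty, at most 50
' characters, and no apostrophes. Real rules should be stricter (see step 12).
If Len(rawInput) = 0 Or Len(rawInput) > 50 Or InStr(rawInput, "'") > 0 Then
    Response.Write "Invalid input."
    Response.End
End If

safeInput = rawInput                          ' store the validated value in a new variable
sql = "SELECT title FROM bookReports WHERE author = '" & safeInput & "'"
' ... open the connection and execute sql as usual
%>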

12. The wash and rinse cycle.
Input validation should be a two-part process, like a wash and rinse cycle: first check that the input contains only safe characters, then check it again for known bad strings. See the resources at the end of this article for a more in-depth discussion of this method. The code for the good-character function and the bad-string function can be found in my companion article in the resources section or at http://www.ehow.com/how_4434953_block-hackers-asp-validation.html.
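Purely as a sketch of what such a pair of checks might look like (this is not the companion article's code; the allowed pattern and the keyword list are assumptions):

<%
' Pass 1 ("wash"): accept only input made of characters we expect.
Function HasOnlyGoodChars(inputValue)
    Dim re
    Set re = New RegExp
    re.Pattern = "^[A-Za-z0-9 _\-]+$"
    HasOnlyGoodChars = re.Test(inputValue)
End Function

' Pass 2 ("rinse"): reject input containing known-dangerous strings.
Function HasBadStrings(inputValue)
    Dim badWords, i
    badWords = Array("--", ";", "xp_", "drop", "insert", "delete", "shutdown")
    HasBadStrings = False
    For i = 0 To UBound(badWords)
        If InStr(1, LCase(inputValue), badWords(i)) > 0 Then
            HasBadStrings = True
            Exit Function
        End If
    Next
End Function

Dim userInput
userInput = Request.Form("search")
If HasOnlyGoodChars(userInput) And Not HasBadStrings(userInput) Then
    ' safe to store userInput in a new variable and build the SQL string
End If
%>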

13. Filter characters.
Another method, which can be used in conjunction with the two functions above but is considered very weak on its own, is to sanitize the input by filtering or escaping it. A well-known threat is the single quote or apostrophe, because it breaks the SQL statement. The following ASP example renders the single quote harmless by replacing it with two single quotes:

' Double up single quotes so they read as literal apostrophes, not string delimiters.
newSafeString = Replace(searchInput, "'", "''")

Other variations on the Replace approach include stripping out the script tag and replacing it with a space, or filtering out characters such as the dollar sign $, quotation mark ", semicolon ;, apostrophe ', the left and right angle brackets < >, the left and right parentheses ( ), the pound sign # and the ampersand &. You can also convert these characters to their HTML entities. Use the solution that best fits your website, or consult a professional.
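For the HTML-entity variant mentioned above, classic ASP's built-in Server.HTMLEncode can do the conversion; a brief sketch (the "comments" field name is illustrative):

<%
' Convert angle brackets, ampersands and double quotes to their HTML entities
' before the value is echoed back to the page, so injected markup is displayed
' as text rather than rendered or executed.
Dim displaySafe
displaySafe = Server.HTMLEncode(Request.Form("comments"))
Response.Write displaySafe
%>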

14. Finally, there are a few other security measures you can research and explore on your own. Remember that a hacker can easily save a copy of your web page, modify the HTML and JavaScript, and submit the form from the altered copy. Therefore, never rely on JavaScript alone for input validation, since it can easily be removed; always duplicate any JavaScript validation with ASP validation on the server. Hidden input fields are a threat in the same way, since they can easily be altered to include bogus code. Other tips: never give away clues about your database, such as making your input field names the same as the database column names, and always set a maximum length for inputs and truncate the excess.
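A small sketch of the max-length tip, enforced on the server regardless of what client-side JavaScript or a maxlength attribute claims to have done (the 50-character limit is arbitrary):

<%
Dim comment
comment = Request.Form("comments")

' Enforce the maximum length on the server; the browser-side limit can be bypassed.
If Len(comment) > 50 Then
    comment = Left(comment, 50)
End If
%>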

15. If you would like to pursue more advanced security techniques, please see the resources at the end of this article. Topics discussed include password policies, buffer overruns, creative table and column names, table-name aliases, setting and checking data types, .bak files, stored procedures with parameters, and log files.
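As one example from that list, a stored procedure called with parameters keeps user input out of the SQL text entirely. A sketch; the procedure name usp_GetReport, its parameter and the connection string are hypothetical:

<%
Dim cmd, rs, reportID
reportID = Request.QueryString("reportID")
If Not IsNumeric(reportID) Then Response.End   ' validate first, as in step 8

Set cmd = Server.CreateObject("ADODB.Command")
cmd.ActiveConnection = "your-connection-string-here"    ' placeholder connection string
cmd.CommandText = "usp_GetReport"                        ' hypothetical stored procedure
cmd.CommandType = 4                                      ' 4 = adCmdStoredProc

' The report ID is passed as a typed parameter, never as part of the SQL text.
cmd.Parameters.Append cmd.CreateParameter("@reportID", 3, 1, , CLng(reportID))   ' 3 = adInteger, 1 = adParamInput
Set rs = cmd.Execute
%>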


Tips & Warnings
As always, please remember that databases can be highly vulnerable to hackers. The number and frequency of SQL injection and XSS (cross-site scripting) attacks are on the rise, so make sure you have set up custom error pages and use server-side (ASP) input validation to help ensure database security.
Read more: How to Protect Your Website from Hacker Attacks | eHow.com http://www.ehow.com/how_4434719_protect-website-hacker-attacks.html