Friday, November 20, 2009

Change the Background Row Color of the GridView control using MouseEvents

This article shows how to change the background color of a GridView row when the user moves the mouse over it.

First, drag and drop a GridView control onto a web page. After binding data to the GridView, select the control, open its Events list, and double-click RowDataBound.

Double-clicking the RowDataBound event generates the following handler in the .aspx.cs file:


protected void GridView2_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        // Highlight the row on hover and restore the color when the mouse leaves
        e.Row.Attributes.Add("onmouseover", "this.style.backgroundColor='Silver'");
        e.Row.Attributes.Add("onmouseout", "this.style.backgroundColor='green'");
    }
}


How to change the address bar icon of your website

Introduction
This post shows how to change the address bar icon (favicon) of your website.
Steps to change the address bar icon:
(1) Create a favicon.ico file.
(2) Add these lines inside the head tag of the page. If you have a master page, add these lines to the master page:
<link rel="shortcut icon" type="image/x-icon" href="/favicon.ico">
<link rel="icon" type="image/x-icon" href="/favicon.ico">
(3) The second line is added because some browsers do not display your custom icon with the first line alone.

How to log off, reboot, or shut down a machine using JavaScript code

function Shutdown()
{
    // Requires Internet Explorer with unsafe ActiveX scripting enabled (see the note below)
    var ws = new ActiveXObject("WScript.Shell");
    ws.Exec("shutdown.exe -s -t 30");
}

Just call the above function whenever required.

-s: shut down
-t: time limit. In the code above the time limit is 30 seconds: the machine shows an alert for 30 seconds warning that it is shutting down. Use -l instead of -s to log off, and use -r to reboot.

Note: The "Initialize and script ActiveX controls not marked as safe" option must be set to Enable. In the browser, go to the Tools menu -> Internet Options -> Security -> select Local intranet -> click Custom Level -> find "Initialize and script ActiveX controls not marked as safe" and enable it. Otherwise you will get the error "Automation server can't create object".
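Since only the shutdown.exe flag changes between shutting down, logging off, and rebooting, the three variants can share one helper. This is just a sketch: buildShutdownCommand is a name of my own invention, and the flags are the ones described above.

```javascript
// Build the shutdown.exe command line for a given action.
// -s = shut down, -l = log off, -r = reboot; -t sets the warning delay in seconds.
function buildShutdownCommand(action, seconds) {
  var flags = { shutdown: "-s", logoff: "-l", reboot: "-r" };
  if (!flags[action]) {
    throw new Error("Unknown action: " + action);
  }
  return "shutdown.exe " + flags[action] + " -t " + seconds;
}

// Pass the result to ws.Exec(...) exactly as in the Shutdown() function above:
var command = buildShutdownCommand("reboot", 30); // "shutdown.exe -r -t 30"
```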

Thursday, November 19, 2009

How to find out all table details (table name, total rows in table, table size (in KB)) for any given database.

DECLARE @DBTABLES AS TABLE
(
    SNO INT IDENTITY(1,1),
    TABLENAME VARCHAR(256)
)

-- Collect every base table in the current database as schema.table
INSERT INTO @DBTABLES
SELECT table_schema + '.' + table_name
FROM information_schema.TABLES
WHERE table_type = 'BASE TABLE'

-- Holds the output of sp_spaceused for each table
DECLARE @DBTABLESINFO AS TABLE
(
    SNO INT IDENTITY(1,1),
    TABLENAME VARCHAR(256),
    ROWS CHAR(11),
    reserved VARCHAR(18),
    data VARCHAR(18),
    index_size VARCHAR(18),
    unused VARCHAR(18)
)

DECLARE @COUNT INT, @CURRENT INT, @TABLENAME VARCHAR(256)
SELECT @COUNT = COUNT(*) FROM @DBTABLES
SET @CURRENT = 1

WHILE (@COUNT >= @CURRENT)
BEGIN
    SELECT @TABLENAME = TABLENAME FROM @DBTABLES WHERE SNO = @CURRENT
    INSERT INTO @DBTABLESINFO
    EXEC sp_spaceused @TABLENAME
    SET @CURRENT = @CURRENT + 1
END

-- sp_spaceused reports sizes as strings such as '123 KB';
-- strip the trailing ' KB' so the size can be sorted numerically
SELECT
    TABLENAME,
    ROWS,
    DATA,
    CONVERT(BIGINT, LEFT(DATA, LEN(DATA) - 3))
FROM @DBTABLESINFO ---where TableName like '%_1'
ORDER BY 4 DESC

Wednesday, November 18, 2009

How to check details of Most Recent Backups and # of days since ANY type of backup on any SQL server

SELECT B.name as Database_Name, ISNULL(STR(ABS(DATEDIFF(day, GetDate(),MAX(backup_finish_date)))), 'NEVER') as DaysSinceLastBackup,
ISNULL(Convert(char(19), MAX(backup_finish_date), 100), 'NEVER') as LastBackupDate,
case
when type='D' then '** FULL **'
when type='I' then 'DIFFERENTIAL'
when type='L' then 'LOG'
end as Backup_Type,
case
when status > 16 then 'Check DB Status' -- Alert that DB might be ReadOnly, Offline etc...
else ' '
end as 'DB Status'
FROM master.dbo.sysdatabases B LEFT OUTER JOIN msdb.dbo.backupset A ON A.database_name = B.name --AND A.type = 'D'
where B.name not like '%skip these%'
GROUP BY B.name , a.type, status
ORDER BY B.name , LastBackupDate desc,a.type, status

Certificate in Software Function Point Estimation

The success of any software project largely depends on effective estimation of project effort, time, and cost. Estimation helps in setting realistic targets for completing a project. The most important estimation that is required to be fairly accurate is that of effort and schedule. This enables us to obtain a reasonable idea of the project cost.

If the effort and schedule estimates are inaccurate, it can impact the project cost drastically and result in the project being delayed. Therefore, before initiating a project, it is essential to know how long it would take to complete the project, what would be the development cost, and how many resources would be required. Using the process of effort and schedule estimation, we can calculate fairly accurate estimates.

Software size is an important input for estimating the effort, schedule, and cost of software. There are various methods and techniques available to estimate these factors.

The function point technique helps estimate software size, which in turn drives the effort, schedule, and cost estimates.

Target Audience

  • Software engineers with 3 - 4 years of experience
  • Individuals who intend to grow as software project managers


Prerequisites

  • Experience
    • 3 - 4 years of experience in software industry
  • Education
    • Graduate or equivalent
    • Should have basic knowledge of
      • Software development lifecycle
      • Estimation concepts
      • Project management


Application Validity

Each application is valid for a period of One Year.


Recommended Reading

  • EdistaLearning Modules
    • ES101: Software Size Estimation using FPA
    • ES102: Software Effort and Schedule Estimation
    • ES103: Effort and Schedule Estimation using COCOMO II

  • Books and White Papers
    • Function point training booklets, Longstreet D.
    • Function Point Counting Practices Manual, International Function Point Users Group
    • Software Requirements and Estimation, Kishore S. and Naik Rajesh
    • Software Engineering Economics, Boehm, B.W.
    • Software Engineering - A practitioner's approach, Pressman, R.S.
    • Working Schedule Handbook, U. S. Army
    • COCOMO II Model Definition Manual, Center for Software Engineering
    • Software Cost Estimation with COCOMO II, Barry Boehm et al.
    • Measures for Excellence: Reliable


Format of Exam

  • Exam duration: 2 hours
    (Breakup of Duration: 40 min Objective, 40 min Subjective, 40 min Case study = 120 minutes)
  • Question paper consists of following sections:
    • Section 1 Objective: 40% weightage (40 questions)
    • Section 2 Subjective: 40% weightage (10 to 12 questions)
    • Section 3 Case Study: 20% weightage (2 questions; 3 options would be provided to choose from)
  • Passing percentage
    • For each of the 3 sections: 60%
    • Overall percentage in certification exam: 75%

An Introduction to Function Point Analysis

The purpose of this article and video is to provide an introduction to Function Point Analysis and its application in non-traditional computing situations. Software engineers have been searching for a metric that is applicable for a broad range of software environments. The metric should be technology independent and support the need for estimating, project management, measuring quality and gathering requirements. Function Point Analysis is rapidly becoming the measure of choice for these tasks.

Function Point Analysis has been proven as a reliable method for measuring the size of computer software. In addition to measuring output, Function Point Analysis is extremely useful in estimating projects, managing change of scope, measuring productivity, and communicating functional requirements.

There have been many misconceptions regarding the appropriateness of Function Point Analysis in evaluating emerging environments such as real time embedded code and Object Oriented programming. Since function points express the resulting work-product in terms of functionality as seen from the user's perspective, they are independent of the tools and technologies used to deliver it.

The following provides an introduction to Function Point Analysis and is followed by further discussion of potential benefits.
Introduction to Function Point Analysis
One of the initial design criteria for function points was to provide a mechanism that both software developers and users could utilize to define functional requirements. It was determined that the best way to gain an understanding of the users' needs was to approach their problem from the perspective of how they view the results an automated system produces. Therefore, one of the primary goals of Function Point Analysis is to evaluate a system's capabilities from a user's point of view. To achieve this goal, the analysis is based upon the various ways users interact with computerized systems. From a user's perspective a system assists them in doing their job by providing five (5) basic functions. Two of these address the data requirements of an end user and are referred to as Data Functions. The remaining three address the user's need to access data and are referred to as Transactional Functions.

The Five Components of Function Points

Data Functions
  • Internal Logical Files
  • External Interface Files
Transactional Functions
  • External Inputs
  • External Outputs
  • External Inquiries
Internal Logical Files - The first data function allows users to utilize data they are responsible for maintaining. For example, a pilot may enter navigational data through a display in the cockpit prior to departure. The data is stored in a file for use and can be modified during the mission. Therefore the pilot is responsible for maintaining the file that contains the navigational information. Logical groupings of data in a system, maintained by an end user, are referred to as Internal Logical Files (ILF).

External Interface Files - The second Data Function a system provides an end user is also related to logical groupings of data. In this case the user is not responsible for maintaining the data. The data resides in another system and is maintained by another user or system. The user of the system being counted requires this data for reference purposes only. For example, it may be necessary for a pilot to reference position data from a satellite or ground-based facility during flight. The pilot does not have the responsibility for updating data at these sites but must reference it during the flight. Groupings of data from another system that are used only for reference purposes are defined as External Interface Files (EIF).

The remaining functions address the user's capability to access the data contained in ILFs and EIFs. This capability includes maintaining, inquiring and outputting of data. These are referred to as Transactional Functions.

External Input - The first Transactional Function allows a user to maintain Internal Logical Files (ILFs) through the ability to add, change and delete the data. For example, a pilot can add, change and delete navigational information prior to and during the mission. In this case the pilot is utilizing a transaction referred to as an External Input (EI). An External Input gives the user the capability to maintain the data in ILF's through adding, changing and deleting its contents.

External Output - The next Transactional Function gives the user the ability to produce outputs. For example a pilot has the ability to separately display ground speed, true air speed and calibrated air speed. The results displayed are derived using data that is maintained and data that is referenced. In function point terminology the resulting display is called an External Output (EO).

External Inquiries - The final capability provided to users through a computerized system addresses the requirement to select and display specific data from files. To accomplish this a user inputs selection information that is used to retrieve data that meets the specific criteria. In this situation there is no manipulation of the data. It is a direct retrieval of information contained on the files. For example if a pilot displays terrain clearance data that was previously set, the resulting output is the direct retrieval of stored information. These transactions are referred to as External Inquiries (EQ).

In addition to the five functional components described above there are two adjustment factors that need to be considered in Function Point Analysis.

Functional Complexity - The first adjustment factor considers the Functional Complexity for each unique function. Functional Complexity is determined based on the combination of data groupings and data elements of a particular function. The number of data elements and unique groupings are counted and compared to a complexity matrix that will rate the function as low, average or high complexity. Each of the five functional components (ILF, EIF, EI, EO and EQ) has its own unique complexity matrix. The following is the complexity matrix for External Outputs.

              1-5 DETs    6-19 DETs   20+ DETs
0 or 1 FTRs   L           L           A
2 or 3 FTRs   L           A           H
4+ FTRs       A           H           H

Complexity    UFP
L (Low)       4
A (Average)   5
H (High)      7
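The matrix lookup above can be sketched as a small function (the names eoComplexity and eoUfp are my own; the thresholds and weights are those of the External Output matrix shown above):

```javascript
// Rate an External Output as Low/Average/High from its FTR and DET counts,
// then map the rating to Unadjusted Function Points (UFP).
function eoComplexity(ftrs, dets) {
  var row = ftrs <= 1 ? 0 : (ftrs <= 3 ? 1 : 2);  // 0-1, 2-3, 4+ FTRs
  var col = dets <= 5 ? 0 : (dets <= 19 ? 1 : 2); // 1-5, 6-19, 20+ DETs
  var matrix = [
    ["L", "L", "A"],
    ["L", "A", "H"],
    ["A", "H", "H"]
  ];
  return matrix[row][col];
}

var eoUfp = { L: 4, A: 5, H: 7 };

// The ground speed display example: 3 FTRs and 20 DETs rate High, worth 7 UFPs
var rating = eoComplexity(3, 20); // "H"
var points = eoUfp[rating];       // 7
```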

Using the examples given above and their appropriate complexity matrices, the function point count for these functions would be:

Function Name                  Function Type   Record Element Type   Data Element Type   File Types Referenced   Unadjusted FPs
Navigational data              ILF             3                     36                  n/a                     10
Positional data                EIF             1                     3                   n/a                     5
Navigational data - add        EI              n/a                   36                  1                       4
Navigational data - change     EI              n/a                   36                  1                       4
Navigational data - delete     EI              n/a                   3                   1                       3
Ground speed display           EO              n/a                   20                  3                       7
Air speed display              EO              n/a                   20                  3                       7
Calibrated air speed display   EO              n/a                   20                  3                       7
Terrain clearance display      EQ              n/a                   1                   1                       3
Total unadjusted count                                                                                          50 UFPs


All of the functional components are analyzed in this way and added together to derive an Unadjusted Function Point count.

Value Adjustment Factor - The Unadjusted Function Point count is multiplied by the second adjustment factor called the Value Adjustment Factor. This factor considers the system's technical and operational characteristics and is calculated by answering 14 questions. The factors are:

1. Data Communications
The data and control information used in the application are sent or received over communication facilities.

2. Distributed Data Processing
Distributed data or processing functions are a characteristic of the application within the application boundary.

3. Performance
Application performance objectives, stated or approved by the user, in either response or throughput, influence (or will influence) the design, development, installation and support of the application.

4. Heavily Used Configuration
A heavily used operational configuration, requiring special design considerations, is a characteristic of the application.

5. Transaction Rate
The transaction rate is high and influences the design, development, installation and support.

6. On-line Data Entry
On-line data entry and control information functions are provided in the application.

7. End -User Efficiency
The on-line functions provided emphasize a design for end-user efficiency.

8. On-line Update
The application provides on-line update for the internal logical files.

9. Complex Processing
Complex processing is a characteristic of the application.

10. Reusability
The application and the code in the application have been specifically designed, developed and supported to be usable in other applications.

11. Installation Ease
Conversion and installation ease are characteristics of the application. A conversion and installation plan and/or conversion tools were provided and tested during the system test phase.

12. Operational Ease
Operational ease is a characteristic of the application. Effective start-up, backup and recovery procedures were provided and tested during the system test phase.

13. Multiple Sites
The application has been specifically designed, developed and supported to be installed at multiple sites for multiple organizations.

14. Facilitate Change
The application has been specifically designed, developed and supported to facilitate change.

Each of these factors is scored based on its influence on the system being counted. The resulting score can increase or decrease the Unadjusted Function Point count by up to 35%. This calculation provides us with the Adjusted Function Point count.
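In the standard IFPUG calculation, each of the 14 characteristics is rated 0 to 5, the ratings are summed into a Total Degree of Influence, and the Value Adjustment Factor is 0.65 + 0.01 × that sum, which is where the ±35% range comes from. A sketch (the function name is my own):

```javascript
// Adjusted Function Points from the unadjusted count and the
// 14 General System Characteristic ratings (each rated 0-5).
function adjustedFunctionPoints(ufp, ratings) {
  var tdi = ratings.reduce(function (sum, r) { return sum + r; }, 0);
  var vaf = 0.65 + 0.01 * tdi; // 0.65 (all zeros) up to 1.35 (all fives)
  return ufp * vaf;
}

// e.g. the 50 UFPs counted above with every characteristic rated 3:
var afp = adjustedFunctionPoints(50, [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]); // 50 * 1.07, about 53.5
```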

An Approach to Counting Function Points
There are several approaches used to count function points. Q/P Management Group, Inc. has found that a structured workshop conducted with people who are knowledgeable of the functionality provided through the application is an efficient, accurate way of collecting the necessary data. The workshop approach allows the counter to develop a representation of the application from a functional perspective and educate the participants about function points.

Function point counting can be accomplished with minimal documentation. However, the accuracy and efficiency of the counting improves with appropriate documentation. Examples of appropriate documentation are:
  • Design specifications
  • Display designs
  • Data requirements (Internal and External)
  • Description of user interfaces
Function point counts are calculated during the workshop and documented with both a diagram that depicts the application and worksheets that contain the details of each function discussed.

Benefits of Function Point Analysis
Organizations that adopt Function Point Analysis as a software metric realize many benefits including: improved project estimating; understanding project and maintenance productivity; managing changing project requirements; and gathering user requirements. Each of these is discussed below.

Estimating software projects is as much an art as a science. While there are several environmental factors that need to be considered in estimating projects, two key data points are essential. The first is the size of the deliverable. The second addresses how much of the deliverable can be produced within a defined period of time. Size can be derived from Function Points, as described above. The second requirement for estimating is determining how long it takes to produce a function point. This delivery rate can be calculated based on past project performance or by using industry benchmarks. The delivery rate is expressed in function points per hour (FP/Hr) and can be applied to similar proposed projects to estimate effort (i.e., Project Hours = estimated project function points ÷ FP/Hr).
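The effort formula above reduces to a one-liner. The delivery rate in this example is purely illustrative, and the function name is my own:

```javascript
// Effort estimate: project size in function points divided by the
// historical delivery rate in function points per hour (FP/Hr).
function estimateProjectHours(functionPoints, fpPerHour) {
  return functionPoints / fpPerHour;
}

// e.g. a 500 FP project at a delivery rate of 0.5 FP/Hr:
var hours = estimateProjectHours(500, 0.5); // 1000 hours
```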

Productivity measurement is a natural output of Function Points Analysis. Since function points are technology independent they can be used as a vehicle to compare productivity across dissimilar tools and platforms. More importantly, they can be used to establish a productivity rate (i.e. FP/Hr) for a specific tool set and platform. Once productivity rates are established they can be used for project estimating as described above and tracked over time to determine the impact continuous process improvement initiatives have on productivity.

In addition to delivery productivity, function points can be used to evaluate the support requirements for maintaining systems. In this analysis, productivity is determined by calculating the number of function points one individual can support for a given system in a year (i.e. FP/FTE year). When compared with other systems, these rates help to identify which systems require the most support. The resulting analysis helps an organization develop a maintenance and replacement strategy for those systems that have high maintenance requirements.

Managing Change of Scope for an in-process project is another key benefit of Function Point Analysis. Once a project has been approved and the function point count has been established, it becomes a relatively easy task to identify, track and communicate new and changing requirements. As requests come in from users for new displays or capabilities, function point counts are developed and applied against the rate. This result is then used to determine the impact on budget and effort. The user and the project team can then determine the importance of the request against its impact on budget and schedule. At the conclusion of the project the final function point count can be evaluated against the initial estimate to determine the effectiveness of requirements gathering techniques. This analysis helps to identify opportunities to improve the requirements definition process.

Communicating Functional Requirements was the original objective behind the development of function points. Since it avoids technical terminology and focuses on user requirements it is an excellent vehicle to communicate with users. The techniques can be used to direct customer interviews and document the results of Joint Application Design (JAD) sessions. The resulting documentation provides a framework that describes user and technical requirements.

In conclusion, Function Point Analysis has proven to be an accurate technique for sizing, documenting and communicating a system's capabilities. It has been successfully used to evaluate the functionality of real-time and embedded code systems, such as robot based warehouses and avionics, as well as traditional data processing. As computing environments become increasingly complex, it is proving to be a valuable tool that accurately reflects the systems we deliver and maintain.

Sunday, October 11, 2009

How to use subversion with visual studio

Friday, August 28, 2009

PIVOT table result

Table
create table source(id int identity(1,1)not null,postcode varchar(10),cust varchar(10))

insert into source(postcode,cust)values('AB10','AMI004');
insert into source(postcode,cust)values('AB10','CLI001');
insert into source(postcode,cust)values('AB10','HCL001');
insert into source(postcode,cust)values('AB10','MIL003');
insert into source(postcode,cust)values('AB10','OSB001');
insert into source(postcode,cust)values('AB11','AMI004');
insert into source(postcode,cust)values('AB11','CLI001');
insert into source(postcode,cust)values('AB11','HCL001');
insert into source(postcode,cust)values('AB11','MIL003');
insert into source(postcode,cust)values('AB11','OSB001');
insert into source(postcode,cust)values('AB12','AMI004');
insert into source(postcode,cust)values('AB12','CLI001');
insert into source(postcode,cust)values('AB12','ABC001');
insert into source(postcode,cust)values('AB12','THK010');
insert into source(postcode,cust)values('AB12','NHF001');
insert into source(postcode,cust)values('AB12','HGF002');


select postcode,
replace((
select cust as "data()"
from source b
where b.postcode = a.postcode
for xml path('')
), ' ', ', ') as cust
from source a
group by postcode

Wednesday, August 26, 2009

some good blog

http://sqltutorials.blogspot.com/2007/06/sql-string-functions.html

Wednesday, July 1, 2009

SAP : BAPIs

Definition

A Business Application Programming Interface (BAPI) is a precisely defined interface providing access to processes and data in business application systems such as R/3.

BAPIs of SAP Business Object Types

BAPIs are defined as API methods of SAP business object types. These business object types and their BAPIs are described and stored in the Business Object Repository (BOR). A BAPI is implemented as a function module that is stored and described in the Function Builder.

BAPIs of SAP Interface Types

As of Release 4.5A BAPIs can also describe interfaces, implemented outside the R/3 System that can be called in external systems by R/3 Systems. These BAPIs are known as BAPIs used for outbound processing. The target system is determined for the BAPI call in the distribution model of Application Link Enabling (ALE).

BAPIs used for outbound processing are defined in the Business Object Repository (BOR) as API methods of SAP Interface Types. Functions implemented outside the R/3 System can be standardized and made available as BAPIs. For further information see BAPIs Used For Outbound Processing.

Integration

BAPIs can be called within the R/3 System from external application systems and other programs. BAPIs are the communication standard for business applications. BAPI interface technology forms the basis for the following developments:

  • Connecting:
  • New R/3 components, for example, Advanced Planner and Optimizer (APO) and Business Information Warehouse (BW).
  • Non-SAP software
  • Legacy systems
  • Isolating components within the R/3 System in the context of Business Framework
  • Distributed R/3 scenarios with asynchronous connections using Application Link Enabling (ALE)
  • Connecting R/3 Systems to the Internet using Internet Application Components (IACs)
  • PC programs as frontends to the R/3 System, for example, Visual Basic (Microsoft) or Visual Age for Java (IBM).
  • Workflow applications that extend beyond system boundaries
  • Customers' and partners' own developments

The graphic below shows how BAPI interfaces enable different types of applications to be linked together.

BAPIs - Interfaces to the R/3 System

For further background information on BAPIs refer to the document BAPI User Guide.

Friday, June 26, 2009

SAP and Visual Studio 2005

I wrote in a previous blog entry that the design-time of the SAP Connector for Microsoft .NET does not work in Visual Studio 2005. Work-arounds are possible: using Visual Studio 2003 to generate the proxies, or using SAP Web Services via Exchange Infrastructure (XI) or directly from SAP NetWeaver Application Server.

You can also use the Microsoft BizTalk Server 2006 with the Microsoft BizTalk Adapter for mySAP Business Suite to access SAP backend systems from Visual Studio 2005. The adapter offers code-free access to SAP systems. You can access every BAPI, RFC or IDoc from within Visual Studio 2005 with this adapter. In addition, Microsoft BizTalk Server 2006 enables you to publish your BizTalk application as a Web Service; a wizard helps you with the steps. The resulting Web Service can be used in your Visual Studio 2005 project and so no additional connector for accessing your SAP system is necessary.

Another alternative is to use third-party adapters such as the Sitrion iQL Studio 2006. This tool from the German based company Sitrion Systems enables rapid deployment and development of Microsoft-SAP based scenarios and .NET-based SAP Composite Applications. The iQL Studio works in Visual Studio 2005 without any restrictions. You can access all SAP instances and Business Entity-Repositories. You have access to RFCs, BAPIs, Queries, and also to Workflows in SAP. In addition you have the testing functionality of SAP transaction SE37 in Visual Studio and you can start the SAP ABAP Debugger from within Visual Studio. Sitrion also offers business entities and applications based on iQL Studio 2006 such as Travel Management, HR Manager Self Service, Employee Self Service, Sales Opportunity Management and a few more. For more information about the iQL Studio see www.sitrion.com.
http://blogs.msdn.com/saptech/archive/2006/04/20/579989.aspx

Connection to SAP using Visual Studio 2005 & 2008


SAP has not yet developed a connector API for Visual Studio 2005 or Visual Studio 2008. I think the decision not to develop one comes down to marketing strategy and the fact that, as of SAP 6.0, web services can be developed in SAP, so web-service technology is expected to replace the connector API.

But there is an important reality: some Visual Studio .NET 2003 projects developed earlier connect to SAP with the SAP .NET Connector API. Developers who want to migrate these projects to Visual Studio 2005 or Visual Studio 2008 face a technology problem. For this problem, SAP first suggests that customers check whether their SAP Application Server version is 6.0 or higher; if it is, they can develop a web service in the SAP Application Server and make the required connection through that web service. Of course, this is only one of several possible solutions.

But what happens if our SAP Application Server version is lower than 6.0, or we decide after analysis not to connect to SAP via a web service even though our application server version is 6.0 or higher? SAP has a solution to this question, too. In fact, it is not an original solution; any software specialist could arrive at it after some thought.

The solution is to use the SAP .NET Connector from Visual Studio 2005 or Visual Studio 2008. How can this be done? We have to use Visual Studio .NET 2003 to implement it, even though we are developing our applications with Visual Studio 2005 or Visual Studio 2008, because version 2.0 is the last version of the SAP .NET Connector and it was produced only for Visual Studio .NET 2003. In other words, SAP .NET Connector 2.0 is an add-on API for Visual Studio .NET 2003 only.

Alright, what is the solution method?

The solution method is simple. We create a class library project in Visual Studio .NET 2003. In this project we define the desired SAP Application Server(s), create connections to those servers, and choose the required functions/BAPIs. The result is a DLL that can be added as a reference in a Visual Studio 2005 or Visual Studio 2008 project. I will explain this solution method in detail, with practical examples, in my next essays.

This solution method has one obvious limitation: if we don't have Visual Studio .NET 2003, we cannot use it. In effect, SAP is saying: 'If you want to use this solution method, you have to use Visual Studio .NET 2003.'

There are third-party programs that can connect to SAP Application Servers from Visual Studio 2005 or Visual Studio 2008. But there is a reality for enterprise firms: if we use SAP for our ERP solution and need to connect to SAP from other systems, we should consider SAP's own suggestions first. In other words, if we need a connector, we should use an SAP connector. I think this is the right approach, because if a problem rooted in a third-party program appears while the project is in live use, you have to provide a solution, and your customers will hold you responsible for it. At that point the technology problem requires the deep analysis that was not done before.

Friday, June 12, 2009

Disable the browser refresh button

// Note: attachEvent and window.event are Internet Explorer-specific
window.history.forward(1);
document.attachEvent("onkeydown", my_onkeydown_handler);
function my_onkeydown_handler()
{
    switch (event.keyCode)
    {
        case 116: // 'F5'
            event.returnValue = false;
            event.keyCode = 0;
            window.status = "We have disabled F5";
            break;
    }
}

Install Northwind database in your local machine

http://blog.sqlauthority.com/2007/05/23/sql-server-2005-northwind-database-or-adventureworks-database-samples-databases/

http://blog.sqlauthority.com/2007/06/15/sql-server-2005-northwind-database-or-adventureworks-database-samples-databases-part-2/

Friday, May 29, 2009

How to convert from rows to columns format in the table

I have a table with the values defined as below:

Id   PuringInterval   Description
1    10               UTM
2    20               Hourly
3    25               Daily

And I would like a SELECT or view statement that returns the following:

UTM   Hourly   Daily
10    20       25

So I want to convert from rows to columns.

In SQL Server 2005 you can use the PIVOT operator, while using CASE will work for both SQL Server 2000 and 2005:

SQL Server 2005
SELECT MAX(UTM) AS 'UTM',
MAX(Hourly) AS 'Hourly',
MAX(Daily) AS 'Daily'
FROM Foobar
PIVOT (MAX(PuringInterval)
FOR [Description] IN ([UTM], [Hourly], [Daily])) AS P;

SQL Server 2000 & 2005


SELECT MAX(CASE WHEN [Description] = 'UTM'
THEN PuringInterval END) AS 'UTM',
MAX(CASE WHEN [Description] = 'Hourly'
THEN PuringInterval END) AS 'Hourly',
MAX(CASE WHEN [Description] = 'Daily'
THEN PuringInterval END) AS 'Daily'
FROM Foobar;

For more information, see:
http://sqlblogcasts.com/blogs/madhivanan/archive/2008/08/27/dynamic-pivot-in-sql-server-2005.aspx
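The same rows-to-columns reshaping can be sketched outside SQL as well. Here is a minimal Python sketch (the `rows` data and the `pivot` helper are hypothetical, mirroring the CASE/MAX query above, not any SQL Server API):

```python
# Hypothetical rows mirroring the Foobar table: (Id, PuringInterval, Description).
rows = [
    (1, 10, "UTM"),
    (2, 20, "Hourly"),
    (3, 25, "Daily"),
]

def pivot(rows):
    """Turn (Id, PuringInterval, Description) rows into a single
    Description -> PuringInterval mapping, like the CASE/MAX query."""
    result = {}
    for _id, interval, description in rows:
        # MAX semantics: keep the largest interval seen per description.
        if description not in result or interval > result[description]:
            result[description] = interval
    return result

print(pivot(rows))  # one "row" with UTM, Hourly and Daily as columns
```

Each distinct Description value becomes a key (a "column"), which is exactly what the PIVOT operator and the CASE expressions do in the SQL versions.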


Tuesday, May 5, 2009

What is the difference between Server.Transfer and Response.Redirect


Server.Transfer(): the client still sees the URL of the requesting page, but all of the content served is that of the requested page. Data can be persisted across pages using the Context.Items collection, which is one of the best ways to transfer data from one page to another while keeping the page state alive.

Response.Redirect(): the client knows the physical location (page name and query string as well). Context.Items loses its persistence when navigating to the destination page. In earlier versions of IIS, if we wanted to send a user to a new Web page, the only option we had was Response.Redirect. While this method does accomplish our goal, it has several important drawbacks. The biggest problem is that this method causes each page to be treated as a separate transaction. Besides making it difficult to maintain your transactional integrity, Response.Redirect introduces some additional headaches. First, it prevents good encapsulation of code. Second, you lose access to all of the properties in the Request object. Sure, there are workarounds, but they're difficult. Finally, Response.Redirect necessitates a round trip to the client, which, on high-volume sites, causes scalability problems.

As you might suspect, Server.Transfer fixes all of these problems. It does this by performing the transfer on the server without requiring a roundtrip to the client.

Examples:

Server.Transfer

Server.Transfer("Webform2.aspx")

Response.Redirect

Response.Redirect("Webform2.aspx")

AppDomain concept in ASP.Net

ASP.NET introduces the concept of an application domain, commonly shortened to AppDomain. It can be considered a lightweight process that is both a container and a boundary. The .NET runtime uses an AppDomain as a container for code and data, just like the operating system uses a process as a container for code and data. And just as the operating system uses a process to isolate misbehaving code, the .NET runtime uses an AppDomain to isolate code inside a secure boundary.

The CLR allows multiple .NET applications to run in a single process, each inside its own AppDomain.

The CLR isolates each application domain from all other application domains and prevents the configuration, security, or stability of a running .NET application from affecting other applications. An AppDomain can be destroyed without affecting the other AppDomains in the process.

Multiple AppDomains can exist in a single Win32 process. As discussed, the main aim of an AppDomain is to isolate applications from each other, much as operating system processes do. This isolation is achieved by making sure that any given unique virtual address space runs exactly one application and by scoping the resources for the process or application domain using that address space.

Win32 processes provide isolation by having distinct memory addresses. The .NET runtime enforces AppDomain isolation by keeping control over the use of memory: all memory in an AppDomain is managed by the runtime, so the runtime can ensure that AppDomains do not access each other's memory.

How to create AppDomain

AppDomains are generally created by hosts, for example Internet Explorer and ASP.NET. The following example shows the explicit way a .NET application can create an AppDomain, create an instance of an object inside it, and then execute one of the object's methods.

AppDomains are created using the CreateDomain method. AppDomain instances are used to load and execute assemblies (Assembly). When an AppDomain is no longer in use, it can be unloaded.

using System;
using System.Reflection;

public class MyAppDomain : MarshalByRefObject
{
    public string GetInfo()
    {
        return AppDomain.CurrentDomain.FriendlyName;
    }
}

public class MyApp
{
    public static void Main()
    {
        AppDomain apd = AppDomain.CreateDomain("Rajendrs Domain");

        MyAppDomain apdinfo = (MyAppDomain)apd.CreateInstanceAndUnwrap(
            Assembly.GetCallingAssembly().GetName().Name, "MyAppDomain");

        Console.WriteLine("Application Name = " + apdinfo.GetInfo());
    }
}

 

The AppDomain class implements a set of events that enable applications to respond when an assembly is loaded, when an application domain will be unloaded, or when an unhandled exception is thrown. 

Advantages

A single CLR operating system process can contain multiple application domains. There are advantages to having application domains within a single process.

  1. Lower system cost - many application domains can be contained within a single system process.
  2. Each application domain can have a different security access level assigned to it, all within a single process.
  3. Code in one AppDomain cannot directly access code in another AppDomain.
  4. The application in an AppDomain can be stopped without affecting the state of another AppDomain running in the same process.
  5. An exception in one AppDomain will not affect other AppDomains or crash the entire process that hosts them.
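That last point - a failure in one domain leaving the rest of the process alive - is analogous to OS process isolation, and the idea can be sketched in Python with the standard multiprocessing module. This is only an analogy sketch; the function names here are illustrative and not part of any .NET API:

```python
import multiprocessing as mp

def crashing_task():
    # Simulates an unhandled exception inside an isolated unit of work.
    raise RuntimeError("failure inside isolated unit")

def run_isolated(target):
    """Run target in a separate process and return its exit code.
    The parent survives regardless of what happens in the child,
    much like one AppDomain failing without crashing its host process."""
    p = mp.Process(target=target)
    p.start()
    p.join()
    return p.exitcode

if __name__ == "__main__":
    code = run_isolated(crashing_task)
    print("child exit code:", code, "- parent still running")
```

The child process dies with a non-zero exit code, while the parent continues unaffected - the same guarantee an AppDomain boundary gives within a single CLR process.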


Friday, May 1, 2009

Intermediate Language Disassembler(ILDASM)

You can find the IL disassembler tool, ILDasm.exe, in the directory C:\Program Files\Microsoft.NET\FrameworkSDK\bin

So what does this tool do?

The answer to this question is found in the tutorial supplied with the .NET SDK: "The ILDASM tool parses any .NET Framework EXE/DLL module and shows the information in a human-readable format. It allows the user to see the pseudo assembly language for .NET". The IL disassembler tool shows not only namespaces but also types, including their interfaces. As its name suggests, IL is an intermediate language, so it has its own specification. Users can also write programs in this intermediate language; it is very similar to the assembly languages of old.

I will use a simple example and use ILDASM.exe

// Hello World Program: HelloWorld.cs
using System;

class HelloWorld
{
    static void Main()
    {
        Console.WriteLine("Hello, world!");
    }
}

Compile it on the command line with csc HelloWorld.cs

HelloWorld.exe will be generated.

Now use the command ildasm HelloWorld.exe

You will see a screen like this.

Here you can see all of the Symbols. The table below explains what each graphic symbol means. Some of them you can find in HelloWorld's members.

The tree in this window shows the manifest information contained inside HelloWorld.exe. By double-clicking on any of the types in the tree, you can see more information about the type.

Double-clicking the ".class public auto ansi" entry shows the following information:

Users can see that the HelloWorld type is derived from the System.Object type.

The first method, .ctor, is a constructor. This particular type has just one constructor but other types may have several constructors each with a different signature. If you double-click on the constructor method, a new window appears showing the IL (intermediate language) contained within the method:

The Common Language Runtime is stack based. In order to perform any operation, the operands are first pushed onto a virtual stack and then the operator executes. The operator grabs the operands off the stack, performs the desired operation, and places the result back on the stack. At any one time, this method will have no more than 8 operands pushed onto the virtual stack. We can see this by looking at the ".maxstack" directive (maximum stack size) that appears just before the IL code. In the code above, maxstack is shown as 8.

Let's examine the IL code:

IL_0000:  ldarg.0                       // load the 'this' pointer onto the stack
IL_0001:  call     instance void [mscorlib]System.Object::.ctor()
IL_0006:  ret                           // return the value loaded on the stack
If you double-click Main : void(), it will look like this:

Examining the IL code:

IL_0000:  ldstr      "Hello, world!"
IL_0005:  call       void [mscorlib]System.Console::WriteLine(class System.String)
IL_000a:  ret

LDSTR: load string.
The first line loads the string onto the stack.
The second line calls System.Console::WriteLine, which pops the value from the stack, passes it to the method, and pushes any result back onto the stack.
The third line fetches the final value from the stack and returns it.
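The push/pop behavior described above can be illustrated with a tiny, hypothetical stack machine in Python. The instruction names loosely mirror IL; this is a teaching sketch, not a CLR implementation:

```python
def run_il(program, methods):
    """Tiny stack-machine sketch: each instruction pops its operands
    from the evaluation stack and pushes its result back, like the CLR."""
    stack = []
    for op, arg in program:
        if op == "ldstr":      # push a string literal onto the stack
            stack.append(arg)
        elif op == "call":     # pop one argument, invoke, push the result
            result = methods[arg](stack.pop())
            stack.append(result)
        elif op == "ret":      # return the top of the stack (or None)
            return stack.pop() if stack else None
    return None

# Simulate the Main method's IL, with WriteLine recording its output.
printed = []
program = [("ldstr", "Hello, world!"),
           ("call", "WriteLine"),
           ("ret", None)]
run_il(program, {"WriteLine": printed.append})
```

After running, `printed` holds the string that ldstr pushed and the call instruction consumed - the same data flow the three IL lines above describe.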

There are some advanced options available as well. The extra options are enabled by running ILDASM with the /ADV ("ADVanced") command-line switch; when /ADV is specified, ILDASM enables additional command-line switches. For convenience, some basic instructions are summarized below.

Instruction          Meaning
LDC                  Push a hard-coded number onto the stack
LDARG / LDARGA       Load an argument / load an argument address
LDLOC / LDLOCA       Load a local variable / load a local variable address
LDFLD / LDSFLD       Load an object field / load a static field of a class
LDELEM               Load an element of an array
LDLEN                Load the length of an array
STARG                Store a value in an argument slot
STELEM               Store an element of an array
STFLD                Store into a field of an object
CEQ                  Compare equal
CGT                  Compare greater than
CLT                  Compare less than
BR                   Unconditional branch
BRFALSE / BRTRUE     Branch on false / branch on true
CONV                 Data conversion
NEWARR               Create a zero-based, one-dimensional array
NEWOBJ               Create a new object
BOX                  Convert a value type to an object reference
UNBOX                Convert a boxed value type to its raw form
CALL / CALLVIRT      Call a method / call a method associated at runtime with an object
