Channel: IIS Field Readiness – blog of the European IIS team

Encrypting connectionStrings in Web.Config using the NetFrameworkConfigurationKey in an IIS Web Farm scenario


One of the most frequently recommended measures during a web application security audit is to encrypt the connectionStrings section of the Web.Config file. While this operation is quite easy in a single IIS server environment, it can be really difficult in a Web Farm environment with data replication between servers. If you encrypt this section using the default key named NetFrameworkConfigurationKey on a given server, everything should be fine. However, if the encrypted Web.Config file is then replicated to the other servers in the farm, there can be an issue.

You need to know that the NetFrameworkConfigurationKey is built from two parts:

  1. A unique ID identifying this key: d6d986f09a1ee04e24c949879fdb506c
  2. The machine GUID: 11cb3f60-488c-4a71-ad45-def6f31e5d62

This gives, as an example: d6d986f09a1ee04e24c949879fdb506c_11cb3f60-488c-4a71-ad45-def6f31e5d62. You'll find this key in the folder C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys. If you check various IIS servers, you'll see the GUID is different on each of them. As a consequence, a Web.Config file encrypted using the key from server 1 cannot be decrypted by the key from server 2, since the latter doesn't match the former.

To avoid this issue, here are the steps to follow. The scenario detailed here consists of a Primary server from which all data (including the Web.Config file) is replicated to one or several Secondary servers using Application Provisioning. If you don't use Application Provisioning but want to copy an encrypted Web.Config from one server to another, this step-by-step will work too; you'll simply have to replace the replication with a copy/paste.

Note: I recommend testing this step-by-step first without the steps that delete the "NetFrameworkConfigurationKey" keys from the Primary and Secondary servers. If the procedure works without the deletions, you can then follow the full scenario to get a clean configuration at the key level.

 

On the Primary server:

  • Open a CMD with Administrator privileges
    • Note: You have to right-click > Run as Administrator even if the CMD opens with the Administrator account by default.
      If you don't open it explicitly as Administrator, the privileges aren't sufficient to execute the following command lines.
  • Navigate to the folder "C:\Windows\Microsoft.NET\Framework64\v2.0.50727"
  • Execute the following command-line to delete the existing "NetFrameworkConfigurationKey" keys:
    • aspnet_regiis.exe -pz "NetFrameworkConfigurationKey"
  • Execute the following command-line to create the key "NetFrameworkConfigurationKey" with the private key set as exportable:
    • aspnet_regiis.exe -pc "NetFrameworkConfigurationKey" -exp
  • Execute the following command-line to export the key in an XML file named key.xml including the private key:
    • aspnet_regiis.exe -px "NetFrameworkConfigurationKey" key.xml -pri
  • If the Application Pool identity isn't the default one (ApplicationPoolIdentity), you need to run the following command-line to grant the Application Pool identity permission to access this key:
    • aspnet_regiis -pa "NetFrameworkConfigurationKey" "Domain\ApplicationPoolIdentityName"

 

On the Secondary server:

  • Open a CMD with Administrator privileges
    • Note: You have to right-click > Run as Administrator even if the CMD opens with the Administrator account by default.
      If you don't open it explicitly as Administrator, the privileges aren't sufficient to execute the following command lines.
  • Navigate to the folder "C:\Windows\Microsoft.NET\Framework64\v2.0.50727"
  • Execute the following command-line to delete the existing "NetFrameworkConfigurationKey" keys:
    • aspnet_regiis.exe -pz "NetFrameworkConfigurationKey"
  • Copy the key.xml file from the Primary server to the Secondary server, into the folder "C:\Windows\Microsoft.NET\Framework64\v2.0.50727"
  • Execute the following command-line to import the key "NetFrameworkConfigurationKey":
    • aspnet_regiis.exe -pi "NetFrameworkConfigurationKey" key.xml
  • If the Application Pool identity isn't the default one (ApplicationPoolIdentity), you need to run the following command-line to grant the Application Pool identity permission to access this key:
    • aspnet_regiis -pa "NetFrameworkConfigurationKey" "Domain\ApplicationPoolIdentityName"

       

On the Primary server:

  • Execute the following command-line to encrypt the connectionStrings section of the Web.Config file located in the folder "C:\inetpub\wwwroot" (change the location to suit your needs):
    • aspnet_regiis.exe -pef "connectionStrings" "C:\inetpub\wwwroot"

At this stage, the encrypted Web.Config file will be forwarded to the Secondary server by Application Provisioning (this is the point where you need to copy it yourself if you don't use Application Provisioning). As the keys are now identical, the Web.Config decryption should work fine. However, let's verify it.


On the Secondary server:

  • Copy the encrypted Web.Config to another folder, for example C:\test
    • Note: It's better to put the Web.Config file in a folder that isn't covered by Application Provisioning, to avoid it being replaced by the encrypted version during this test.
  • Execute the following command-line to decrypt the connectionStrings section from the Web.Config file located in "C:\test":
    • aspnet_regiis.exe -pdf "connectionStrings" "C:\test"

If it works, everything has been applied correctly.
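Note that no application code changes are needed to consume the encrypted section: ASP.NET decrypts protected configuration transparently at runtime, as in the following minimal sketch ("MyDb" is a hypothetical connection string name):

protected void Page_Load(object sender, EventArgs e)
{
    //reads clear text even though the section on disk is encrypted
    var cs = System.Configuration.ConfigurationManager.ConnectionStrings["MyDb"];
    using (var conn = new System.Data.SqlClient.SqlConnection(cs.ConnectionString))
    {
        conn.Open();   //works the same whether the section is protected or not
    }
}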

I hope this article has been useful.
Sylvain Lecerf and the French Microsoft IIS Support Team


Background threads in ASP.net applications (Part 1 – the concept application)


When debugging memory dumps from customers, I have quite often come across a pattern that you should not use in your application if you don't want to run into trouble. This pattern can be summed up in a simple rule: thou shalt not create thine own threads in thy application! To show you what I mean, through a series of three blog articles I will create a simple application and show you what happens when we transgress this rule.

Let's build a simple stock quote lookup application. I am basing my sample on one I found earlier and bookmarked with www.linqto.me under this url: http://linqto.me/StockAPI. Since the API in question is now defunct, and I wanted to provide a working app, I have made some major changes, including using the Yahoo API instead of the Google one. You can download the completed solution from my OneDrive here:

Background Threading Sample Applications

When you start the solution, you should see a page that allows you to input a stock symbol and get a quote. Here is the page loaded in Internet Explorer and showing the price for the MSFT stock symbol:

So how does this work? The page has a textbox called txtSymbol and a button which, when pressed, will submit the page and run some code behind to lookup the symbol in the Yahoo API. The code behind has a click event associated to the button, which looks like this:

protected void cmdSubmit_Click(object sender, EventArgs e)
{
    //check if there is a symbol in the textbox
    if (!String.IsNullOrWhiteSpace(txtSymbol.Text.Trim()))
    {
       //get a quote from Yahoo:
       FetchQuote(txtSymbol.Text.Trim());

       //show the panel and set the labels
       pnlResults.Visible = true;

       lblCompany.Text = company;
       lblPrice.Text = price.ToString();
       lblSymbol.Text = symbolName;
       lblVolume.Text = volume.ToString();
    }
}

If the textbox is not empty (I am not doing fancy error checking to keep this simple), the code will call a method called FetchQuote(System.String) to which it will pass the value of the textbox. Here is the code for this second method:

private void FetchQuote(string symbol)
{
     string url = "http://finance.yahoo.com/webservice/v1/symbols/" + Server.UrlEncode(symbol) + "/quote";

     //load the xml document
     XDocument doc = XDocument.Load(url);

     //extract the data
     company = GetData(doc, "name");
     price = Convert.ToDouble(GetData(doc, "price"));
     symbolName = GetData(doc, "symbol");
     volume = Convert.ToInt32(GetData(doc, "volume"));
}

The method will compose the Yahoo Stock API url with the stock symbol we want to look up. It will then download the XML document containing the quote data from Yahoo. Such a document will look something like this for the MSFT (Microsoft) stock symbol:

<list version="1.0">
    <meta>
        <type>resource-list</type>
    </meta>

    <resources start="0" count="1">
        <resource classname="Quote">
            <field name="name">Microsoft Corporation</field>
            <field name="price">47.980000</field>
            <field name="symbol">MSFT</field>
            <field name="ts">1416603605</field>
            <field name="type">equity</field>
            <field name="utctime">2014-11-21T21:00:05+0000</field>
            <field name="volume">42887245</field>
        </resource>
    </resources>
</list>

Once the document is loaded into an XDocument object, we parse it to extract the data we need to show on the page. This is done by repeated calls to a method called GetData(System.Xml.Linq.XDocument, System.String). Here is the code:

private string GetData(XDocument doc, string name)
{
    //get the requested attribute value from the XDocument
    return doc.Root.Element("resources").Element("resource").Elements().Where(n => n.FirstAttribute.Value == name).First().Value;
}

I will not go into the details of this implementation; suffice it to say that it attempts to get hold of an XML element that has an attribute called name with the value indicated by the string parameter passed in, and returns the value of that XML element – which is the quote data we are interested in.
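Note in passing that the query leans on FirstAttribute, which assumes that name is always the first attribute on each field element. A slightly more defensive variant – my sketch, not the sample's code – would match the attribute by name (it relies on the same System.Linq and System.Xml.Linq imports as the sample):

private string GetDataByAttribute(XDocument doc, string name)
{
    //match explicitly on the "name" attribute instead of relying on attribute order
    return doc.Root.Element("resources")
                   .Element("resource")
                   .Elements("field")
                   .First(f => (string)f.Attribute("name") == name)
                   .Value;
}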

Back to the FetchQuote method, which loads the retrieved data into page-level variables. Following this, it returns control to the caller, the Click event handler of the button, which displays a Panel control and transfers the values of the page-level variables to Label controls on the page to display the data.

In the next installment, I will modify the application to load the data for stock symbols in the background, periodically, so that we do not lose time loading the data from Yahoo every time someone asks for a quote. This will be done via background threads – exactly the kind of pattern you should not be using. The implementation of this concept is detailed in the second blog post.

By Paul Cociuba
http://linqto.me/about/pcociuba

 

Background threads in ASP.net applications (Part 2 – thread implementation)


To continue the saga of developing ASP.net applications that use background threads, we will look at how to 'optimize' the application proposed in the first article. The objective is to have the application load the data in the background via a thread that updates the price of certain stock symbols every X seconds.

To do this, what I have seen many of my customers do is spawn a new .Net thread in the background and have it run an infinite loop with the following pseudo-logic:

  1. When the application loads up, start the background thread and have it execute the infinite loop
  2. Once in the infinite loop, the thread will update the price for the symbols in the list
  3. When it has retrieved the information for the price of the symbols, it will sleep for a given amount of time
  4. It will then resume the loop from the beginning and attempt to refresh the prices again.

So let's look at how this pseudo-logic can be transformed into code. Keep in mind that I am doing the bare minimum to keep the application as simple as possible and focus on the essentials.

I will first start by declaring a POCO (short for Plain Old CLR Object) class that will represent a stock quote. Here is the code listing:

//base libraries

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

//added imports

namespace NotPatterns
{
  public class StockQuote
  {
      //class properties
      public string CompanyName { get; set; }

      public string SymbolName { get; set; }

      public double Price { get; set; }

      public int Volume { get; set; }

      public DateTime LastUpdate { get; set; }
  }
}

This class only contains a couple of properties that expose both get and set so that we can store the data related to a stock quote inside an instance of the class. This will represent our Business Layer (Model).

When the application loads into the process memory (into the w3wp.exe worker process), the Application_Start event is fired. This seems like a good place to add the code to create a background thread.
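The listings below do not show that wiring itself, so here is a minimal sketch of what it could look like in Global.asax (an assumption on my part; the downloadable sample contains the actual code):

protected void Application_Start(object sender, EventArgs e)
{
    //spawn the background worker thread - the very anti-pattern this series warns about
    System.Threading.Thread worker = new System.Threading.Thread(UpdateQuotes);
    worker.IsBackground = true;   //don't let the thread keep the process alive on shutdown
    worker.Start();
}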

The first method I will implement is located in the Global.asax file (but I could have implemented this inside another class if I had wanted to as well). This method is called UpdateQuotes. Here is the code listing:

private void UpdateQuotes()
{
    //declare an array of strings to contain the stock symbols
    string[] stockSymbols = { "MSFT", "AAPL", "GOOG", "AMZN" };

    string url = "http://finance.yahoo.com/webservice/v1/symbols/";

    string stockUrl; StockQuote quote; XDocument xdoc;

    do
    {
       //go through each of the symbols and run them
       foreach (String stockSymbol in stockSymbols)
       {
          stockUrl = url + Server.UrlEncode(stockSymbol) + "/quote";

          xdoc = XDocument.Load(stockUrl);

          //create a new quote
          quote = new StockQuote();
          quote.CompanyName = GetData(xdoc, "name");
          quote.Price = Convert.ToDouble(GetData(xdoc, "price"));
          quote.Volume = Convert.ToInt32(GetData(xdoc, "volume"));
          quote.SymbolName = GetData(xdoc, "symbol");
          quote.LastUpdate = DateTime.Now;

          //save the symbol
          Application[stockSymbol] = quote;
       }

       //sleep for 100 seconds
       System.Threading.Thread.Sleep(100000);

    } while (true);
}

The method declares an array of four stock symbols (just to keep it simple – we could use something more dynamic later) whose prices will be updated every X seconds. Following this declaration, it goes into an infinite loop, represented by the do {} while (true) statement.

Each of the symbols in the array will be inspected, and we will try to obtain the current price for the symbol, by making a call to the Yahoo Finance API using the XDocument object. For each symbol we compose the correct URL and then we ask the XDocument object to retrieve the XML datagram corresponding to the stock quote.

Once we have loaded the XML inside the XDocument object, the method will create a new instance of the StockQuote class and populate its properties with data extracted from the XML – via the GetData(System.Xml.Linq.XDocument, System.String) method, just like in the previous sample. The newly created instance of StockQuote will be added to the Application object, an object that is available in all parts of the application and that can be thought of as a global dictionary of variables. (Some of you might remark that I could have checked whether I already had an entry for the symbol inside the Application object and, if so, simply updated that object instead of creating a new instance every time – and you would be right; see the sketch just below. However, in essence I am trying to keep the app basic, and not necessarily optimized.)
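For completeness, the update-in-place variant mentioned above might look like the following sketch (StoreQuote is a hypothetical helper of mine, not part of the sample; note that mutating an instance shared with the pages raises thread-safety questions the sample ignores):

private void StoreQuote(string stockSymbol, StockQuote quote)
{
    StockQuote existing = Application[stockSymbol] as StockQuote;
    if (existing == null)
    {
        //first load: store the new instance
        Application[stockSymbol] = quote;
    }
    else
    {
        //subsequent loads: update the fields of the existing instance
        existing.CompanyName = quote.CompanyName;
        existing.Price = quote.Price;
        existing.Volume = quote.Volume;
        existing.LastUpdate = quote.LastUpdate;
    }
}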

Following the loading of the financial data for all symbols, the loop puts the thread to sleep for 100 seconds (1 minute 40 seconds). After this, the loop executes again and again, until the end of time or the end of the process, whichever comes first.

Now for the interface to display these four stock symbols in a web page. To do this, I have created a second page for the application, called AutoStock.aspx. This page contains a repeater control that is strongly-typed data-bound to objects of type StockQuote, via the ItemType property. If you are new to strongly typed data-binding, I would suggest you have a look at my ASP.net 4.5 Webforms video tutorial:

http://linqto.me/n/ASPNet45

Here are the tags for the repeater control – note the ItemType attribute, which indicates the type of object displayed by the control, and the SelectMethod attribute, which indicates the method in the code behind that will be called to load the elements to display when the control is loaded:

<asp:Repeater ID="rptStocks" runat="server" ItemType="NotPatterns.StockQuote" SelectMethod="GetQuotes">
   <ItemTemplate>
     <div style="float:left">
       <fieldset>
         <legend><%# Item.SymbolName %></legend>
            Company Name: <%# Item.CompanyName %>
            <br />
            <br />
            Price: <%# Item.Price %> $
            <br />
            <br />
            Volume: <%# Item.Volume %> shares
            <br />
            <br />
            Last Updated: <%# Item.LastUpdate.ToLongTimeString() %>
       </fieldset>
     </div>
   </ItemTemplate>
</asp:Repeater>

Inside the repeater control, we just display a fieldset tag for each stock quote, and inside this we display the company name, the price and other information via strongly typed data-binding.

In the code behind, the GetQuotes() method does all the loading of the pricing data from the Application object – here is the code:

public IEnumerable<NotPatterns.StockQuote> GetQuotes()
{
    //get the stocks from the application object
    List<NotPatterns.StockQuote> stocks = new List<NotPatterns.StockQuote>();

    //load the stocks
    if (Application.Count == 4)
    {
        stocks.Add((NotPatterns.StockQuote)Application["MSFT"]);
        stocks.Add((NotPatterns.StockQuote)Application["AAPL"]);
        stocks.Add((NotPatterns.StockQuote)Application["GOOG"]);
        stocks.Add((NotPatterns.StockQuote)Application["AMZN"]);
    }

    return stocks;
}

The method declares a variable of type List<StockQuote> and attempts to add the stock quotes from the Application object to the newly created list. We check that the count of items in the Application object equals 4, since we only have four items in the stock array defined in Global.asax. Should the background thread not have finished loading the prices for the first time before we show the page, we don't want to show null data and crash the page; hence the test. It ensures that the data has been loaded by the background thread at least once.

There is also a refresh button on the page, which has a Click event handler in the code behind:

protected void cmdRefresh_Click(object sender, EventArgs e)
{
    //force the rebinding
    rptStocks.DataBind();
}

All this does is force the repeater control to data-bind once again – and hence call the GetQuotes() method and reload the data from the Application object. Note that the data will not be refreshed by the background thread when this new call to GetQuotes() comes in; it is refreshed only when the background thread wakes up the next time around and loads new data from Yahoo.

You can find the sample completed with this new code in my OneDrive as well, by following this link:

https://onedrive.live.com/?cid=9A83CB6EACC8118C&id=9A83CB6EACC8118C%21131

In the next installment, I will discuss the problems with this kind of pattern and why it is not recommended you make use of it in your ASP.net applications.

By Paul Cociuba
http://linqto.me/about/pcociuba 


Background threads in ASP.net applications (Part 3 – threading side effects)


In the final article of this series on background threading in ASP.net, I will illustrate the dangers of such an architecture by introducing one more modification to the code of the sample application. This modification is added to the Global.asax code that starts the infinite-loop thread running outside of the ASP.net thread pool. Here is the listing again; notice the lines of code that have been added:

private void UpdateQuotes()
{
  //declare an array of strings to contain the stock symbols
  string[] stockSymbols = { "MSFT", "AAPL", "GOOG", "AMZN" };
  string url = "http://finance.yahoo.com/webservice/v1/symbols/";

  string stockUrl; StockQuote quote; XDocument xdoc; int loopIteration = 1;
  do
  {
    //should the loop iteration be a multiple of 5, crash
    if ((loopIteration % 5) == 0)
    {
      throw new Exception("Random crash");
    }

    //go through each of the symbols and run them
    foreach (String stockSymbol in stockSymbols)
    {
       stockUrl = url + Server.UrlEncode(stockSymbol) + "/quote";

       xdoc = XDocument.Load(stockUrl);

       //create a new quote
       quote = new StockQuote();
       quote.CompanyName = GetData(xdoc, "name");
       quote.Price = Convert.ToDouble(GetData(xdoc, "price"));
       quote.Volume = Convert.ToInt32(GetData(xdoc, "volume"));
       quote.SymbolName = GetData(xdoc, "symbol");
       quote.LastUpdate = DateTime.Now;

       //save the symbol
       Application[stockSymbol] = quote;
     }

    //sleep for 100 seconds
    System.Threading.Thread.Sleep(100000);

    //increment the loop iteration
    loopIteration = loopIteration + 1;

   } while (true);
}

The first thing added is a local variable called loopIteration, which we set to 1. Then, inside the do {} while (true) infinite loop, I added a check on whether the value of the new variable is divisible by 5 – that is to say, each time the value of the variable is a multiple of 5 (5, 10, 15, etc.), the if branch is taken and an exception is thrown by the code. If the if branch is not taken, the value of the variable is incremented by one at the end of the loop.

If you run the sample in IIS, you will see that after a while the application pool crashes, and you get an event 5011 logged by WAS (the Windows Process Activation Service) in the System event log:

    A process serving application pool 'MvcSample' suffered a fatal communication error with the Windows Process Activation Service. The process id was '2952'. The data field contains the error number.

So what just happened?

When request execution happens inside IIS, ASP.net gives you a safety net. Almost all exceptions (except for System.OutOfMemoryException, System.StackOverflowException and System.ExecutionEngineException) are caught by the runtime if they are not caught by your application directly. Hence an exception like FileNotFoundException occurring during a page execution would wind up as a 500 error page sent to the connecting user – who would see the ASP.net yellow screen of death – but the w3wp.exe process hosting the application would not crash. All other threads inside the process could go on treating requests, and the only impacted person would be the user requesting that particular page.

For the thread we launched from the Application_Start() event handler, there is no associated context, and the thread itself is not part of the ASP.net thread pool. Hence, when an exception occurs and is not handled in the application code, it is handed directly to the operating system to treat: the way Windows deals with such an exception is by crashing the w3wp.exe process – along with all other threads and requests inside it. This is what happens when the throw statement is executed in the loop. If you are unlucky enough to send a request exactly when this happens, you will see a message indicating that "This page cannot be displayed" in your browser.

Another, more subtle danger is application domain unloading and reloading. The application domain is a data structure that loads the entire code of your ASP.net application inside the w3wp.exe process. Since we cannot unload .dlls (assemblies) in .Net, when a change is made to the application on disk – files are changed inside the bin folder, or the web.config file is updated – the entire application domain has to unload and reload with a new copy of your application. You can read more about this process by following this link:

http://linqto.me/AppDomainUnload

Supposing that the application domain has to be unloaded, but the thread we launched at application start is still running and holding on to objects in the current instance of the application domain, the unload cannot complete. Hence, you can wind up with the following sort of behavior: you make a change to your application, and you do not see the change when refreshing the application in the browser. This is because the old app domain cannot unload until the spawned thread finishes dealing with all the objects it was holding on to inside the old app domain – and since we are in an infinite loop, that will be never.

Also, when you attempt to recycle the application pool in IIS, you may see warnings from WAS in the event log indicating that the w3wp.exe process serving the pool took too long to shut down and hence was killed. IIS has a mechanism by which a w3wp.exe process serving an application pool is allowed 90 seconds to gracefully stop all threads and shut down when a recycle has to occur. Past this time period (controlled by the shutdownTimeLimit parameter in the configuration - http://www.iis.net/configreference/system.applicationhost/applicationpools/add/processmodel), WAS will issue a kill and force the w3wp.exe process to shut down. In the case of the sample application, the looping thread will never relinquish control and will not allow the process to shut down even after 90 seconds, so WAS will have to proceed with the kill.

Conclusion:

It is never a good idea to spawn your own background threads in your application. If you need to have something processed on a thread other than the one treating the request, consider calling ThreadPool.QueueUserWorkItem (http://msdn.microsoft.com/en-us/library/system.threading.threadpool.queueuserworkitem%28v=vs.110%29.aspx) – this uses a free thread from the ASP.net thread pool to execute the code you give it; a rough sketch follows. Another possibility is asynchronous execution using the new .Net syntax with async and await. I will be doing a series on this next year (2015), so stay tuned.
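As a rough sketch of what such a call can look like (the work-item body is a placeholder, not code from the sample):

System.Threading.ThreadPool.QueueUserWorkItem(state =>
{
    try
    {
        //do one unit of background work here, e.g. refresh the quotes once
    }
    catch (Exception)
    {
        //log and swallow: an unhandled exception on a pool thread
        //would also bring down the w3wp.exe process
    }
});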

 

By Paul Cociuba
http://linqto.me/about/pcociuba  


Security guidelines to detect and prevent DOS attacks targeting IIS/Azure Web Role (PAAS)



In a previous blog post, we explained how to install IIS Dynamic IP Restrictions in an Azure Web Role. In the present article, we'll provide guidelines for collecting and analyzing data so you can detect potential DOS attacks. We'll also provide tips to protect against such attacks. While the article focuses on web applications hosted in an Azure Web Role (PAAS), most of its content also applies to IIS hosted on premises or on IAAS VMs.

I – Archive your logs

Without any history of IIS logs, there is no way to know whether your web site has been attacked or hacked, or when a potential threat started. Unfortunately, many customers keep no history of their logs, which is a real issue when the application is hosted as an Azure Web Role (PAAS), because PAAS VMs are "stateless" and can be reimaged/deleted by operations like scaling, new deployments, etc.

A comprehensive list of Azure logs is described in the following documents:

To keep a log history, the Windows Azure platform provides everything needed with Windows Azure Diagnostics (WAD). All you have to do is turn the feature on by Configuring Windows Azure Diagnostics, and your IIS logs are automatically replicated to a central location in blob storage. One caveat is that a bad WAD configuration can prevent log replication and log scavenging/cleanup, which in the worst case may cause IIS logging to stop (see IIS Logs stops writing in cloud service). You also need to consider that keeping a log history in Azure storage can affect your Azure bill; one "trick" is to zip your IIS log files before transferring them with Windows Azure Diagnostics. For on-premises IIS, there are many resources describing how to archive IIS logs, and you may be interested in this script: Compress and Remove Log Files (IIS and others).

In some Azure Web Role situations, you need to gather all logs manually and immediately. This is true if you haven't set up WAD or if you can't wait for the next log replication. In this case, you can manually gather all logs with minimal effort using the procedure described in Windows Azure PaaS Compute Diagnostics Data (see "Gathering The Log Files For Offline Analysis and Preservation"). The main limitation of this manual procedure is that you need RDP access to all VM instances.

Now that you have your logs handy, let's see how to analyze them.

II – Analyze your logs

LOGPARSER is the best tool to analyze all kinds of logs. If you don't like the command prompt, you can use LogParser Studio (LPS) and read the following cool blog from my colleague Sylvain: How to analyse IIS logs using LogParser / LogParser Studio. In this section, we'll provide very simple LOGPARSER queries on IIS and HTTPERR logs to spot potential DOS attacks.

Before running any log parser query, you may want a quick look at the log file sizes to see whether they are stable day after day or whether you can spot unexpected "spikes". Typically, a DOS attack trying to "flood" a web application may translate into a significant increase in HTTPERR and IIS logs. To check log sizes, you can use Explorer, but you can also use LPS/LOGPARSER, as it provides a file system provider (FSLOG). In LPS, you can use the built-in query "FS / IIS Log File Sizes" to query log file sizes:

SELECT Path, Size, LastWriteTime FROM '[LOGFILEPATH]' ORDER BY Size DESC

This first step can help filter out "normal" logs and keep only the "suspicious" ones. The next step is to start the log analysis. When it comes to IIS/Web Role analysis, there are two main log types to use:

 

  • HTTPERR logs (default location: C:\Windows\System32\LogFiles\HTTPERR, location on a web role: D:\Windows\System32\LogFiles\HTTPERR)
  • IIS logs (default location: C:\inetpub\logs\LogFiles, location on a web role: C:\Resources\Directory\{DeploymentID}.{Rolename}.DiagnosticStore\LogFiles\Web)

II.1 Analyzing HTTPERR log

HTTPERR logs are generally small, and this is expected (see Error logging in HTTP APIs for details). Common errors are HTTP 400 (bad request), Timer_MinBytesPerSecond and Timer_ConnectionIdle. Timer_ConnectionIdle is not really an error, as it simply indicates that an inactive client was disconnected after the HTTP keep-alive timeout was reached (see Http.sys's HTTPERR and Timer_ConnectionIdle). Note that the default HTTP keep-alive timeout in IIS is 120 seconds, and a browser like Internet Explorer uses a keep-alive timeout of 60 seconds; in this scenario, IE always disconnects first, and this shouldn't cause any Timer_ConnectionIdle errors in HTTPERR. Having a very high number of Timer_ConnectionIdle entries may indicate a DOS/DDOS attack where an attacker tries to consume all available connections, but it can also be a non-IE client or a proxy using a high keep-alive timeout (> 120s). Likewise, seeing a lot of Timer_MinBytesPerSecond errors may indicate malicious clients trying to waste connections by sending "slow requests", but it can also simply be clients with poor/slow network connections.

For log analysis, I generally use a WHAT/WHO/WHEN approach:

 

WHAT

 

SELECT s-reason, Count(*) as Errors FROM '[LOGFILEPATH]' GROUP BY s-reason ORDER BY Errors DESC

 

WHO

 

SELECT c-ip, Count(*) as Errors FROM '[LOGFILEPATH]' GROUP BY c-ip ORDER BY Errors DESC

 

WHEN

 

SELECT QUANTIZE(TO_TIMESTAMP(date, time), 3600) AS Hour, COUNT(*) AS Total FROM '[LOGFILEPATH]' GROUP BY Hour ORDER BY Hour

 

This quickly shows WHAT the top errors are, WHO triggered them (client IPs) and WHEN the errors occurred. Then, depending on the results, some further filtering may be needed. For example, if the number of Timer_ConnectionIdle errors is very high, you can check the client IPs involved in this specific error:

SELECT c-ip, Count(*) as Errors FROM '[LOGFILEPATH]' WHERE s-reason LIKE '%Timer_ConnectionIdle%' GROUP BY c-ip ORDER BY Errors DESC

We can also filter on a suspicious IP to check when the suspicious accesses occurred:

SELECT QUANTIZE(TO_TIMESTAMP(date, time), 3600) AS Hour, COUNT(*) AS Total FROM '[LOGFILEPATH]' WHERE c-ip='x.x.x.x' GROUP BY Hour ORDER BY Hour

If the above queries point to a suspicious IP, we can then check the client IP using a reverse DNS tool (http://whois.domaintools.com/).

II.2 Analyzing IIS logs

For the IIS logs, I use the same WHAT/WHO/WHEN approach as above:

 

WHAT

 

SELECT cs-uri-stem, Count(*) AS Hits FROM '[LOGFILEPATH]' GROUP BY cs-uri-stem ORDER BY Hits DESC

 

WHO

 

SELECT c-ip, count(*) as Hits FROM '[LOGFILEPATH]' GROUP BY c-ip ORDER BY Hits DESC

 

WHEN

 

SELECT QUANTIZE(TO_TIMESTAMP(date, time), 3600) AS Hour, COUNT(*) AS Total FROM '[LOGFILEPATH]' GROUP BY Hour ORDER BY Hour

 

The above queries are voluntarily simple. Depending on the results, we will need to "polish" them by adding filtering, grouping, etc. There are already a lot of excellent articles covering this topic, so I won't reinvent the wheel:

 

III – What can I do to harden/protect my web application from DOS attacks?

Security guidelines for IIS/Azure Web Role are described in the Windows Azure Network Security Whitepaper (see the sections "Security Management and Threat Defense" and "Guidelines for Securing Platform as a Service"). While Azure implements sophisticated DOS/DDOS defenses against large-scale DOS attacks on Azure datacenters or DOS attacks initiated from within the datacenter itself, the document clearly mentions that "it is still possible for tenant applications to be targeted individually". This basically means that a web application in Azure should use means similar to those of an on-premises application to protect itself against attackers; pragmatically, you have to put in place a couple of actions:

 

 

While this is unrelated to DOS attacks, it is also worth mentioning some basic security rules:

  • make sure the Guest OS used is up to date (don't pin a specific Guest OS version unless this is absolutely necessary) and understand how Guest OSes are updated (see Role Instance Restarts Due to OS Upgrades)
  • enable anti-malware, which is now released for IAAS and PAAS: Microsoft Antimalware Whitepaper

 

If you are interested in Azure Security, the following page is a very good central repository of resources: Security in Azure.

 

I hope you'll find the above information useful and remember that "forewarned is forearmed"…

 

Emmanuel

 

 

Perfmon & IIS / ASP.NET


One question regularly posed to our team concerns the performance counters to set up to ensure IIS and ASP.Net applications are working properly. However, as every web application, and hence every IIS server, behaves differently depending on what is executed, the threshold at which performance degrades will obviously vary a lot. Thus it's essential to define those key values before putting your application in production. By doing a progressive load test, you should be able to identify the point at which application performance degrades. Once you've determined both the threshold where degradation begins and the threshold where everything works well, you just have to collect the performance counters at those points and you'll have your key values.

Now that you know how to define your key values, you need to know how to collect them. To do so, you can use the excellent PAL tool (https://pal.codeplex.com/), which gives a predefined list of counters to monitor based on the selected product. By using those scripts, you should be able to get a good overview of your application's and IIS's behaviour under load.

However, it can be interesting to broaden the scope in order to get a global vision of how the whole server is behaving, instead of just IIS or ASP.NET. You need to check CPU, memory, etc. and weigh them against your application and IIS.

Here is a non-exhaustive list of counters you could use to get this vision:

For IIS:

Memory:
- Available Mbytes: Allows you to see the available memory. It's important to make sure the server isn't undersized for the needs of the application
- % Committed Bytes In Use: Allows you to see the used memory. It's interesting to weigh this against the Available Mbytes counter

Process (For all W3WP.exe processes):
- % Processor Time: Allows you to see the CPU consumption for a given process
- Virtual Bytes: Allows you to see the virtual memory for the process W3WP.exe
- Private bytes: Allows you to see the private memory for the process W3WP.exe

Processor (All instances):
- % Processor Time: Allows you to put in balance the total CPU consumption with each W3WP.exe. For example, if your server is consuming 90% of CPU and the sum of the W3WP.exe CPU consumption is 10%, you clearly have an issue elsewhere than IIS

HTTP Service Request Queues (All instances):
- CurrentQueueSize: Allows you to see the size of the HTTP kernel-side queue, and thus whether a huge number of requests are being queued without being handled by the user-mode side
- RejectedRequests: Allows you to see if requests are rejected on the kernel side without being handled by the user-mode side

APP_POOL_WAS (For all listed Application Pools):
- Current Application Pool State: Allows you to see the state of an Application Pool
- Current Application Pool Uptime: Allows you to see if the Application Pool has been restarted or not (really useful during a load test)

 

For ASP.NET:

ASP.NET Applications (For all applications you want to monitor):
- Compilations Total: Allows you to see the number of compiled pages
- Request Bytes In Total: Allows you to see the number of received bytes
- Request Bytes Out Total: Allows you to see the number of sent bytes
- Request Execution Time: Allows you to see the execution time for the most recent request
- Request Wait Time: Allows you to see the time spent in the queue before being handled for the most recent request
- Requests Executing: Allows you to see the number of requests being executed
- Request in Application Queue: Allows you to see the number of requests in the queue
- Requests Timed Out: Allows you to see the number of timed-out requests
- Requests/Sec: Allows you to see the number of requests executed per second
- Sessions Active: Allows you to see the number of active sessions

ASP.NET V4.0.30319:
- Application Restarts: Allows you to see the number of restarts for the Application Domain

 

With all this information, you should be able to determine the threshold where your application behaves as expected and the threshold where problems start to occur. In addition, if you want to go further on ASP.NET, you can have a look at this old but good article, which explains some key counters and how to detect some known issues: http://linqto.me/perfaspnet
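If you prefer to sample a few of these counters programmatically rather than through Perfmon, here is a minimal sketch using the System.Diagnostics API (category and counter names as listed above; the one-second sampling interval is an arbitrary choice):

using System;
using System.Diagnostics;
using System.Threading;

class CounterSampler
{
    static void Main()
    {
        //"_Total" aggregates all processor instances; rate counters need two reads
        PerformanceCounter cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        PerformanceCounter mem = new PerformanceCounter("Memory", "Available MBytes");

        cpu.NextValue();       //the first read primes the counter
        Thread.Sleep(1000);    //sampling interval

        Console.WriteLine("CPU: {0:F1}%", cpu.NextValue());
        Console.WriteLine("Available MBytes: {0}", mem.NextValue());
    }
}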

I hope this article has been useful.
Sylvain Lecerf and the French Microsoft IIS Support Team

Perfmon : IIS / ASP.NET


One question that comes up regularly in our team concerns the performance counters to put in place to make sure that both IIS and ASP.NET applications are working well. However, as every application, and therefore every IIS server, behaves differently depending on what is executed, the threshold beyond which performance losses become noticeable obviously varies enormously. It is therefore essential to define these reference values from the start by testing your application before going into production. By ramping up the load progressively, you should be able to identify the point at which the application's performance degrades. Once you have determined the limit threshold and the optimal threshold, all that remains is to collect the counters, and those will be your reference values.

Now that you know how to define these reference values, all that remains is to know which counters to put in place to monitor everything. To do so, you can use the excellent PAL tool (https://pal.codeplex.com/), which provides a predefined list of counters to monitor depending on the selected product. By using these scripts, you should get an excellent overview of the behaviour of your application and of IIS under load.

However, it can be interesting to widen the scope when establishing the reference values, in order to get a global vision of how the server behaves and not just IIS or ASP.NET. You need to be able to see how the CPU, the memory, etc. behave depending on the load and on the application.

Here is a non-exhaustive list of counters that can be used to get this vision:

For IIS:

Memory:
- Available Mbytes: Shows the available memory. Important to avoid undersizing the server relative to the application's needs
- % Committed Bytes In Use: Shows the used memory. It is interesting to weigh this value against the Available Mbytes counter

Process (for all W3WP.exe processes):
- % Processor Time: Shows the CPU consumption of a given process
- Virtual Bytes: Shows the virtual memory consumption of the W3WP.exe process
- Private Bytes: Shows the private memory consumption of the W3WP.exe process
If you don't know the difference between virtual and private memory, I invite you to read the following article: http://blogs.msdn.com/b/friis/archive/2008/10/13/m-moire-recyclage-sous-iis-6.aspx
To summarize:
- Virtual Bytes = Reserved + Committed
- Committed = Private Bytes = Page File + Working Set

Processor (all instances):
- % Processor Time: Lets you weigh the overall CPU consumption against the consumption of each W3WP.exe process. If your server consumes 90% CPU while all the W3WP.exe processes together consume only 10%, the problem most likely lies somewhere other than IIS

HTTP Service Request Queues (all instances):
- CurrentQueueSize: Shows the size of the kernel-side HTTP queue, and thus whether too many requests are piling up without being handled
- RejectedRequests: Shows whether requests are rejected on the kernel side without even being processed by the user-mode side

APP_POOL_WAS (for all listed Application Pools):
- Current Application Pool State: Shows the state of an Application Pool
- Current Application Pool Uptime: Shows whether the Application Pool was restarted during the load test

 

For ASP.NET:

ASP.NET Applications (for all the applications you want to monitor):
- Compilations Total: Shows the number of compiled pages
- Request Bytes In Total: Shows the number of bytes received
- Request Bytes Out Total: Shows the number of bytes sent
- Request Execution Time: Shows the execution time of the most recent request
- Request Wait Time: Shows the time the most recent request spent in the queue
- Requests Executing: Shows the number of requests currently executing
- Request in Application Queue: Shows the number of queued requests
- Requests Timed Out: Shows the number of requests that timed out
- Requests/Sec: Shows the number of requests executed per second
- Sessions Active: Shows the number of active sessions

ASP.NET V4.0.30319:
- Application Restarts: Shows the number of Application Domain restarts


With all this information, you should have the right values to determine the optimal operating threshold and the threshold that indicates a problem for your application(s). Finally, if you want to go further on the ASP.NET side, you can have a look at this article, which explains some of the most important counters and how to detect some known issues (article in English): http://linqto.me/perfaspnet

I hope this article has been useful.
See you soon,
Sylvain Lecerf and the French Microsoft IIS Support Team

PowerShell – How to avoid the UAC prompt when automating script execution


One of my customers recently raised the fact that he could not run a PowerShell script automatically, because the script required privilege elevation (via a UAC prompt), which blocked its execution.
Of course, disabling UAC was not an option. Doing some research, I came across the following blog article:
http://blogs.technet.com/b/benshy/archive/2012/06/04/using-a-powershell-script-to-run-as-a-different-user-amp-elevate-the-process.aspx

I therefore suggested that my customer run a scheduled task with a command of this type:
Start-Process -Verb Runas powershell.exe c:\test.ps1

The scheduled task starts a PowerShell.exe process that executes the test.ps1 script located in C:\, forcing privilege elevation.
It is the Runas verb that forces the privilege elevation and thus avoids being blocked by UAC.

I hope this article has been useful.
See you soon,
Sylvain Lecerf and the French Microsoft IIS Support Team


Azure Black IPs Intro


What is the Azure Black IPs Nuget package?

In a previous post on our blog (http://blogs.msdn.com/b/friis/archive/2014/04/25/easily-detect-and-block-malicious-http-requests-targeting-iis-asp-net-using-blackips.aspx) we discussed how to go about detecting and blocking malicious input to an ASP.net website. We have taken this concept a step further with v2.0 of the tool, which is now available on Nuget.

This module was created to address a specific problem you can encounter when running ASP.net applications in production on the Internet. The problem stems from the fact that the Internet is not always a nice and law-abiding place – certain users and bots will try to probe your site and inject code into input fields to see if the application can be made vulnerable to code/JavaScript or SQL injection.

To this effect, they will send requests to your application where they send input values like the following:

<script>
window.alert("Your site has been hacked");
</script>

This code is harmless enough, but it will trigger the request validation mechanism built into ASP.net and make the Runtime raise an error. Error processing in an ASP.net application can be costly, especially if there is a high number of errors to treat in a short period of time. Hence the importance of knowing when such errors occur and which IP addresses they are coming from.

Most of the time, the IPs of the computers sending such junk requests to your site will be repetitive, and your objective should be to locate them quickly and easily so you can restrict traffic to your server from those IP addresses – at least on a temporary basis.

To this end, Azure Black IPs will trap all such errors raised by your application and display them on a control panel page. You can then inspect the page from a distance, without having to connect to your server via remote desktop (or something similar) and go through the event logs – which can be quite lengthy and time-consuming. A rough sketch of the underlying idea follows.
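For context, the general shape of such a module could look like the sketch below – purely my illustration of the mechanism, not the package's actual code:

using System;
using System.Web;

//illustration only: trap request-validation errors and note the client IP
public class ValidationErrorSniffer : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.Error += (sender, e) =>
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            if (ctx.Error is HttpRequestValidationException)
            {
                //record the offending client IP for later review
                System.Diagnostics.Trace.WriteLine(
                    "Request validation error from " + ctx.Request.UserHostAddress);
            }
        };
    }

    public void Dispose() { }
}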

 

What do you get when you install Azure Black IPs?

The Nuget package will add two .Net binaries (dlls) to your application: AzureBlackIP.dll and AzureErrorDisplay. These contain an HttpModule that listens for the specific type of error caused by such malicious/erroneous requests, and an HttpHandler that displays the control panel page.

A couple of configuration entries will be added to the site's web.config configuration file allowing you to configure the tracing. These entries are in the <system.web> and <system.webServer> tags of the configuration file:

<add verb="GET, POST" path="DisplayIP.err" type="Azure.BlackIPs.Handlers, ErrorDisplayHandler" />

<add name="Error Display Handler" path="DisplayIP.err" verb="*" type="Azure.BlackIPs.Handlers.ErrDisplayHandler" resourceType="Unspecified" preCondition="integratedMode" />

These two entries allow you to configure the url of the control panel page. By default, this is /Display.err at the root of your website.

The other value is:

<add key="blackIpLogging" value="true" />

This controls the tracing – enabling it if the value is set to true, and disabling it if set to false.

 

Using Azure Black IPs:

You can download the Nuget package from Nuget.org and install it into your application (WebForms, MVC or WebPages). Alternatively, you can run the following PowerShell command from the Package Manager Console inside Visual Studio:

Install-Package Azure-BlackIP

Once you deploy your application, assuming that you are running with the default settings, you can just type in the following address:

<yourappsurl.com>/Display.err

to display the control panel page. This will list all the recent errors (since the application started) that have been encountered.

Clicking on one of the errors will display more details, like the stack trace, the user agent of the connecting client, the request headers, and most importantly the IP address of the machine that initiated the request.

Requirements:

The module requires ASP.net 4.5 or above for the application to work correctly.

Azure Black IPs – getting started video


This week, together with my colleague Emmanuel, I released a Nuget package called Azure Black IPs that allows you to track the IP addresses sending requests that trigger ASP.net request validation on your websites. Here is a quick video on how to install and get started with the Azure Black IPs Nuget package:

To read the intro article on Azure Black IPs, please go here: http://blackips.linqto.me

Paul Cociuba
http://linqto.me/about/pcociuba

User Controls, Update Panels and JQuery scripts all working together happily.


While working on new functionality for my online favorites manager (www.linqto.me), which I encourage everyone to check out, I came across the following problem:

  • Given a UserControl, I would like to have an UpdatePanel that refreshes some of the HTML generated by the user control on the pages it is used on. Furthermore, I would like the control to dynamically inject some JavaScript into the page so that once the HTML is rendered, I can use JQuery to further change the DOM and animate things each time a partial postback occurs.

In order to explain how you can go about doing such a thing, I have created a simpler example (although you can look at the keywords feature described here for www.linqto.me to see a more complex usage). The sample I will walk you through looks like the following screenshot: it consists of two bar-like indicators that show how much of a total value is used up – this can be business data, metrics, pressure, anything else you want.

The bar indicators are actually composed of two divs each (one div – with a blue or pink color – inside another div with a gray background). When you start the example, only the two gray outer divs appear, since the width values of the inner divs are 0:

<style>
  .grayDiv {
     background-color: lightgray;
     border: solid 1px;
     border-color: gray;
     width: 500px;
   }

  .blueDiv {
     background-color: lightblue;
     width: 0px;
   }
</style>

 ...

 Indicator 1:
<br/>
<div id="bar1" class="grayDiv">
   <div id="indicator1" class="blueDiv">&nbsp;</div>
</div>
<br/>

Notice that the style element for the .blueDiv class gives the inner div a width of 0, making it invisible.

When the 'Update Indicators' button is pressed, a portion of the markup is updated, and I then use a script, triggered after the async postback from the update panel, to change the width of the inner divs by setting a new value in the CSS style.

The entire control is contained in ASCX and ASCX.cs files named WebUserControl. If you look at the markup of the control, you will see that it uses an UpdatePanel control containing a LinkButton control (the 'Update Indicators' button) and two hidden fields:

<asp:UpdatePanel ID="updUpdateIndicators" runat="server" UpdateMode="Conditional">
   <ContentTemplate>
      <asp:HiddenField ID="valIndicator1" runat="server" ClientIDMode="Static" />
      <asp:HiddenField ID="valIndicator2" runat="server" ClientIDMode="Static" />
      <asp:LinkButton ID="lnkUpdateValues" runat="server" Text="Update Indicators" OnClick="lnkUpdateValues_Click" />
   </ContentTemplate>
</asp:UpdatePanel>


When the button is clicked, the update panel initiates a partial postback to the server, where the code for the lnkUpdateValues_Click() event handler will run. Here is the code for the event handler in question:

//event handler for the link button click event
protected void lnkUpdateValues_Click(object sender, EventArgs e)
{
   //create a couple of random numbers that vary between 0 and 500 and assign them to the hidden fields
   Random randGenerator = new Random();
   int val1 = randGenerator.Next(0, 500);
   int val2 = randGenerator.Next(0, 500);

   //assign these new values to the hidden fields and send the entire thing back to the update panel
   valIndicator1.Value = val1.ToString();
   valIndicator2.Value = val2.ToString();
}

All the code does is calculate two random integer values between 0 and 500 and set the resulting numbers as the values of the two hidden fields valIndicator1 and valIndicator2.

However, please note that the divs responsible for displaying the data in a more graphical format are not contained inside the update panel, so the markup resulting from the partial postback will not change these elements.

Here is where the magic comes in. We need a script that fires whenever the update panel is updated and then makes further changes to the DOM (Document Object Model). The script needs to be dynamically injected the first time the control is rendered on the page it lives on. To achieve this, we use the Page_Load event handler of the control with the following code:

protected void Page_Load(object sender, EventArgs e)
{
    //attempt to inject javascript into the loading page
    ScriptManager.RegisterClientScriptInclude(
       this, GetType(), "valuesJqScript", ResolveUrl("~/scripts/ValuesUpdater.js"));

    ScriptManager.RegisterStartupScript(this.Page, GetType(), ClientID, "WireUpValues();", true);
}

This code is the interesting part: the first line, where we call the ScriptManager.RegisterClientScriptInclude method, indicates to the page hosting the control that it should also load a JavaScript file located in the /scripts/ folder and called ValuesUpdater.js. This appends a script block to the page instructing the browser to load the script alongside the rest of the resources used by this page.

The second line calls the ScriptManager.RegisterStartupScript overload method. This method specifies the name of the JavaScript function to be called: in our case, the file ValuesUpdater.js defines a function called WireUpValues, which should be called. Here is the code of the script function:

function WireUpValues() {
   //each time the update panel reloads the HTML markup, get hold of the hidden controls
   var val1Control = $("#valIndicator1").val();
   var val2Control = $("#valIndicator2").val();

   //select the two divs that are supposed to show the progress bars
   var div1 = $("#indicator1");
   var div2 = $("#indicator2");

   //set the css width values of the two divs to match the values passed in
   div1.css("width", val1Control);
   div2.css("width", val2Control);
}

What the code does is use JQuery to select the two hidden fields from the HTML rendered by the User Control and store their values in two variables called val1Control and val2Control. It then selects the divs whose width we need to change, using JQuery selectors, and calls the css() function on these JQuery-wrapped objects to reset the width property to the values indicated by the hidden input controls.

The call to one of the two overloads of RegisterStartupScript achieves one of the following actions:

  • The script is called once when the control is loaded, if this overload is used
  • The script is called when the control is loaded and each and every time there is a partial postback on the control, if this overload is called – which is the overload used in the sample project.

To wrap up, here is how the project works from A to Z now that we have looked at the code:

  1. The control gets loaded on the page and the Page_Load event handler is called.
  2. This instructs the page to include a JavaScript file, which is loaded with the page, and wires up the function to be called once the page has loaded and whenever a partial postback from the control has completed (i.e. the HTML markup from the postback has been incorporated into the page).
  3. The function fires on page load and on subsequent partial postbacks, and executes the simple logic of transferring the values of two integers (expressed as hidden fields) into CSS width values that make the divs wider or narrower.

You can download the sample from the link below. Happy coding with ASP.net Webforms and JQuery.

Paul Cociuba
www.linqto.me/about/pcociuba

ASP.net segment heap sizes – or how much virtual memory my web-app will need


Time and again, customers come to me saying they have a feeling that their ASP.net application takes up more memory than it did before, especially if they are migrating from the .Net 2.0 Runtime to the .Net 4.0 Runtime and from a 32 bit architecture to a 64 bit one. Some time ago, I wrote a small cheat sheet on .Net segment size vs architecture, which you can find here:

    http://linqto.me/n/AspNetMemory

Today, I would like to go into a little more detail on how we compute the memory needed at startup by an ASP.net application, based on the machine it is running on, since several factors come into play when calculating this sum.

Heaps, Heaps and more Heaps

.Net stores most of the variables you create (except for value types) in a data structure called the heap. This lives in the process address space and grows as more and more variables are needed and allocated by the application. The key is the 'growing when needed'. If the .Net Framework waited until an instruction calling 'new' actually had to allocate a variable before requesting memory from the operating system, performance would suffer badly, for reasons we will not discuss here. Hence the heap pre-allocates entire regions of memory (called segments) which can then be used to store variables.

The .Net Managed Heap is actually two data structures: the Small Object Heap and the Large Object Heap. The Small Object Heap (SOH) is used to store smaller objects. Everything larger than 85,000 bytes (roughly 85 Kb) is placed on the Large Object Heap. You can learn more about the two by reading this article on my friend Tess's blog:

    http://blogs.msdn.com/b/tess/archive/2006/06/22/643309.aspx

Suffice to say that when each of the two heaps is initialized, just before your application is loaded into the w3wp.exe process, one heap segment is reserved for the SOH and a second heap segment is reserved for the LOH. Hence we wind up with two heap segments of process address space that are reserved from the get-go. To understand more about process address spaces and reserved memory, please go through the article I wrote together with my colleague Sylvain on memory management in a Windows process, some time ago:

    http://blogs.msdn.com/b/friis/archive/2008/10/13/m-moire-recyclage-sous-iis-6.aspx

What's inside the box

Your computer / server that runs the ASP.net application you have just written, be it a virtual machine or a physical one, is equipped with a CPU. Modern processors tend to have multiple cores, that is, multiple processors on the same chip. Each core may be hyper-threaded, in which case Windows sees double the number of logical processors.

If you start the Windows Task Manager, you can see how many cores you have available by looking at the Performance tab, on the CPU resources. If you only have one graph, make sure that you have selected the option (from the context menu) to Show the Logical Processors (see screenshot).

So why is this important? Because the .Net Framework tries to take maximum advantage of the architecture of the server / machine it is running on, and will make use of each logical core available. How does it do this? One way is by creating multiple Managed Heaps instead of just one. This way, memory allocation operations can be performed by the processor the heap is assigned to. Hence, you will have as many .Net Heaps (a SOH and a LOH) as you have logical processors.

For the example screenshot above, the machine has eight processor cores. If we fire up an ASP.net application, the .Net Runtime will create 8 SOH heaps and 8 LOH heaps, each of which reserves an initial segment of memory.

Don't forget about the architecture

The architecture your computer runs on is also a factor in the equation. Older servers used to run on 32 bit architectures, meaning that each pointer (a number that points to an address in the process address space) had 32 binary digits, each either 1 or 0. More recent machines have 64 bit architectures, meaning the pointers are 64 bits long.

The 64 bit pointers are twice the size of the 32 bit ones, and hence can represent a whole lot more virtual process address space. The .Net Framework can operate on both 32 and 64 bit architectures, but will create bigger or smaller heap segments based on the architecture it is running on.

Putting it all together

To answer the question: how much memory is reserved by the .Net Framework at the start of my ASP.net application, we need to take into consideration the factors listed above:

  • Each managed heap is actually composed of two heaps: SOH and LOH
  • There will be as many heaps as there are logical processors on the machine
  • Heap segment size depends on machine architecture.

With this in mind, we can now look at the .Net segment sizes based on architecture, Runtime version and heap type (a worked example follows the list):

  • ASP.NET 2.0 on x86 : 64 Mb for small object segment per processor and 32 Mb for large object segment per processor
  • ASP.NET 2.0 on x64 : 512 Mb for small object segment per processor and 128 Mb for large object segment per processor
  • ASP.NET 4.0 on x86 : 64 Mb for small object segment per processor and 32 Mb for large object segment per processor
  • ASP.NET 4.0 on x64 : 1024 Mb for small object segment per processor and 256 Mb for large object segment per processor
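
Putting these numbers together gives a quick back-of-the-envelope estimate: on the eight-core x64 machine above running ASP.NET 4.0, the runtime reserves 8 x (1024 + 256) Mb = 10240 Mb, i.e. roughly 10 Gb of reserved (not yet committed) address space at startup. Here is a minimal PowerShell sketch of the same arithmetic, using the segment sizes from the list above:

# Back-of-the-envelope estimate of the address space reserved at startup
# (segment sizes taken from the list above: ASP.NET 4.0 on x64)
$logicalProcessors = 8      # as shown in Task Manager's Performance tab
$sohSegmentMb      = 1024   # small object heap segment size per processor
$lohSegmentMb      = 256    # large object heap segment size per processor
$reservedMb = $logicalProcessors * ($sohSegmentMb + $lohSegmentMb)
"Reserved at startup: {0} Mb" -f $reservedMb   # 10240 Mb, i.e. roughly 10 Gb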

Application pool gets recycled due to anti-virus?


It's not the first time I have heard customers complain about their anti-virus: after certain activity (such as a regular scan of system files), their application pools get restarted automatically.

When this issue happens, some customers see the following event in the System Event Log:

Log Name: System
Source: Microsoft-Windows-WAS
Date: XXXX
Event ID: 5080
Task Category: None
Level: Information
Keywords: Classic
User: N/A
Computer: XXXX
Description:
The worker processes serving application pool '[Application pool name]' are being recycled due to 1 or more configuration changes in the application pool properties which necessitate a restart of the processes.

But the anti-virus didn't make any modification to the configuration file. How could this happen?

In fact, there may be several reasons. For example, when the anti-virus scans the file in question, it may change its "Last modification time". The issue can also occur when WAS tries to detect whether the configuration file has changed while the anti-virus is scanning it at the same time: WAS detects the handle on the file and considers it modified.

One effective way to avoid this scenario is to exclude the relevant IIS configuration files and folders from the anti-virus scanning scope.

Here is an exclusion list that you may consider when configuring your anti-virus (a scripted example follows the list).

Attention: this is not an official list provided by Microsoft; it is simply a recommended list based on our support experience. You should find your own compromise between security and performance. If you need any further information, please contact your anti-virus vendor.

  • Default folder for x86 compiled ASP.Net Code : %WINDIR%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files
  • Default folder for x64 compiled ASP.Net Code : %WINDIR%\Microsoft.NET\Framework64\{version}\Temporary ASP.NET Files
  • IIS Configuration Folder : %WINDIR%\System32\Inetsrv\Config
  • Default Content Location (where the web.config stands) : %SYSTEMDRIVE%\Inetpub\WWWRoot (or the customized folder)
  • Default Logging Location : %SYSTEMDRIVE%\Inetpub\Logs\LogFiles (or the customized folder)
  • Default FREB Logging Location : %SYSTEMDRIVE%\inetpub\logs\FailedReqLogFiles (or the customized folder)
  • Default HTTP.SYS Logging Location : %WINDIR%\System32\LogFiles\HTTPERR
  • Default History Location : %SYSTEMDRIVE%\Inetpub\History
  • Default Backup Location : %WINDIR%\System32\Inetsrv\backup
  • Default folder for storing Compressed Content : %SYSTEMDRIVE%\Inetpub\temp\IIS Temporary Compressed Files
  • Default folder for compiled ASP templates : %SYSTEMDRIVE%\Inetpub\temp\ASP Compiled Templates
  • Default Configuration Isolation Path : %SYSTEMDRIVE%\Inetpub\temp\appPools
  • Default Folder for Error pages : %SYSTEMDRIVE%\Inetpub\custerr
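
If your servers run Windows Defender, these exclusions can be scripted rather than clicked through. The following is only a sketch for a few of the folders above; it assumes Windows Defender (other anti-virus products have their own exclusion mechanisms) and default content locations:

# Sketch: excluding some of the folders above from Windows Defender scanning
# (assumes Windows Defender and default locations; adapt the paths and version to your setup)
Add-MpPreference -ExclusionPath "$env:WINDIR\System32\Inetsrv\Config"
Add-MpPreference -ExclusionPath "$env:SYSTEMDRIVE\Inetpub\Logs\LogFiles"
Add-MpPreference -ExclusionPath "$env:WINDIR\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files"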

 

Hope this is useful for you.

Jin W. and IIS/ASP.NET support team of Microsoft France

Articles you may be interested in:

Microsoft Anti-Virus Exclusion List

http://social.technet.microsoft.com/wiki/contents/articles/953.microsoft-anti-virus-exclusion-list.aspx

IIS Application Pool Recycling Events

https://technet.microsoft.com/en-us/library/cc735206(v=ws.10).aspx

Common reasons why your application pool may unexpectedly recycle

http://blogs.msdn.com/b/johan/archive/2007/05/16/common-reasons-why-your-application-pool-may-unexpectedly-recycle.aspx

Debugging your custom FTP authentication provider module


If you are reading this article, I will make the assumption that you already know that in the Microsoft FTP server that ships with IIS 7.5 and above, you have three possibilities for authentication:

  • Anonymous: you let all users in without requiring credentials from their side
  • Basic Authentication: users have to provide a username and password which IIS will match to a local or domain account (the username and password are sent in clear text via the control port 21 if you have not set up FTPS).
  • Custom authentication: you write your own authentication module to validate the username password combination that a user provides you with according to your own business rules.

There is a very complete article about how to create a custom authentication provider for FTP, written by Robert McMurray, which you can find here:

http://blogs.msdn.com/b/robert_mcmurray/archive/2011/06/30/how-to-create-an-authentication-provider-for-ftp-7-5-using-blogengine-net-s-xml-membership-files.aspx

Since that article dates a bit, you can follow the article below, which provides a detailed walkthrough of how to install an FTP custom authentication provider once you have built one. This can now be done through the IIS Manager Console GUI, contrary to what the article from Robert indicates (as I said, the first article is a bit old):

https://www.iis.net/configreference/system.applicationhost/sites/site/ftpserver/security/authentication/customauthentication/providers

The question is: what happens if you try to install the provider and it does not work? How do you get started with troubleshooting it? This article intends to give you a basic workflow:

1. Start with the GAC

In order for the custom authentication provider to be found and loaded by the FTP server, it must be present in the GAC (Global Assembly Cache). Hence, as per the articles above, when you are writing your module, you must make sure that it is signed, so that it can be deployed to the GAC.

Open Windows Explorer and navigate to the C:\Windows\Assembly folder. This is where all the GAC dlls are located. If you have not deployed your module to the GAC yet, deploying it is as simple as dragging and dropping the dll from another Windows Explorer window into the one open on C:\Windows\Assembly.

(Note: in Windows 8.1 and Windows 10 the Assembly GAC shell is not present, so deployment to the GAC is only possible with GacUtil)

Please note that in some cases Windows Explorer will not refresh the contents of the GAC right away after the drag and drop; hence I recommend you close all Windows Explorer windows, start a new instance of Windows Explorer, and check for the presence of the assembly in the GAC. If the assembly is not present in the GAC (i.e. you cannot find YourFTPModule.dll there), then go no further: you have to fix this first.

You may use an elevated command prompt to see the contents of the GAC as well. Navigate to the C:\Windows\Assembly folder. Running the dir command will list the contents of the GAC's folders (Windows Explorer has a special shell to display the GAC; if you want to see the real structure, you can use the command line as shown below):

As you can see from the screenshot above, there are several folders inside the GAC (these may vary depending on whether you are on a 32 or 64 bit machine), but the folder we are interested in is GAC_MSIL (short for Microsoft Intermediate Language, the assembly language your .Net code is compiled into). It is this folder (GAC_MSIL) we should navigate to, to see if the assembly we developed is present. Use the dir /p command to list the assemblies page by page instead of all at once. If the assembly is not present, it means there was an error deploying it to the GAC.

You may use a tool like gacutil (which comes with Visual Studio) to try to deploy the assembly via the command line, since this tool will give you explicit error messages. You can learn more about the gacutil tool here:

https://msdn.microsoft.com/en-us/library/ex0ss12c%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
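
As a sketch, assuming gacutil.exe is on the PATH (for example in a Visual Studio developer command prompt) and using a placeholder name for your module, deployment and verification look like this:

# Sketch: deploying the provider assembly to the GAC and verifying the result
# (YourFTPModule.dll and its path are placeholders for your own signed assembly)
gacutil -i C:\Build\YourFTPModule.dll   # install the assembly into the GAC
gacutil -l YourFTPModule                # list matching assemblies currently in the GAC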

2. Where does the assembly get loaded once you try and authenticate?

Once the FTP server is configured to use your own authentication provider and you try to authenticate for the first time, the provider should get loaded into IIS. But where? The answer is that it gets loaded into a dllhost.exe process, and is executed and hosted inside that process. Which dllhost.exe process is it, since you are likely to have more than one such process on the machine?

Open an elevated command prompt window and type in the following command: tasklist -svc. You need to look for a service called DcomLaunch which is running in a svchost.exe process (with PID 720 in my screenshot):

You will then need to download a tool called Process Explorer from the Microsoft site (http://linqto.me/Procmon). Unzip the tool and launch it with administrative privileges (right click and select 'Run as Administrator'). This tool will let you peek into what is loaded inside each process in Windows.

Locate the svchost.exe process with the PID corresponding to the DcomLaunch service (which you obtained in the previous step using the command line). Underneath this process, there should be a dllhost.exe process which should be loading the assembly containing your authentication provider. To view the dlls loaded in this process, choose View > Lower Pane View > DLLs from the Process Explorer menu.

If the assembly is not loaded inside this process, there might be an ACL problem when trying to load the file, which is quite rare. You can download Process Monitor (http://linqto.me/Procmon) and use the tool to trace loading attempts. After you have downloaded and unzipped the tool, launch it, then click the 'magnifier' button to stop tracing and the clear button (the one with an eraser) to clear the trace. Set up a filter in procmon by pressing the filter button:

Setup a filter where:

  • PID is the ID of the dllhost.exe you have identified
  • Path contains the name of the dll which contains your provider

Re-start the procmon capture and try to authenticate to the FTP server again. Personally, I recommend using a client such as FileZilla, since it gives you great, color-coded output of the authentication attempt. Stop the procmon trace. If there is no load attempt for the dll containing your provider, the FTP server is not configured to use the authentication provider you developed. If there are failed load attempts, inspect them, since the tool will tell you why they fail.

3. My provider loads, but still does not work.

If you have gone through points 1 and 2 of this blog and your provider does load but still fails to authenticate, then it is possible that the code throws an error when called. You can either debug this with Visual Studio, if VS is installed on the same server you are setting the provider up on, or use Debug Diag (http://linqto.me/DebugDiag) to trace errors.

Setup a Debug Diag crash rule as explained in this blog post:

http://blogs.msdn.com/b/chaun/archive/2013/11/12/steps-to-catch-a-simple-crash-dump-of-a-crashing-process.aspx

The rule will not produce crash dumps, but will record all .Net exceptions encountered by the process while the crash rule is tracking it. Hence, once the rule is set up, you can try to authenticate to the FTP server one or more times, then stop (kill) the dllhost.exe process you were monitoring. This will prompt Debug Diag to create a log of the lifetime of the process and all errors encountered during execution.

The log thus created can be found in C:\Program Files\Debug Diag\Logs\<NameOfCrashRule>\. There will be a text file inside this folder whose name contains dllhost.exe and the PID of the process that was tracked by the rule. If you open this file, you should see the details of the errors encountered during the execution of the process, with .Net call stacks, towards the end of the file. You will need to examine these stacks and error messages to understand what errors are raised in your code and why.

Happy debugging for all the FTP auth module developers out there.

by Paul Cociuba
http://linqto.me/about/pcociuba 

Security guidelines to detect and prevent DOS attacks targeting IIS/Azure Web Role (PAAS)



In a previous blog, we explained how to Install IIS Dynamic IP Restrictions in an Azure Web Role. In the present article, we’ll provide guidelines to collect data and analyze it to be able to detect potential DOS/DDOS attacks. We’ll also provide tips to protect against those attacks. While the article focuses on web applications hosted in Azure Web Role (PAAS), most of the article content is also applicable to IIS hosted on premise, on Azure VMs (IAAS) or Azure Web Site.

I – Archive Web Role logs

Without any history of IIS logs, there is no way to know whether your web site has been attacked or hacked, nor when a potential threat started. Unfortunately, many customers do not keep any history of their logs, which is a real issue when the application is hosted as an Azure Web Role (PAAS), because PAAS VMs are "stateless" and can be reimaged/deleted by operations like scaling, a new deployment, etc.

A comprehensive list of Azure logs is described in the following documents:

To keep a history of logs, the Windows Azure platform provides everything needed with Windows Azure Diagnostics (WAD). All you have to do is turn the feature on by Configuring Windows Azure Diagnostics, and your IIS logs are automatically replicated to a central location in blob storage. One caveat is that a bad WAD configuration can prevent log replication and log scavenging/cleanup, which in the worst case may cause IIS logging to stop (see IIS Logs stops writing in cloud service). You also need to consider that keeping a history of logs in Azure storage can affect your Azure bill, and one "trick" is to Zip your IIS log files before transferring with Windows Azure Diagnostics. For on-premise IIS, there are many resources describing how to archive IIS logs, and you may be interested in this script: Compress and Remove Log Files (IIS and others).
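
For the on-premise case, the idea behind such a scavenging script can be sketched in a few lines of PowerShell. This is only an illustration, assuming PowerShell 5's Compress-Archive cmdlet; the log path and 30-day retention are placeholder choices:

# Sketch: zip IIS log files older than 30 days, then remove the originals
# (log path and retention period are placeholders; adapt them to your environment)
$logPath = 'C:\inetpub\logs\LogFiles'
$cutoff  = (Get-Date).AddDays(-30)
Get-ChildItem -Path $logPath -Filter *.log -Recurse |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    ForEach-Object {
        Compress-Archive -Path $_.FullName -DestinationPath ($_.FullName + '.zip') -Force
        Remove-Item $_.FullName
    }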

In some cases related to Azure Web Role, there are situations where you need to gather all logs manually and immediately. This is true if you have not set up WAD or if you can't wait for the next log replication. In this situation, you can manually gather all logs with minimal effort using the procedure described in Windows Azure PaaS Compute Diagnostics Data (see "Gathering The Log Files For Offline Analysis and Preservation"). The main limitation of this manual procedure is that you need RDP access to all VM instances.

Now that you have your logs handy, let’s see how to analyze them.

II – Analyse your logs

LOGPARSER is the best tool to analyze all kinds of logs. If you don't like the command line, you can use LogParser Studio (LPS) and read the following cool blog from my colleague Sylvain: How to analyse IIS logs using LogParser / LogParser Studio. In this section, we'll provide very simple LOGPARSER queries on IIS and HTTPERR logs to spot potential DOS attacks.

Before running any log parser query, you may want to take a quick look at the log file sizes and see whether they are stable day after day or whether you can spot unexpected "spikes". Typically, a DOS attack trying to "flood" a web application may translate into a significant increase in the HTTPERR and IIS logs. To check log sizes you can use Explorer, but you can also use LPS/LOGPARSER, as it provides a file system provider (FSLOG). In LPS, you can use the built-in query "FS / IIS Log File Sizes" to query log file sizes:

SELECT Path, Size, LastWriteTime FROM '[LOGFILEPATH]' ORDER BY Size DESC

This first step can help filter out "normal" logs and keep only the "suspicious" ones. The next step is to start the log analysis. When it comes to IIS/Web Role analysis, there are two main log types to use:

  • HTTPERR logs (default location: %WINDIR%\System32\LogFiles\HTTPERR; location on a Web Role: D:\Windows\System32\LogFiles\HTTPERR)
  • IIS logs (default location: C:\inetpub\logs\LogFiles; location on a Web Role: C:\Resources\Directory\{DeploymentID}.{Rolename}.DiagnosticStore\LogFiles\Web)


II.1 Analyzing HTTPERR log

HTTPERR logs are generally small, and this is expected (see Error logging in HTTP APIs for details). Common errors are HTTP 400 (bad request), Timer_MinBytesPerSecond and Timer_ConnectionIdle. Timer_ConnectionIdle is not really an error, as it simply indicates that an inactive client was disconnected after the HTTP keep-alive timeout was reached (see Http.sys's HTTPERR and Timer_ConnectionIdle). Note that the default HTTP keep-alive timeout in IIS is 120 seconds, and a browser like Internet Explorer uses a keep-alive timeout of 60 seconds. In this scenario IE always disconnects first, and this shouldn't cause any Timer_ConnectionIdle errors in HTTPERR. A very high number of Timer_ConnectionIdle errors may indicate a DOS/DDOS attack where an attacker tries to consume all available connections, but it can also be a non-IE client or a proxy using a high keep-alive timeout (> 120s). Likewise, seeing a lot of Timer_MinBytesPerSecond errors may indicate malicious client(s) trying to waste connections by sending "slow requests", but it can also simply be clients on poor/slow network connections…

For log analysis, I generally use a WHAT/WHO/WHEN approach:

WHAT

SELECT s-reason, COUNT(*) AS Errors FROM '[LOGFILEPATH]' GROUP BY s-reason ORDER BY Errors DESC

WHO

SELECT c-ip, COUNT(*) AS Errors FROM '[LOGFILEPATH]' GROUP BY c-ip ORDER BY Errors DESC

WHEN

SELECT QUANTIZE(TO_TIMESTAMP(date, time), 3600) AS Hour, COUNT(*) AS Total FROM '[LOGFILEPATH]' GROUP BY Hour ORDER BY Hour

This allows you to quickly see WHAT the top errors are, WHO triggered them (client IPs) and WHEN the errors occurred. Combined with the graph feature of Log Parser Studio, the WHEN query can quickly spot unexpected peak(s):

Depending on the results, some further filtering may be needed. For example, if the number of Timer_ConnectionIdle errors is very high, you can check the client IPs involved for this specific error:

SELECT c-ip, COUNT(*) AS Errors FROM '[LOGFILEPATH]' WHERE s-reason LIKE '%Timer_ConnectionIdle%' GROUP BY c-ip ORDER BY Errors DESC

We can also filter on a suspicious IP to check when the suspicious accesses occurred:

SELECT QUANTIZE(TO_TIMESTAMP(date, time), 3600) AS Hour, COUNT(*) AS Total FROM '[LOGFILEPATH]' WHERE c-ip='x.x.x.x' GROUP BY Hour ORDER BY Hour
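
If you prefer the raw command line over LPS, the same queries can be run with LogParser.exe directly, replacing the '[LOGFILEPATH]' placeholder with the actual log path. A sketch, assuming the default LogParser 2.2 install path and the Web Role HTTPERR location quoted earlier:

# Sketch: running the WHO query against HTTPERR logs from the command line
# (assumes the default LogParser 2.2 install path; adjust the log path to your environment)
& 'C:\Program Files (x86)\Log Parser 2.2\LogParser.exe' -i:HTTPERR -o:CSV `
  "SELECT c-ip, COUNT(*) AS Errors FROM 'D:\Windows\System32\LogFiles\HTTPERR\*.log' GROUP BY c-ip ORDER BY Errors DESC"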

If the above queries point to a suspicious IP, we can then check the client IP using a reverse DNS tool (http://whois.domaintools.com/) and blacklist it using either the Windows firewall, PAAS/IAAS ACLs or the IP Restrictions module…
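
For the Windows firewall route, the block can be scripted as well. A minimal sketch, using a documentation address as a placeholder for the offending IP:

# Sketch: blocking one suspicious client IP with the built-in Windows firewall
# (203.0.113.50 is a placeholder documentation address; substitute the real offender)
netsh advfirewall firewall add rule name="Block suspicious client" dir=in action=block remoteip=203.0.113.50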

II.2 Analyzing IIS logs

For the IIS logs, I use the same WHAT/WHO/WHEN approach as above:

WHAT

SELECT cs-uri-stem, COUNT(*) AS Hits FROM '[LOGFILEPATH]' GROUP BY cs-uri-stem ORDER BY Hits DESC

WHO

SELECT c-ip, COUNT(*) AS Hits FROM '[LOGFILEPATH]' GROUP BY c-ip ORDER BY Hits DESC

WHEN

SELECT QUANTIZE(TO_TIMESTAMP(date, time), 3600) AS Hour, COUNT(*) AS Total FROM '[LOGFILEPATH]' GROUP BY Hour ORDER BY Hour

The following query can also be very useful to check for errors over time:

ERRORS / HOUR

SELECT date AS Date, QUANTIZE(time, 3600) AS Hour, sc-status AS Status, COUNT(*) AS ErrorCount FROM '[LOGFILEPATH]' WHERE sc-status >= 400 GROUP BY date, hour, sc-status ORDER BY ErrorCount DESC


The above queries are deliberately simple. Depending on the results, we will need to "polish" them by adding filtering, grouping, etc. There are already a lot of excellent articles covering this topic, so I won't reinvent the wheel:

III – Mitigating Denial Of Service attacks (DOS)

Security guidelines for IIS/Azure Web Role are described in the Windows Azure Network Security Whitepaper (see the sections "Security Management and Threat Defense" and "Guidelines for Securing Platform as a Service"). While Azure implements sophisticated DOS/DDOS defenses against large scale DOS attacks targeting the Azure DC, or DOS attacks initiated from the DC itself, the document clearly mentions that "it is still possible for tenant applications to be targeted individually". This basically means that a web application in Azure should use similar means as an on-premise application to protect itself against attackers; pragmatically, this means you have to put in place a couple of actions:

 

IV – Mitigating Distributed Denial Of Service attacks (DDOS)

While the above mitigations are valid for simple DDOS attacks, they may not be enough to mitigate sophisticated DDOS attacks.

In some situations, it's possible to find a "pattern" through log analysis. For example, inspecting the HTTP referrer header may reveal that the DDOS requests come from users visiting suspicious sites (typically porn sites). In this case, URLREWRITE can be used to filter malicious requests (see Blocking Image Hotlinking, Leeching and Evil Sploggers with IIS Url Rewrite).

Mitigating more sophisticated DDOS attacks originating from SPAM/botnets requires additional methods when it is not possible to find a pattern that distinguishes legitimate requests from malicious ones. In this case, we need to combine sophisticated approaches to mitigate a sophisticated attack:


Unfortunately, implementing the above mitigations requires custom code or relies on ISP capabilities (SINKHOLE). Dedicated security software like the Barracuda Web Application Firewall implements some of the above features and provides an "all-in-one" approach to protecting your web server. The following blog provides a quick summary of the setup steps: How to Setup and Protect an Azure Application with a Barracuda Firewall


V – Other security best practices

While this is unrelated to DOS/DDOS attacks, it is worth mentioning some basic security rules:

If you are interested in Azure Security, the following page is a very good central repository of resources: Security in Azure.

I hope you'll find the above information useful, and remember that "forewarned is forearmed"…

Emmanuel

 


 


Perfmon & IIS / ASP.NET


One regular question posed to our team concerns the performance counters to set up to ensure that IIS and ASP.Net application(s) are working properly. However, as every web application, and hence every IIS server, behaves differently depending on what is executed, the threshold at which performance is impacted will obviously vary a lot. Thus it's essential to define those key values before putting your application into production. By doing a progressive load test, you should be able to identify when application performance degrades. Once you've determined the threshold before degradation and the threshold where everything works well, you just have to collect the performance counters and you'll have your key values.

Now that you know how to define your key values, you need to know how to collect them. To do so, you can use the excellent PAL tool (https://pal.codeplex.com/), which provides a predefined list of counters to monitor based on the selected product. By using those scripts, you should get a good overview of your application's and IIS's behaviour under load.

However, it can be interesting to broaden the scope in order to get a global vision of how the whole server is behaving, instead of just IIS or ASP.NET. You need to check CPU, memory, etc. and put this in balance with your application and IIS.

Here is a non-exhaustive list of counters you could use to get this vision (a collection sketch follows the list):

For IIS:

Memory:
- Available Mbytes: Allows you to see the available memory. It’s important to be sure the server isn’t undersized for the needs of the application
- % Committed Bytes In Use: Allows you to see the used memory. It's interesting to put this in balance with the Available Mbytes counter

Process (For all W3WP.exe processes):
- % Processor Time: Allows you to see the CPU consumption for a given process
- Virtual Bytes: Allows you to see the virtual memory for the process W3WP.exe
- Private bytes: Allows you to see the private memory for the process W3WP.exe

Processor (All instances):
- % Processor Time: Allows you to put the total CPU consumption in balance with that of each W3WP.exe. For example, if your server is consuming 90% of CPU and the sum of the W3WP.exe CPU consumption is 10%, you clearly have an issue somewhere other than IIS

HTTP Service Request Queues (All instances):
- CurrentQueueSize: Allows you to see the size of the HTTP kernel-side queue, and thus whether a huge number of requests are getting queued without being handled by the user-mode side
- RejectedRequests: Allows you to see if requests are rejected from Kernel side without being handled by the User Mode side

APP_POOL_WAS (For all listed Application Pools):
- Current Application Pool State: Allows you to see the state of an Application Pool
- Current Application Pool Uptime: Allows you to see if the Application Pool has been restarted or not (really useful during a load test)

 

For ASP.NET:

ASP.NET Applications (For all applications you want to monitor):
- Compilations Total: Allows you to see the number of compiled pages
- Request Bytes In Total: Allows you to see the number of received bytes
- Request Bytes Out Total: Allows you to see the number of sent bytes
- Request Execution Time: Allows you to see the execution time for the most recent request
- Request Wait Time: Allows you to see the time spent in the queue before being handled for the most recent request
- Requests Executing: Allows you to see the number of requests being executed
- Request in Application Queue: Allows you to see the number of requests in the queue
- Requests Timed Out: Allows you to see the number of timed-out requests
- Requests/Sec: Allows you to see the number of requests executed per second
- Sessions Active: Allows you to see the number of active sessions

ASP.NET V4.0.30319:
- Application Restarts: Allows you to see the number of restarts for the Application Domain
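
As announced above, here is a minimal logman sketch for collecting some of these counters in a perfmon data collector. The counter names assume an English system, and the collector name and 15-second sample interval are placeholder choices:

# Sketch: creating and starting a counter collector for a few of the counters above
# (English counter names assumed; collector name and interval are arbitrary)
logman create counter IIS-Baseline -si 00:00:15 -c `
  "\Memory\Available MBytes" `
  "\Process(w3wp*)\% Processor Time" `
  "\Process(w3wp*)\Private Bytes" `
  "\HTTP Service Request Queues(*)\CurrentQueueSize" `
  "\ASP.NET Applications(__Total__)\Requests/Sec"
logman start IIS-Baseline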

 

With all this information, you should be able to determine the threshold where your application behaves as expected and the threshold where problems start to occur. In addition, if you want to go further with ASP.NET, you can have a look at this old but good article, which explains some key counters and how to detect some known issues: http://linqto.me/perfaspnet

I hope this article has been useful.
Sylvain Lecerf and the French Microsoft IIS Support Team

Perfmon : IIS / ASP.NET


One question that comes up regularly in our team concerns the performance counters to put in place to make sure that both IIS and the ASP.NET application(s) are working well. However, as every application, and therefore every IIS server, will behave differently depending on what is executed, the threshold beyond which performance losses become noticeable will obviously vary enormously. It is therefore essential to define these reference values up front by testing your application before it goes into production. By ramping the load up progressively, you should be able to identify the point at which the application's performance degrades. Once you have determined the limit threshold and the optimal threshold, all that remains is to collect the counters, and these will be your reference values.

Now that you know how to define these reference values, all that remains is to know which counters to put in place to monitor everything. To do so, you can use the excellent PAL tool (https://pal.codeplex.com/), which provides a predefined list of counters to monitor based on the selected product. By using these scripts, you should get an excellent overview of the behaviour of your application and of IIS under load.

However, it can be interesting to broaden the scope when establishing the reference values, in order to get a global vision of how the server as a whole is working, and not just IIS or ASP.NET. You need to be able to see how the CPU, the memory, etc. behave depending on the load and on the application.

Here is a non-exhaustive list of counters that can be used to get this vision:

For IIS:

Memory:
- Available Mbytes: Shows the available memory. Important so as not to undersize the server with respect to the application's needs
- % Committed Bytes In Use: Shows the used memory. It is interesting to put this value in balance with the Available Mbytes counter

Process (for all W3WP.exe processes):
- % Processor Time: Shows the CPU consumption of a given process.
- Virtual Bytes: Shows the virtual memory consumption of the W3WP.exe process
- Private Bytes: Shows the private memory consumption of the W3WP.exe process
If you do not know the difference between virtual and private memory, I invite you to read the following article: http://blogs.msdn.com/b/friis/archive/2008/10/13/m-moire-recyclage-sous-iis-6.aspx
To summarize:
- Virtual Bytes = Reserved + Committed
- Committed = Private Bytes = Page File + Working Set

Processor (all instances):
- % Processor Time: Lets you relate the overall CPU consumption to the consumption of each W3WP.exe. If your server consumes 90% of CPU while all the W3WP.exe processes together consume only 10%, the problem is most likely somewhere other than IIS

HTTP Service Request Queues (all instances):
- CurrentQueueSize: Shows the size of the kernel-side HTTP queue, and thus whether too many requests are piling up without being handled
- RejectedRequests: Shows whether requests are rejected on the kernel side without even being processed by the user-mode side

APP_POOL_WAS (for all listed Application Pools):
- Current Application Pool State: Shows the state of an Application Pool
- Current Application Pool Uptime: Shows whether the Application Pool was restarted or not during the load test

For ASP.NET:

ASP.NET Applications (all the applications you want to monitor):
- Compilations Total: Shows the number of compiled pages
- Request Bytes In Total: Shows the number of bytes received
- Request Bytes Out Total: Shows the number of bytes sent
- Request Execution Time: Shows the execution time of the most recent request
- Request Wait Time: Shows the time the most recent request spent in the queue before being handled
- Requests Executing: Shows the number of requests currently executing
- Request in Application Queue: Shows the number of queued requests
- Requests Timed Out: Shows the number of requests that ended with a timeout
- Requests/Sec: Shows the number of requests executed per second
- Sessions Active: Shows the number of active sessions

ASP.NET V4.0.30319:
- Application Restarts: Shows the number of Application Domain restarts

With this set of information, you should have the right values to determine the optimal operating threshold and the threshold that indicates a problem for your application(s). Finally, if you want to go further on the ASP.NET side, you can have a look at this article, which explains some of the most important counters and how to detect some known issues: http://linqto.me/perfaspnet

Hoping this article is useful to you.
See you soon
Sylvain Lecerf and the French Microsoft IIS Support Team

PowerShell – How to avoid the UAC prompt when automating script execution


One of my customers recently raised the fact that he could not run a PowerShell script automatically, because the script required privilege elevation (via a UAC prompt), which blocked its execution.
Of course, disabling UAC was not an option. While doing some research, I came across the following blog post:
http://blogs.technet.com/b/benshy/archive/2012/06/04/using-a-powershell-script-to-run-as-a-different-user-amp-elevate-the-process.aspx

I therefore suggested that my customer run a scheduled task with a command of this type:
Start-Process -Verb Runas powershell.exe c:\test.ps1

The scheduled task starts a PowerShell.exe process which runs the test.ps1 script located in C:\ while forcing privilege elevation.
It is the Runas verb that forces the privilege elevation and thus avoids being blocked by UAC.
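
For completeness, here is a sketch of how such a scheduled task could be registered from an elevated prompt. The task name, schedule and script path are placeholders, and /rl HIGHEST lets the task start with the highest privileges available to the account, so the elevation does not hit an interactive UAC prompt:

# Sketch: registering the scheduled task (task name, schedule and script path are placeholders)
# /rl HIGHEST makes the task run with the highest privileges available to the account
schtasks /create /tn "RunTestPs1Elevated" /sc daily /st 03:00 /rl HIGHEST /tr "powershell.exe -Command Start-Process -Verb RunAs powershell.exe c:\test.ps1"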

Hoping this article is useful to you.
See you soon
Sylvain Lecerf and the French Microsoft IIS Support Team

Azure Black IPs Intro


What is the Azure Black IPs Nuget Package.

In a previous post on our blog (http://blogs.msdn.com/b/friis/archive/2014/04/25/easily-detect-and-block-malicious-http-requests-targeting-iis-asp-net-using-blackips.aspx) we discussed how to go about detecting and blocking malicious input to an ASP.net website. We have taken this concept a step further with v2.0 of the tool, which is now available on Nuget.

This module was created to address a specific problem that you can encounter when running ASP.net applications in production on the Internet. The problem stems from the fact that the Internet is not always a nice and law-abiding place: there are certain users and bots that will try to force your site and inject code into its input fields to see if the application can be made vulnerable to code / javascript or SQL injection.

To this effect, they will send requests to your application with input values like the following:

<script>
window.alert("Your site has been hacked");
</script>

This code is harmless enough, but it will trigger the request validation mechanisms built into ASP.net and make the Runtime raise an error. Error processing in an ASP.net application can be costly, especially if there is a high number of errors to handle in a short period of time. Hence the importance of knowing when such errors occur and which IP addresses they come from.

Most of the time, the IPs of the computers sending such junk requests to your site will be repetitive, and your objective should be to locate them quickly and easily so you can restrict traffic to your server from those IP addresses, at least on a temporary basis.

To this end, Azure Black IPs allows you to trap all such errors raised by your application and display them on a control panel page. You can then inspect the page even remotely, without having to connect to your server via remote desktop (or something similar) and go through the event logs, which can be quite lengthy and time consuming.

 

What do you get when you install Azure Black IPs.

The Nuget package will add two .Net binaries (dlls) to your application: AzureBlackIP.dll and AzureErrorDisplay.dll. These contain an HttpModule that listens for the specific type of error caused by such malicious / erroneous requests, and an HttpHandler that displays the control panel page.

A couple of configuration entries will be added to the site's web.config configuration file, allowing you to configure the tracing. These entries live in the <system.web> and <system.webServer> sections of the configuration file:

<add verb="GET, POST" path="DisplayIP.err" type="Azure.BlackIPs.Handlers.ErrDisplayHandler" />

<add name="Error Display Handler" path="DisplayIP.err" verb="*" type="Azure.BlackIPs.Handlers.ErrDisplayHandler" resourceType="Unspecified" preCondition="integratedMode" />

These two entries allow you to configure the url of the control panel page. By default, this is /Display.err at the root of your website.

The other value is:

<add key="blackIpLogging" value="true" />

This controls the tracing: it is enabled if the value is set to true, and disabled if set to false.

 

Using Azure Black IPs:

You can download the Nuget package from Nuget.org and install it into your application (WebForms, MVC or WebPages). Alternatively, you can run the following PowerShell command from the Package Manager Console inside Visual Studio:

Install-Package Azure-BlackIP

Once you deploy your application, assuming that you are running with the default settings, you can just type in the following address:

<yourappsurl.com>/Display.err

to display the control panel page. This will list all the recent errors (since the application started) that were encountered.

Clicking on one of the errors will display more details, like the stack trace, the user agent of the connecting client, the request headers, and most importantly the IP address of the machine that initiated the request.

Requirements:

The module requires that you are running ASP.net 4.5 or above for the application to work correctly.

Azure Black IPs – getting started video


This week, together with my colleague Emmanuel, we released a Nuget package called Azure Black IPs that allows you to track IP addresses sending requests that trigger ASP.net request validation on your websites. Here is a quick video showing how to install and get started with the Azure Black IPs Nuget package:

To read the intro article on Azure Black IPs, please go here: http://blackips.linqto.me

Paul Cociuba
http://linqto.me/about/pcociuba
