IIS Field Readiness – blog of the European IIS team

User Controls, Update Panels and JQuery scripts all working together happily.


While working on implementing new functionality on my online favorites manager (www.linqto.me), which I encourage everyone to check out, I came across the following problem:

  • Given a UserControl, I would like to have an UpdatePanel that refreshes some of the HTML generated by the user control on the pages where it is used. Furthermore, I would like the control to dynamically inject some JavaScript into the page, so that once the HTML is rendered I can use JQuery to further change the DOM and animate things each time a partial postback occurs.

In order to explain how you could go about doing such a thing, I have created a simpler example (although you can look into the keywords feature described here for www.linqto.me to see a more complex usage). The sample I will walk you through looks like the following screenshot: it consists of two bar-like indicators that show how much of a total value is used up – this can be business data, metrics, pressure, or anything else you want.

The bar indicators are actually composed of two divs each (one div – with a blue or pink color – inside another div with a gray background). When you start the example, only the gray outer divs are visible, since the width values of the interior divs are 0:

<style>
  .grayDiv{
     background-color: lightgray;
     border: solid 1px;
     border-color: gray;
     width: 500px;
   }

 

  .blueDiv{
     background-color: lightblue;
     width: 0px;
   }

</style>

 

 Indicator 1:
<br />
<div id="bar1" class="grayDiv">
   <div id="indicator1" class="blueDiv">&nbsp;</div>
</div>
<br />

Notice that the style element for the .blueDiv class assigns a width of 0 to the inner div, making it invisible.

When the ‘Update Indicators’ button is pressed, a portion of the markup is updated. A script, triggered after the async postback from the update panel, then changes the width of the inner divs by setting a new value in the CSS style.

The entire control is contained in ASCX and ASCX.cs files that are named WebUserControl. If you look at the markup of the control, you will see that it makes use of an UpdatePanel control that contains a LinkButton control (the ‘Update Indicators’ button) and two hidden fields:

<asp:UpdatePanel ID="updUpdateIndicators" runat="server" UpdateMode="Conditional">
   <ContentTemplate>
      <asp:HiddenField ID="valIndicator1" runat="server" ClientIDMode="Static" />
      <asp:HiddenField ID="valIndicator2" runat="server" ClientIDMode="Static" />
      <asp:LinkButton ID="lnkUpdateValues" runat="server" Text="Update Indicators" OnClick="lnkUpdateValues_Click" />
   </ContentTemplate>
</asp:UpdatePanel>

When the button is clicked, the update panel initiates a partial postback to the server, where the code for the lnkUpdateValues_Click() event handler will run. Here is the code for the event handler in question:

//event handler for the link button click event
protected void lnkUpdateValues_Click(object sender, EventArgs e)
{
   //create a couple of random numbers that vary between 0 and 500 and assign them to the hidden fields
   Random randGenerator = new Random();
   int val1 = randGenerator.Next(0, 500);
   int val2 = randGenerator.Next(0, 500);

   //assign these new values to the hidden fields and send the entire thing back to the update panel
   valIndicator1.Value = val1.ToString();
   valIndicator2.Value = val2.ToString();
}

All the code does is calculate two random integer values between 0 and 500 and then set the resulting numbers as the values of the two hidden fields valIndicator1 and valIndicator2.

However, please note that the divs that are responsible for displaying the data in a more graphical format are not contained inside the update panel, and the markup resulting from the partial post-back will not change these elements.

Here is where the magic comes in. We need a script that is fired whenever the update panel is updated and will then make further changes to the DOM (Document Object Model). The script needs to be dynamically injected the first time the control is rendered on the page it will live on. To achieve this, we use the Page_Load event handler for the control with the following code:

protected void Page_Load(object sender, EventArgs e)
{
    //attempt to inject javascript into the loading page
    ScriptManager.RegisterClientScriptInclude(
       this, GetType(), "valuesJqScript", ResolveUrl("~/scripts/ValuesUpdater.js"));

    ScriptManager.RegisterStartupScript(this.Page, GetType(), ClientID, "WireUpValues();", true);
}

This code is the interesting part: the first line, where we call the ScriptManager.RegisterClientScriptInclude method, will indicate to the page hosting the control that it should also load a JavaScript file that is located in the /scripts/ folder and that is called ValuesUpdater.js. This will append a script block to the page instructing the browser to also load the script alongside the rest of the resources that are used by this page.

The second line calls an overload of the ScriptManager.RegisterStartupScript method. This call specifies the JavaScript function to be invoked: in our case, the file ValuesUpdater.js defines a function called WireUpValues, which should be called. Here is the code of the script function:

function WireUpValues() {
   //each time the update panel reloads the HTML markup, get a hold of the hidden controls
   var val1Control = $("#valIndicator1").val();
   var val2Control = $("#valIndicator2").val();

   //select the two divs that are supposed to show the progress bars
   var div1 = $("#indicator1");
   var div2 = $("#indicator2");

   //set the CSS width values of the two divs to match the values passed in
   div1.css("width", val1Control);
   div2.css("width", val2Control);
}

The code uses JQuery to select the two hidden fields from the HTML rendered by the user control and stores their values in two variables called val1Control and val2Control. It then selects the divs whose width we need to change, using JQuery selectors, and calls the css() function on these JQuery-wrapped objects to reset the width property to the values indicated by the hidden input controls.

The call to one of the two overloads of RegisterStartupScript achieves one of the following actions:

  • The script is called once, when the control is loaded, if the first overload is used.
  • The script is called when the control is loaded and after each and every partial postback on the control, if the second overload is used – this is the overload used in the sample project, as shown in the sketch below.
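To make the distinction concrete, here is a minimal sketch of the two calls side by side; the comments mirror the behavior described in the list above:

// Overload taking a Control as the first parameter: per the description
// above, the script is registered once, when the control is loaded.
ScriptManager.RegisterStartupScript(
   this, GetType(), ClientID, "WireUpValues();", true);

// Overload taking the Page as the first parameter: the script runs on load
// and again after every partial postback - the overload the sample uses.
ScriptManager.RegisterStartupScript(
   this.Page, GetType(), ClientID, "WireUpValues();", true);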

To wrap up, here is how the project works from A to Z now that we have looked at the code:

  1. The control gets loaded on the page and the Page_Load event handler is called.
  2. This instructs the page to include a JavaScript file, which is loaded with the page and wires up the function to be called once the page has loaded and whenever a partial postback from the control has completed (the HTML markup from the postback is incorporated into the page).
  3. The function fires on page load and on subsequent partial postbacks, and executes simple logic: it transfers the values of two integers (stored in hidden fields) into CSS width values that make the divs wider or narrower.

You can download the sample from the link below. Happy coding with ASP.net Webforms and JQuery.

Paul Cociuba
www.linqto.me/about/pcociuba

UserControlSample.zip


ASP.net segment heap sizes – or how much virtual memory my web-app will need


Time and again, customers come to me saying they have a feeling that their ASP.net application takes up more memory than it did before, especially when migrating from the .Net 2.0 Runtime to the .Net 4.0 Runtime and from a 32 bit architecture to a 64 bit architecture. Some time ago, I wrote a small cheat sheet on .Net segment size vs. architecture, which you can find here:

    http://linqto.me/n/AspNetMemory

Today, I would like to go into a little more detail on how we compute the memory needed at startup by an ASP.net application, based on the machine it is running on, since several factors come into play when calculating this sum.

Heaps, Heaps and more Heaps

.Net stores most of the variables you create (except for the value types) in a data structure called the heap. This lives in the process address space and grows as more and more variables are needed and allocated by the application. The key is the ‘growing when needed’. If the .Net Framework waited for each instruction calling ‘new’ before allocating memory, performance would suffer badly, for reasons we will not discuss here. Hence the heap pre-allocates entire regions of memory (called segments) which can then be used to store variables.

The .Net Managed Heap is actually two data structures: the Small Object Heap and the Large Object Heap. The Small Object Heap (SOH) is used to store smaller objects. Everything larger than 85,000 bytes (roughly 85 Kb) is placed on the Large Object Heap. You can learn more about the two by reading this article on my friend Tess’s blog:

    http://blogs.msdn.com/b/tess/archive/2006/06/22/643309.aspx

Suffice it to say that when each of the two heaps is initialized, just before your application is loaded into the w3wp.exe process, one heap segment is reserved for the SOH and a second for the LOH. Hence we wind up with two heap segments of process address space reserved from the get-go. To understand more about process address spaces and reserved memory, please go through the article I wrote together with my colleague Sylvain on memory management in a Windows process, some time ago:

    http://blogs.msdn.com/b/friis/archive/2008/10/13/m-moire-recyclage-sous-iis-6.aspx

What’s inside the box

Your computer / server running the ASP.net application you have just written, be it a virtual machine or a physical machine, will be equipped with a CPU. The central processing unit can (and normally does) have more than one core – in effect, multiple processors on the same chip. Each of the cores may be hyper-threaded, with the result that Windows may see double the number of processors.

If you start the Windows Task Manager, you can see how many cores you have available by looking at the Performance tab, on the CPU resources. If you only have one graph, make sure that you have selected the option (from the context menu) to Show the Logical Processors (see screenshot).

So why is this important? Because the .Net Framework will try and take maximum advantage of the architecture of the server / machine it is running on, and will make use of each logical core available. How can it do this? One way is by creating multiple Managed Heaps instead of just one. In this way, the memory allocation operations that are needed can be performed by the processor the heap is allocated to. Hence, you will have as many .Net Heaps (a SOH and LOH) as you have processors.

For the example screenshot above, the machine has eight processor cores. If we fire up an ASP.net application, the .Net Runtime will create 8 SOH and LOH heaps, each of which will reserve an initial segment of memory.

Don’t forget about the architecture

The architecture your computer runs on is also a factor in the equation. Older servers ran on 32 bit architectures, meaning that each pointer (a number that points to an address in the process address space) had 32 binary digits, each either 1 or 0. More recent machines have 64 bit architectures, meaning the pointers are 64 bits long.

The 64 bit pointers are twice the size of the 32 bit ones, and hence we can represent a whole lot more virtual process address space on such an architecture. The .Net Framework can operate on both 32 and 64 bit architectures, but will create bigger or smaller heap segments based on the architecture it is running on.

Putting it all together

To answer the question: how much memory is reserved by the .Net Framework at the start of my ASP.net application, we need to take into consideration the factors listed above:

  • Each managed heap is actually composed of two heaps: SOH and LOH
  • There will be as many heaps as there are logical processors on the machine
  • Heap segment size depends on machine architecture.

With this in mind, we can now look at the .Net segment sizes based on architecture, Runtime version and heap type:

  • ASP.NET 2.0 on x86 : 64 Mb for small object segment per processor and 32 Mb for large object segment per processor
  • ASP.NET 2.0 on x64 : 512 Mb for small object segment per processor and 128 Mb for large object segment per processor
  • ASP.NET 4.x on x86 : 64 Mb for small object segment per processor and 32 Mb for large object segment per processor
  • ASP.NET 4.x on x64 : 1024 Mb for small object segment per processor and 256 Mb for large object segment per processor
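As a quick sanity check, the initial reservation can be estimated by multiplying the per-processor total by the number of logical processors. Here is a minimal C# sketch, using the x64 / .Net 4.x segment sizes from the list above:

using System;

class HeapReservationEstimate
{
    static void Main()
    {
        // Per-processor segment sizes for ASP.NET 4.x on x64, taken from
        // the list above. Adjust for your runtime and architecture.
        const int sohSegmentMb = 1024; // small object heap segment
        const int lohSegmentMb = 256;  // large object heap segment

        // One SOH + LOH pair is created per logical processor.
        int logicalProcessors = Environment.ProcessorCount;

        long reservedMb = (long)logicalProcessors * (sohSegmentMb + lohSegmentMb);
        Console.WriteLine("~" + reservedMb + " Mb of address space reserved at startup");
    }
}

For the eight-processor machine from the screenshot above, this gives 8 x 1280 Mb = 10240 Mb of virtual address space – reserved, not committed, which is why it shows up as virtual memory rather than working set.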


[2nd of February 2016] Here is a small side note as an add-on:

If you are running on a 32 bit architecture, the segment size shrinks based on the number of processors, as follows:
- if you are running on a machine with more than 4 logical processors, the segment sizes for the managed heaps will be: 32 Mb for the small object heap and 16 Mb for the large object heap
- if you are running on a machine with more than 8 logical processors, the sizes are as follows: 16 Mb for the small object heap and 8 Mb for the large object heap

This is done to prevent the .Net Runtime from reserving more memory than is possible in a 32 bit address space (2 Gb max, assuming you are not using the /3GB switch).

[15th of February 2016] – and just for completeness, here is a table with all the possibilities for the initial segment sizes:

Framework Version  | Architecture | # of Logical Processors | Small Object Heap | Large Object Heap | Total per processor
.Net Framework 2   | x86          | Nb proc <= 4            | 64 Mb             | 32 Mb             | 96 Mb
.Net Framework 2   | x86          | 4 < Nb proc <= 8        | 32 Mb             | 16 Mb             | 48 Mb
.Net Framework 2   | x86          | 8 < Nb proc             | 16 Mb             | 8 Mb              | 24 Mb
.Net Framework 2   | x64          | Any                     | 512 Mb            | 128 Mb            | 640 Mb
.Net Framework 4.x | x86          | Nb proc <= 4            | 64 Mb             | 32 Mb             | 96 Mb
.Net Framework 4.x | x86          | 4 < Nb proc <= 8        | 32 Mb             | 16 Mb             | 48 Mb
.Net Framework 4.x | x86          | 8 < Nb proc             | 16 Mb             | 8 Mb              | 24 Mb
.Net Framework 4.x | x64          | Any                     | 1024 Mb           | 256 Mb            | 1280 Mb

 

Application pool gets recycled due to anti-virus?


This is not the first time I have heard customers complain about their anti-virus: after certain activity (such as a regular scan of system files), their application pools get restarted automatically.

When this issue happens, some customers are seeing the following event in System Event Log:

Log Name: System
Source: Microsoft-Windows-WAS
Date: XXXX
Event ID: 5080
Task Category: None
Level: Information
Keywords: Classic
User: N/A
Computer: XXXX
Description:
The worker processes serving application pool ‘[Application pool name]‘ are being recycled due to 1 or more configuration changes in the application pool properties which necessitate a restart of the processes.

But the anti-virus didn’t make any modification to the configuration file. How could this happen?

In fact, there may be several reasons. For example, when the anti-virus scans the file in question, it changes the “last modification time”. It can also occur when WAS tries to detect whether the configuration file has changed while the anti-virus is scanning it at the same time: WAS detects the handle on the file and considers it modified.

One effective way to avoid this scenario is by excluding the related configuration files of IIS from the anti-virus scanning scope.

Here is an exclusion list that you may consider when configuring your anti-virus.

Attention: this is not an official list provided by Microsoft, it is simply a recommended list summarized according to our support experience. You should find your own compromise between security and performance. If you need any further information, please contact your anti-virus vendor.

  • Default folder for x86 compiled ASP.Net Code : %WINDIR%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files
  • Default folder for x64 compiled ASP.Net Code : %WINDIR%\Microsoft.NET\Framework64\{version}\Temporary ASP.NET Files
  • IIS Configuration Folder : %WINDIR%\System32\Inetsrv\Config
  • Default Content Location (where the web.config stands) : %SYSTEMDRIVE%\Inetpub\WWWRoot (or the customized folder)
  • Default Logging Location : %SYSTEMDRIVE%\Inetpub\Logs\LogFiles (or the customized folder)
  • Default FREB Logging Location : %SYSTEMDRIVE%\inetpub\logs\FailedReqLogFiles (or the customized folder)
  • Default HTTP.SYS Logging Location : %WINDIR%\System32\LogFiles\HTTPERR
  • Default History Location : %SYSTEMDRIVE%\Inetpub\History
  • Default Backup Location : %WINDIR%\System32\Inetsrv\backup
  • Default folder for storing Compressed Content : %SYSTEMDRIVE%\Inetpub\temp\IIS Temporary Compressed Files
  • Default folder for compiled ASP templates : %SYSTEMDRIVE%\Inetpub\temp\ASP Compiled Templates
  • Default Configuration Isolation Path : %SYSTEMDRIVE%\Inetpub\temp\appPools
  • Default Folder for Error pages : %SYSTEMDRIVE%\Inetpub\custerr

 

Hope this is useful for you.

Jin W. and IIS/ASP.NET support team of Microsoft France

Articles you may be interested in:

Microsoft Anti-Virus Exclusion List

http://social.technet.microsoft.com/wiki/contents/articles/953.microsoft-anti-virus-exclusion-list.aspx

IIS Application Pool Recycling Events

https://technet.microsoft.com/en-us/library/cc735206(v=ws.10).aspx

Common reasons why your application pool may unexpectedly recycle

http://blogs.msdn.com/b/johan/archive/2007/05/16/common-reasons-why-your-application-pool-may-unexpectedly-recycle.aspx

Debugging your custom FTP authentication provider module


If you are reading this article, I will assume that you already know that the Microsoft FTP server that comes with IIS 7.5 or above gives you three possibilities for authentication:

  • Anonymous: you let all users in without requiring credentials from their side
  • Basic Authentication: users have to provide a username and password which will be matched by IIS to a local or domain account (the username and password will be sent in clear text via the control port 21 if you have not setup FTPS).
  • Custom authentication: you write your own authentication module to validate the username password combination that a user provides you with according to your own business rules.

There is a very complete article about how to create a custom authentication provider with FTP, written by Robert McMurray – which you can find here:

http://blogs.msdn.com/b/robert_mcmurray/archive/2011/06/30/how-to-create-an-authentication-provider-for-ftp-7-5-using-blogengine-net-s-xml-membership-files.aspx

Since that article is a bit dated, you can also follow the article below, which provides a detailed walkthrough of how to install an FTP custom authentication provider once you have built one. Contrary to what Robert’s article indicates, this can be done through the IIS Manager Console GUI (as I said, the first article is a bit old):

https://www.iis.net/configreference/system.applicationhost/sites/site/ftpserver/security/authentication/customauthentication/providers

The question is: what happens if you try to install the provider and it does not work? How do you get started troubleshooting it? This article intends to give you a basic workflow.

1. Start with the GAC

In order for the custom authentication provider to be found and loaded by the FTP server, it must be present in the GAC (Global Assembly Cache). Hence, as per the articles above, when you are writing your module, you must make sure that it is signed, so that it can be deployed to the GAC.

Open Windows Explorer and navigate to the C:\Windows\Assembly folder. This is where all the GAC dlls are located. If you have not deployed your module to the GAC, deploying it is as simple as dragging and dropping the dll from another Windows Explorer window into the one open at C:\Windows\Assembly.

(Note: in Windows 8.1 and Windows 10 the Assembly GAC shell is not present, so deployment to the GAC is only possible with GacUtil)

Please note that in some cases, Windows Explorer will not refresh the contents of the GAC right away after the drag and drop, hence I recommend you close all Windows Explorer windows, start a new instance of Windows Explorer, and check for the presence of the assembly in the GAC. If the assembly is not present in the GAC (i.e. you cannot find YourFTPModule.dll in the GAC), then go no further – you have to fix this first.

You may use an elevated command line prompt to see the contents of the GAC as well. Navigate to the C:\Windows\Assembly folder. Running the dir command will list the contents of the GAC’s folders (Windows Explorer has a special shell to display the GAC – if you want to see the real structure, you can use the command line as shown below):

As you can see from the screenshot above, there are several folders inside the GAC (these may vary depending on whether you are on a 32 or 64 bit machine), but the folder we are interested in is GAC_MSIL (MSIL is short for Microsoft Intermediate Language – the intermediate language your .Net code is compiled to). It is this folder (GAC_MSIL) we should navigate to, to see if the assembly we have developed is present. Use the dir /p command to list the assemblies page by page instead of all at once. If the assembly is not present, it means there was an error deploying it to the GAC.

You may use tools like gacutil (which comes with Visual Studio) to try to deploy the assembly via the command line – this tool will give you explicit error messages. You can learn more about the gacutil tool here:

https://msdn.microsoft.com/en-us/library/ex0ss12c%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
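For example, assuming your provider assembly is named YourFTPModule.dll (a placeholder name), installing it and then verifying the installation from an elevated Visual Studio command prompt would look like this:

gacutil.exe /i YourFTPModule.dll
gacutil.exe /l YourFTPModule

The /i switch installs the assembly into the GAC, and /l lists matching entries, so the second command should print your assembly with its version and public key token if the installation succeeded.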

2. Where does the assembly get loaded once you try and authenticate?

Once the FTP server is configured to use your own authentication provider and you try to authenticate for the first time, the provider should get loaded into IIS. But where? The answer is that it will get loaded into a dllhost.exe process, and will be executed and hosted inside that process. Which dllhost.exe process is it – since you are likely to have more than one such process on the machine?

Open an elevated command prompt window and type in the following command: tasklist /svc . You need to look for a service called DcomLaunch running in a svchost.exe process (with PID 720 in my screenshot):

You will then need to download a tool called Process Explorer from the Microsoft site (http://linqto.me/Procmon). Unzip the tool and launch it with administrative privileges (right click and select ‘Run as Administrator’). This tool will let you peek into what is loaded inside each process in Windows.

Locate the svchost.exe process with the PID corresponding to the DcomLaunch service (which you obtained in the previous step using the command line). Underneath this process, there should be a dllhost.exe process which should be loading the assembly containing your authentication provider. To view the dlls in this process, choose the View > Lower Pane > DLLs menus in the Process Explorer window.

If the assembly is not loaded inside this process, there might be an ACL problem when trying to load the file – which is quite rare. You can download Process Monitor (http://linqto.me/Procmon) and use the tool to trace loading attempts. After you have downloaded and unzipped the tool, launch it, click the ‘magnifier’ button to stop tracing, and then the clear button (the button with an eraser) to clear the trace. Set up a filter in procmon by pressing the filter button:

Setup a filter where:

  • PID is the ID of the dllhost.exe you have identified
  • Path contains the name of the dll which contains your provider

Restart the procmon capture and try to authenticate to the FTP server again. Personally, I recommend using a client such as FileZilla, since this will give you great, color-coded output of the authentication attempt. Stop the procmon trace. If there is no load attempt for the dll containing your provider, the FTP server is not configured to use the authentication provider you developed. If there are failed load attempts, inspect these, since the tool will tell you why they fail.

3. My provider loads, but still does not work.

If you have gone through points 1 and 2 in this blog and your provider does load but still fails to authenticate, then it is possible that the code is throwing an error when called. You can either debug this with Visual Studio, if VS is installed on the same server you are setting the provider up on, or you can use Debug Diag (http://linqto.me/DebugDiag) to trace errors.

Setup a Debug Diag crash rule as explained in this blog post:

http://blogs.msdn.com/b/chaun/archive/2013/11/12/steps-to-catch-a-simple-crash-dump-of-a-crashing-process.aspx .

The rule will not produce crash dumps, but will record all .Net exceptions encountered by the process while the crash rule is tracking it. Hence, once the rule is set up, you can try to authenticate to the FTP server one or more times, then stop (kill) the dllhost.exe process that you were monitoring. This will prompt Debug Diag to create a log of the lifetime of the process and all errors encountered during its execution.

The log thus created can be found in C:\Program Files\Debug Diag\Logs\<NameOfCrashRule>\ . There will be a text file inside this folder whose name contains dllhost.exe and the PID of the process tracked by the rule. If you open this file, you should see the details of the errors encountered during the execution of the process – with .Net callstacks – located towards the end of the file. You will need to examine these stacks and error messages to understand what errors are raised in your code and why.

Happy debugging for all the FTP auth module developers out there.

by Paul Cociuba
http://linqto.me/about/pcociuba  

Disabling TLS 1.0 on your Windows 2008 R2 server – just because you still have one


Windows 2008 R2 is a very popular version of Windows that has been used time and time again to power servers running ASP.net websites – either on the Internet or on intranets. Although this Windows version, now 8 years old, has aged somewhat, I still tend to see quite a lot of these installs around, and happen to have some myself, running my bookmarking service www.linqto.me .

If you have been reading about all the security problems creeping up on the internet lately, you will have come across names like Heartbleed, POODLE and other such vulnerabilities that are problematic when encrypting an HTTP connection between a client and a server. For the record, when it comes to securing a connection between client and server for HTTP exchanges, there is an entire list of protocols to choose from (or rather that the client and server have to agree on). The list, with the dates these were released, should give you an idea of how old some of these technologies are:

    SSL (short for secure sockets layer) version 1, 2 and 3 – initial specs for these came out in 1995 – that is more than 20 years ago!

    TLS (short for transport layer security) version 1.0 – came out in 1999

    TLS version 1.1 – came out in 2006

    TLS version 1.2 – came out in 2008

SSL protocols should not be used any more, as they are full of known vulnerabilities. TLS 1.0 has had its share of vulnerabilities, and more and more organizations are beginning to turn this off as a choice for negotiation of encryption between client and server. I recommend that you do too, and use more secure versions like TLS 1.1 or 1.2 if possible. If you are already on this blog post, chances are you are trying to do just this – turn off TLS 1.0 on your Windows 2008 R2 server. Which should be easy to do… or not, so keep reading.

Steps to turn off TLS 1.0 on a Windows 2008 R2 server.

There is a Microsoft Support Knowledge base article that discusses this in some detail and also recommends that you download a ‘Fix it for me’ automated repair tool. The article in question is the following: https://support.microsoft.com/en-us/kb/187498 . However, there are a couple of problems and loopholes in the article above, so I want to go through them in some detail.

  • The first is that the ‘Fix it for me’ automated installer is no longer available. Microsoft has decided to retire this technology, hence also stopping you from having an automated solution to disable TLS 1.0 and leave only TLS 1.1 and 1.2.
  • The manual solution indicates that you should change some registry keys, but I have found this to be somewhat incomplete, because just changing the indicated keys will turn off all TLS communications, including TLS 1.1 and 1.2 – which is not what you want when you are running a site that has HTTPS bindings.
  • The article never mentions that if you are connecting to your server via Remote Terminal Services (or Remote Desktop), you will also be sawing off the branch you are sitting on – these methods of communicating with a remote server rely on TLS 1.0, and once it is disabled you will not be able to connect to your server any more, not via Remote Desktop anyway. If your server is in a remote location or data-center, that can become a serious problem causing much grief and downtime.

To correctly disable TLS 1.0 follow the steps below:

  • Install the Microsoft patch that allows you to continue using Remote Terminal Services or Remote Desktop after TLS 1.0 is disabled: https://support.microsoft.com/en-us/kb/3080079 . This should be the first step on your list, as missing this patch will leave you unable to connect to your server after disabling TLS 1.0.
  • Disable TLS 1.0 from the registry, using the registry editor. This one requires several sub-steps which you have to go through:
    • Open the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\ registry key
    • If a ‘TLS 1.0’ key is present, go inside it; if not, you will have to create a new key and name it ‘TLS 1.0’
    • Underneath the TLS 1.0 key, you should also have a key called ‘Client’ and one called ‘Server’; if not, you will have to create them as you did in the previous step:

    • The next steps will have to be done both for the ‘Client’ and ‘Server’ keys as we want to disable TLS 1.0 when the OS is acting as a server (typically in the case of a website), but also when it is acting as a client and connecting to other resources that require secure connections. Go into the ‘Client’ key and create a DWORD (32 bit) entry and call this ‘Enabled’ and set its value to 0. Then repeat, and create a new DWORD (32 bit) entry for the ‘Server’ key and call it ‘Enabled’ and set the value to 0. This will disable TLS (all versions) for both client and server.
    • Now we have to enable versions 1.1 and 1.2 of TLS. For this, we need to create new keys called ‘TLS 1.1’ and ‘TLS 1.2’ underneath the ‘Protocols’ key.
    • For each of the TLS 1.1 and TLS 1.2 keys, you should also create a ‘Client’ and a ‘Server’ key, as shown in the screenshot below:

    • Once the key structure is created, you can proceed to creating a DWORD (32 bit) entry called ‘DisabledByDefault’ and set its value to ‘0’ in each of the four keys: TLS 1.1/Client, TLS 1.1/Server, TLS 1.2/Client and TLS 1.2/Server.

I have created a small export of the registry from my server which I am pasting below as text for reference:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client]
"DisabledByDefault"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
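If you prefer scripting these changes instead of importing a .reg file, the same values can be set with reg.exe from an elevated prompt – a sketch for the TLS 1.0 ‘Server’ value (repeat for ‘Client’, and analogously for the TLS 1.1 / 1.2 ‘DisabledByDefault’ values):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server" /v Enabled /t REG_DWORD /d 0 /f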

Time to restart your server following these changes and you will see that the next connection attempt to resources that require https (secure connections) for your sites will be using TLS 1.1 or TLS 1.2. Hope this helps you secure your servers and dodge some nasty security vulnerabilities.

Paul Cociuba
follow what I post on www.linqto.me

Setup IIS with URL Rewrite as a reverse proxy for real world apps.


URL Rewrite, one of the many modules that can be added onto the IIS web-server to make it a very versatile tool, can be used to perform a variety of tasks, including setting up your IIS web-server as a reverse proxy to some other back-end HTTP service. A reverse proxy is a network device that takes in traffic coming from the Internet (for example) and forwards it to a backend server on your private network, allowing that backend server to be reached by people who are not necessarily connected to your network. There are a lot of articles on how to use IIS and URL Rewrite as a reverse proxy, but I have found that many are incomplete with regard to real-world scenarios in today’s web applications.

Scenario: Setting up IIS with URL rewrite as a reverse proxy with SSL offloading for a backend service.

Details: suppose that we have a web-application hosted on one of our backend web-servers (IIS or another web server), that this application server cannot be configured to use SSL, and that it is not reachable by the end users because they do not have access to the network the server is on. We want IIS to perform the following tasks:

  • Take in requests from the end users for content from this application using SSL
  • Route these requests to the backend application server using HTTP
  • Rewrite all responses from the backend server, so that any hyperlinks, form action tags and such are constructed with the URL that the IIS reverse proxy server has.

Below is the diagram of the setup we wish to accomplish using IIS as a reverse proxy server:

I would like to take you through the configuration steps required to set up such a system, where requests are routed via the IIS server to the backend application server, and responses are rewritten with the public host-name of the IIS server before being sent back to the connecting clients.

Install URL Rewrite

The first step is to install the add-on module for URL Rewrite. With Windows Server 2012 R2, you can use the Microsoft Web Platform Installer (WebPI) to download and install the URL Rewrite Module. Just search for ‘URL Rewrite’ in the search options and click ‘Add’. You can also download the extension from IIS.net – http://www.iis.net/downloads/microsoft/url-rewrite .

Once the module is installed in IIS, you will see a new icon in the IIS Administration Console, called URL Rewrite. This icon is present at the level of each site and web-application you have on the server, and will allow you to configure rewrite rules that apply from that level downwards.


Setup a Reverse Proxy rule using the Wizard.

Open the IIS Manager Console and click on the Default Web Site from the tree view on the left. Select the URL Rewrite Icon from the middle pane, and then double click it to load the URL Rewrite interface.

Choose the ‘Add Rule’ action from the right pane of the management console, and then select the ‘Reverse Proxy Rule’ from the ‘Inbound and Outbound Rules’ category.

Now we can proceed to fill in the routing information based on the diagram above in the Wizard window that is provided to us.

While still in the same configuration window, we also need to provide information to take care of the responses that will be emitted by the backend server and will transit the IIS server on their way back to the requesting browser. These responses may contain absolute hyperlinks and other information carrying the hostname of the backend server. If they are sent to the browser as is, the end user will not be able to access the resources these links point to, simply because the browser does not know where http://privateserver:8080/HomePage.aspx is located or how it can be reached. We need to convert these into the hostname of the reverse proxy server, so they look like https://www.mypublicserver.com/HomePage.aspx . For this reason, we check the ‘Rewrite the domain names of the links in HTTP responses’ checkbox in the Outbound Rules section.

The basic setup for the reverse proxy is now complete, with IIS able to capture incoming traffic and forward it to the backend server, and inspect responses from the backend server and rewrite URL links inside the responses to match the host headers that IIS uses to publish the site.
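For reference, the inbound rule the wizard writes into the site’s web.config should look roughly like the sketch below (using the host name from the diagram; yours will differ):

<rewrite>
  <rules>
    <rule name="ReverseProxyInboundRule1" stopProcessing="true">
      <match url="(.*)" />
      <action type="Rewrite" url="http://privateserver:8080/{R:1}" />
    </rule>
  </rules>
</rewrite>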

Read on in part number 2 to see where the problems with this setup start.

By Paul Cociuba
http://linqto.me/about/pcociuba

IIS with URL Rewrite as a reverse proxy – part 2 – dealing with 500.52 status codes


This is the second article in a three-part series of articles dealing with setting up IIS as a reverse proxy. Check out part one here.

IIS acting as reverse proxy: Where the problems start:

Testing this new setup for basic scenarios may work, but you may also run into a couple of issues. The first is that you may get 500 status codes when you try to access your backend server. If you do FREB tracing, you will see that these status codes are actually logged by IIS and URL Rewrite with the following message in the trace:

Outbound rewrite rules cannot be applied when the content of the HTTP response is encoded (“gzip”).

Status code for this is 500.52.

This is because the responses coming from the backend server use HTTP compression, and URL Rewrite cannot modify a response that is already compressed. This causes a processing error for the outbound rule, resulting in the 500.52 status code.

Fixing the 500.52 status code caused by compressed responses.

A client indicates to the server that it is willing to accept compressed content by indicating this in the http headers it sends to the server alongside the request. This is indicated in the ‘Accept-Encoding’ Header.

There are two ways to work around this: either turn off compression on the backend server that is delivering the HTTP responses (which may or may not be possible, depending on your configuration), or indicate to the backend server that the client does not accept compressed responses, by removing the header when the request comes into the IIS reverse proxy and putting it back when the response leaves the IIS server.

I will only detail the second alternative, involving the removal and reinstatement of the HTTP header. To do this, we first need to create two server variables in URL Rewrite. After selecting the URL Rewrite icon and double clicking it in the IIS Manager Console, you will have a ‘View Server Variables’ action button on the right hand side pane. Click this button to be able to add new server variables.

Click the ‘Add’ button on the right hand side pane to add a new server variable. We will need to add two variables named HTTP_ACCEPT_ENCODING and HTTP_X_ORIGINAL_ACCEPT_ENCODING as shown here:
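For reference, these registrations end up in the allowedServerVariables section of the rewrite configuration (in applicationHost.config); the result should look roughly like this sketch:

<rewrite>
  <allowedServerVariables>
    <add name="HTTP_ACCEPT_ENCODING" />
    <add name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" />
  </allowedServerVariables>
</rewrite>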

Once this is complete, we will need to use these variables both in the inbound rules, to remove the Accept-Encoding header and in the Outbound Rules to place this header back again.

Go to the Inbound Rules section in Url Rewrite. This section should just contain one inbound rule, called ‘Reverse Proxy Inbound Rule 1’. Select this rule and click the ‘Edit’ action link on the right hand side panel of the IIS Administration Console to be able to edit the details of this rule.


In the ‘Server Variables’ section we will need to add the two server variables we declared earlier. We will be copying the contents of the HTTP_ACCEPT_ENCODING server variable (which captures the content of the Accept-Encoding header) into HTTP_X_ORIGINAL_ACCEPT_ENCODING. To do this, click the Add button on the interface, and then choose HTTP_X_ORIGINAL_ACCEPT_ENCODING from the dropdown list that appears in the ‘Set Server Variables’ window:

Set this variable to capture the value of HTTP_ACCEPT_ENCODING by placing the string {HTTP_ACCEPT_ENCODING} in the Value textbox. Whenever you see something between curly braces {} in URL Rewrite, this means that URL Rewrite will use the value of whatever expression is inside the braces – in this case the server variable.

Now it is time to repeat the process for the HTTP_ACCEPT_ENCODING variable which we should be setting to empty. This variable will be used by URL Rewrite when it builds the request to forward to the backend server. So if we do not wish this request to have an Accept-Encoding header, we must empty its value. Press the ‘Add’ button again on the ‘Server Variables’ pane, and then fill in the ‘Set Server Variable’ window as follows:

Note that the interface will not allow you to set the variable’s value to empty, hence you can set this to any arbitrary string (I just use ‘eee’). We will correct this manually in the configuration files afterwards. Once this is done, press the ‘Apply’ button to save the configuration changes to the IIS configuration store – in this case the web.config of your website.

Once the changes are saved, it is time to do some manual tweaking using Notepad, Notepad++, or any other XML editor of your choice. Open the web.config file at the root of your website and find the <rewrite><rules> section. Here you should find the ReverseProxyInboundRule1 rule definition, which should look like the snippet below:

<rule name="ReverseProxyInboundRule1" stopProcessing="true">
<match url="(.*)" />
<conditions logicalGrouping="MatchAll" trackAllCaptures="false" />
<serverVariables>
<set name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" value="{HTTP_ACCEPT_ENCODING}" />
<set name="HTTP_ACCEPT_ENCODING" value="eee" />
</serverVariables>
<action type="Rewrite" url="http://privateserver:8080/{R:1}" />
</rule>

In the <serverVariables> section, set the value of the HTTP_ACCEPT_ENCODING variable to empty (delete the value between the quotes). The new line of configuration should look like the following:

<set name=”HTTP_ACCEPT_ENCODING” value=”” />

Note: if you cannot save the file because of elevation privileges requirements, then you can save the web.config to another folder, like ‘My Documents’ and then copy it over manually replacing the original web.config. This will require you to confirm the replace with an elevated prompt as well, but that should not be a problem.

Now on to the outbound rule modification.

When we receive the responses from the backend server, we need to forward them back to the browser. To be able to correctly do this, we will need to restore the value of the HTTP_ACCEPT_ENCODING variable to what it was before we changed it to empty. Create a new Outbound Rule from the Url Rewrite Pane, by clicking the ‘Add Rule’ action link on the right hand side pane, and then selecting the ‘Blank Rule’ from the Outbound Rules section of the ‘Add Rule(s)’ Window.

Call the new rule ‘RestoreAcceptEncoding’. Outbound rules in URL Rewrite are only executed if we are able to match a precondition. A pre-condition is a check we will be running on the response to determine if we wish to perform an action or not. So the next part of the configuration will be to create a new pre-condition to be used with the outbound rule we are creating.

Select <New Precondition> from the Preconditions dropdown, and then configure the precondition as follows. Give the precondition a name – call it NeedsRestoringAcceptEncoding – and then select ‘Regular Expressions’ from the ‘Using’ dropdown:

Select the ‘Match All’ from the ‘Logical Grouping’ dropdown list and proceed to add a condition by pressing the ‘Add’ button. The condition will be the check we will be running to determine if we wish to apply the transformation which will be detailed in the outbound rule. We can have several conditions grouped together in one precondition clause. Configure the condition as follows: set the {HTTP_X_ORIGINAL_ACCEPT_ENCODING} as a value for the ‘Condition Input’ textbox, select the ‘Matches the Pattern’ item from the ‘Check if input String’ dropdown, and finally place ‘.+‘ as a pattern.

After having created the pre-condition for the outbound rule, we can now proceed to configure the rule itself. Select ‘Server Variable’ from the Matching Scope dropdown, and place the HTTP_ACCEPT_ENCODING variable name in the ‘Variable Name’ textbox. Select ‘Matches the Pattern’ in the Variable Value dropdown and the ‘Regular Expressions’ in the Using dropdown, and place the following pattern ‘^(.*)‘ in the Pattern textbox:

In the ‘Actions’ pane, select ‘Rewrite’ as an action from the ‘Action’ dropdown, and place the {HTTP_X_ORIGINAL_ACCEPT_ENCODING} value in the ‘Value’ textbox. Check the ‘Replace Existing Server variable value’ checkbox.

Click the ‘Apply’ button to save the changes entered by this rule to the IIS configuration store.
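For reference, the resulting outbound rule and precondition in web.config should look roughly like the sketch below (element order may differ slightly in the generated configuration):

<outboundRules>
  <rule name="RestoreAcceptEncoding" preCondition="NeedsRestoringAcceptEncoding">
    <match serverVariable="HTTP_ACCEPT_ENCODING" pattern="^(.*)" />
    <action type="Rewrite" value="{HTTP_X_ORIGINAL_ACCEPT_ENCODING}" replace="true" />
  </rule>
  <preConditions>
    <preCondition name="NeedsRestoringAcceptEncoding">
      <add input="{HTTP_X_ORIGINAL_ACCEPT_ENCODING}" pattern=".+" />
    </preCondition>
  </preConditions>
</outboundRules>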

By configuring the Inbound and Outbound rules, we are now able to mitigate the 500.52 status code if our backend server was compressing the responses as a result of the client browser sending ‘Accept-Encoding’ headers in the incoming requests.

In the next part, we will look at configuring more outbound rules to deal with complex scenarios of javascript encoded data.

By Paul Cociuba
http://linqto.me/about/pcociuba

IIS with URL Rewrite as a reverse proxy – part 3 – rewriting the outbound response contents


This is the third part of the article series dealing with IIS using URL rewrite as a reverse proxy for real world apps. Check out part 1 and part 2 before reading on.

Configuring outbound rules for Javascript encoded content.

More and more applications send content to the browser in the form of Javascript encoded content, which the javascript running in the page that has requested the content then integrates into the DOM (Document Object Model) of the page. This content might include such things as Anchor <a> tags, or form tags which have action attributes. Below are examples of such snippets of code:

<a href=\"http://privateserver:8080/coding_rules/#rule_key=OneVal%3APPErrorDirectiveReached\" target=\"_blank\">
<form method=\"post\" class=\"rule-remediation-form-update\" action=\"http://privateserver:8080/admin_rules_remediation/update\">

Note the \ (backslash) escaping the quotes around the values of the href and action attributes.

If we look at ‘ReverseProxyOutboundRule1’ in the rules section of URL Rewrite – the rule created by the Reverse Proxy wizard we ran in part 1 of this blog series – and check the preconditions associated with it, we will see that a precondition called ResponseIsHtml1 was created during the Reverse Proxy setup wizard.

If you click the ‘Edit’ button next to the ResponseIsHtml1 precondition, we can see its configuration. This precondition matches any response coming from the backend server whose content type is set to text/html.

Since Javascript encoded content is typically served with a content type such as text/javascript rather than text/html, the easiest way to work around this limitation is to change the precondition to match responses with a content type of text/* – text followed by slash, then anything. To do this, click on the {Response_Content_Type} entry in the list and then click the ‘Edit’ button next to it. This will allow you to edit the regular expression used to inspect the content type of responses coming from the backend server.

Change the pattern of the regular expression to ‘^text/(.+)‘ – meaning that the content type should start with text/ followed by one or more characters. Click the ‘Ok’ button to save these changes.

Side note: you could also create a second precondition called ResponseIsTextStar and set the new regular expression in that precondition instead, since we will be creating more outbound rules. That way you can have one rule for HTML-only content and other rules for the rest.

Now we will need to create two new outbound rules to address the case of the <a> anchor tags and the action attributes of the form tags which are encoded. Because they are encoded we cannot use the built in tag scanning that URL Rewrite provides for us in outbound rules. We will have to write a regular expression to match these two tags in all content.

Let’s start with the anchor <a> tags. Create a new blank outbound rule from the Rule Wizard, and then configure it to use the precondition we created / modified earlier. In the Match pane configure the rule as shown below:

Set the ‘Matching Scope’ to ‘Response’ in the dropdown, make sure that all the items within the ‘Match Content Within’ dropdown are deselected – this will mean URL Rewrite will scan the entire response not just specific tags. Select ‘Matches the Pattern’ in the ‘Content’ dropdown and ‘Regular Expressions’ in the ‘Using’ dropdown. Use the following pattern in the Pattern textbox: ‘href=(.*?)http://privateserver:8080/(.*?)\s‘ – you should replace privateserver:8080 with the url of your backend server.

Moving down to the Actions pane, configure the following:

Set the ‘Action’ dropdown to ‘Rewrite’ and then use the following pattern: ‘href={R:1}https://www.mypublicserver.com/{R:2}‘ in the Pattern textbox. Replace the https://www.mypublicserver.com/ with the URL of your server. Finally press the ‘Apply’ action link on the right hand pane to create the new rule.

We will need to add a second outbound rule to deal with the form element’s encoded action attributes. To do this, we will create a second blank outbound rule. The configuration of the rule is the same as above in the Match pane, except for the regular expression to be used, which changes to: ‘action=(.*?)http://privateserver:8080/(.*?)\\‘ – again replace the http://privateserver:8080/ with the URL of the backend server.

The configuration is identical to the first rule in the Action pane as well. The pattern to be used here is the following: ‘action={R:1}https://www.mypublicserver.com/{R:2}\‘ where you will need to replace https://www.mypublicserver.com/ with the IIS server URL accessible to your users. Once you have pressed the Apply action link on the right hand side pane, the rule is saved and the configuration is now applied.
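Putting the two rules together, the outbound section of web.config should look roughly like the sketch below (the rule names are mine; replace the host names with your own):

<outboundRules>
  <rule name="RewriteEncodedAnchors" preCondition="ResponseIsTextStar">
    <match filterByTags="None" pattern="href=(.*?)http://privateserver:8080/(.*?)\s" />
    <action type="Rewrite" value="href={R:1}https://www.mypublicserver.com/{R:2}" />
  </rule>
  <rule name="RewriteEncodedFormActions" preCondition="ResponseIsTextStar">
    <match filterByTags="None" pattern="action=(.*?)http://privateserver:8080/(.*?)\\" />
    <action type="Rewrite" value="action={R:1}https://www.mypublicserver.com/{R:2}\" />
  </rule>
  <preConditions>
    <preCondition name="ResponseIsTextStar">
      <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/(.+)" />
    </preCondition>
  </preConditions>
</outboundRules>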

In conclusion:

We now have an IIS web-server that uses URL Rewrite to act as a reverse proxy. The server can deal with the issue of compressed responses coming out of the backend web-application by disabling the accept-encoding header, and is able to modify content coming back from the backend web-application even if this content is javascript encoded and contains anchor tags or action attributes on form elements.

By Paul Cociuba
http://linqto.me/about/pcociuba


IIS web-servers running in Windows Azure may reveal their private IP for certain requests.


Internet Information Services (the handy web-server from Microsoft) runs on Windows Server OS but also in the Microsoft Azure cloud. If you are building virtual machines and deploying them to the cloud (IAAS – Infrastructure as a Service) or using Cloud Services from Windows Azure (PAAS – Platform as a Service), you will basically be using an IIS web-server behind the scenes to host your service.

When deployed inside Windows Azure, the virtual machines (IAAS or PAAS) that are running your IIS server are allocated private IP addresses. Windows Azure does the job of forwarding traffic from the public IP address and port that you are using to the private IP address and port combination of the virtual machine(s) that you are hosting in the cloud. The scenario diagram looks a bit like the one shown below:

You request a resource from your service in Azure, and the request is routed to the public IP address the service is hosted on. The request is then routed further by Windows Azure to the private IP address of (one of) the server(s) hosting this service. The details of how this is done are beyond the scope of this blog.

What is interesting to note is that we can send some requests to the IIS server which will make it respond with the internal (private IP) that the server has in the Cloud. Some may consider this a disclosure of information that is not intended for the end client, so we may wish to mitigate against this disclosure, but let’s first try to understand what happens.

HTTP (Hypertext Transfer Protocol) currently exists in three versions:

  • Version 1.0 of the protocol specification (the original version)
  • Version 1.1 of the protocol (which is the most widely used)
  • Version 2.0 which is starting to gain traction in today’s web-server world.

The requests that are problematic for this scenario are all sent using the HTTP 1.0 version of the protocol. There are a couple of differences between HTTP 1.0 and HTTP 1.1, but the one we are interested in here is the fact that in HTTP 1.0 we did not have to specify a ‘Host’ http-header in the request.

When sending http requests to a server, we usually type in the name of the site / service we are trying to reach. You would type in http://www.linqto.me should you be trying to reach my bookmarking service. The ‘www.linqto.me’ is the host name, which will be resolved by the browser to the server’s public IP address. Hence the request to the site would look something like this:

GET / HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Encoding: gzip, deflate
Accept-Language: fr-FR,en-US;q=0.5
Connection: Keep-Alive
Host: www.linqto.me
User-Agent: Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko

Note that the Host http-header in the sample above is set to www.linqto.me. However, to open the TCP connection, the browser also needs to resolve this name into the IP address of the server – which it will do under the covers.

In HTTP 1.0, it is possible to instruct a client (browser or other) to open a TCP connection to a web-server and send a request without sending the Host header, only sending the name of the requested resource (which is / (slash) in the above example). If the response from the server contains a redirection to another page or resource (http status code 301 or 302), the server will specify where the client should redirect via a Location http-header. This header will contain the internal IP of the IIS server if no Host http-header was provided in the request.
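To make this concrete, here is a sketch of such an exchange – a bare HTTP 1.0 request with no Host header, and the kind of redirect response the server would then build (10.0.0.4 stands in for the VM’s private IP; the page names are placeholders):

GET /somepage.aspx HTTP/1.0

HTTP/1.1 302 Found
Location: http://10.0.0.4/newpage.aspx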

Why is this happening?

When performing a redirect, IIS needs to tell the connecting client where to look for the resource. It needs to build the url for the redirection based on information it has: the incoming request and the settings for the IIS web-site. Since the incoming request does not contain a host header (like in the case of the HTTP 1.0 request), and the IIS website does not have any host header mappings setup (and is basically listening for all HTTP traffic on an IP and port combination), the only thing it can be sure of is the IP and port address combination the site is bound to.

It cannot do any reverse DNS lookups – since traffic is forwarded to the private address of the server by Windows Azure (and IIS does not do reverse IP DNS queries anyway). Hence, to build the URL for the redirect it needs to perform, it will use the private IP address of the server inside Windows Azure when it issues the response.

You can see this issue happening when using a tool like curl to issue an HTTP 1.0 request. Consider the following command in curl:

curl http://xxx.xxx.xxx.xxx/somepage.aspx --http1.0 --header "Accept:" --header "Connection:" --header "Host:" -i -v

Where xxx.xxx.xxx.xxx is the public IP address of the service running in Azure, and /somepage.aspx results in a 301 or 302 redirect status code from the server. Since we are sending an empty Host header and indicating HTTP 1.0, we are not sending any information about the name of the site to the IIS server.
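
For illustration, assuming /somepage.aspx issues such a redirect, the raw response could resemble the following – the Location address shown is a made-up private IP, not taken from a real deployment:

HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=UTF-8
Location: http://10.0.0.4/somepage/
Server: Microsoft-IIS/8.5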

How can we work around this issue?

There are a couple of ways to work around the problem. The first is to define an alternateHostName for the server or the site you wish to protect. Here are the appcmd and PowerShell commands you can use to set this parameter:

For a website:

%windir%\system32\inetsrv\appcmd.exe set config "[SiteName]" -section:system.webServer/serverRuntime /alternateHostName:"[AltHostName]" /commit:apphost

Set-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' -location '[SiteName]' -filter "system.webServer/serverRuntime" -name "alternateHostName" -value "[AltHostName]"

For the entire server:

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/serverRuntime /alternateHostName:"[AltHostName]" /commit:apphost

Set-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' -filter "system.webServer/serverRuntime" -name "alternateHostName" -value "[AltHostName]"

You may also use the Configuration Editor in IIS: navigate to the system.webServer/serverRuntime section and set the value as shown below:

This gives the server an extra piece of information for building the redirect url: instead of basing itself solely on the IP address from the site's bindings, it will use the alternateHostName value if one is provided.
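
For reference, once the site-level command has run, applicationHost.config should contain a section along these lines (site and host names are placeholders):

<location path="[SiteName]">
   <system.webServer>
      <serverRuntime alternateHostName="[AltHostName]" />
   </system.webServer>
</location>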

The other way you can work around this issue is to use Url Rewrite to deny HTTP 1.0 traffic to your site. You can download Url Rewrite from the IIS.net site here: http://www.iis.net/downloads/microsoft/url-rewrite.

You will need to configure a new 'Inbound Rule' in Url Rewrite. Name the rule something meaningful, like 'Block HTTP 1.0 traffic', and set the match type to 'Match URL'. The Requested URL should be 'Matches the Pattern', the pattern type should be 'Wildcards', and the pattern should be '*' to trap all incoming requests.

In the conditions part of the rule configuration, you need to add a condition to match the {SERVER_PROTOCOL} variable to the ‘HTTP/1.0’ pattern. The SERVER_PROTOCOL variable will be populated by the IIS server based on the HTTP version specified in the incoming request.

Finally, the action that should be taken when such a request is detected is to ‘Abort Request’, essentially closing the TCP connection to the client by sending back a TCP RST (reset) on the connection. The entire configuration is in the screenshot below:


When looking into the web.config of the site, the resulting Url Rewrite rule is the following:

<rule name="Block HTTP 1.0 Rule" patternSyntax="Wildcard" stopProcessing="true">
   <match url="*" />
   <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
      <add input="{SERVER_PROTOCOL}" pattern="HTTP/1.0" />
   </conditions>
   <action type="AbortRequest" />
</rule>

For this rule to work, you will need to make sure that it is the first rule listed inside the inbound rules section for your website or web-application in the Url Rewrite section:

If this is not the case, you can use the ‘Move Up’ action button on the right hand side of the IIS console to make sure the rule is the first one to be interpreted on any incoming request, essentially blocking all HTTP 1.0 traffic to the site.

By Paul Cociuba
http://linqto.me/about/pcociuba


Using RSCA to help you understand what your IIS server requests are doing


RSCA – an acronym for Runtime Service and Control API – is a little-known and little-talked-about feature of the IIS server, present since the now obsolete version 7.0 (which came with Windows 2008 Server). This feature can provide real-time snapshots of what is going on inside the IIS worker process, without impacting performance or requiring outside tools. RSCA is a component that is built into the IIS server and can be used easily from the IIS Manager console.

In this article, I want to demonstrate how to set up and make use of this IIS component.

Installing RSCA:

Even though it ships with the IIS server, RSCA is not installed by default. Starting with IIS 7.0, the IIS web-server architecture is modular: by default, when you install IIS you only get a subset of the modules, enough for the server to serve static content such as HTML pages, images and JavaScript files. Other components, such as those that allow you to execute ASP.net sites, are optional and can be installed from the Server Manager.

To install RSCA, or to check that it is installed, head to the Server Manager. On Windows 2008 R2 Server, check the server 'Roles' section; on Windows 2012 R2, open the 'All Servers' item in the Server Manager, choose the server you are on from the list of servers, then right-click it and select 'Add Roles and Features' from the context menu. Then select the Web-Server role in the Wizard.

Server Manager on Windows 2008 R2 SP 1

Server Manager on Windows 2012 R2

In both cases – Windows 2008 R2 Server and Windows 2012 R2 Server – you will now be presented with the list of features available for the IIS web-server role. Underneath the 'Health and Diagnostics' feature, you will need to make sure that the 'Request Monitor' checkbox is checked for RSCA to be installed.

Add Roles and Features Wizard.

If you have to install RSCA, just know that there will not be any interruption to the services, web-applications or web-sites that you are running on the IIS server. They will continue to run and RSCA will be available on all of them following the install. You will only need to close and restart any IIS consoles that you have open.

Using RSCA:

RSCA runs on a client-server principle. A component of the Runtime Service and Control APIs is loaded inside the Windows Process Activation service. This acts as a server for both the IIS console, which loads an RSCA client component, and the IIS worker processes, which also load a component allowing runtime information to be queried via RSCA. The architecture diagram with the interactions of the components looks like the one below:


RSCA Architecture Diagram

Now let’s start using the RSCA functionality to get data from the server. The first thing we have access to is the list of worker processes (w3wp.exe) that are running and which application pool they are associated with. If you click on the server node in the IIS manager console tree view control, you will be shown a ‘Worker Processes’ icon in the center pane:

Once you have located a worker process that is servicing the application pool you are interested in, you can right-click this worker process and select 'View Current Requests' from the context menu. This is like taking a snapshot of activity in the process. It will show you all executing requests that have been inside the w3wp.exe for more than 0 seconds.

If you have requests which are slow or you think are stuck, you can check if these requests have actually made it into the IIS worker process. The below screenshot shows some requests which have longer execution times and are slow to respond. This can be for a couple of reasons, but you can also clearly see the pipeline stage the request is in, as well as the request URL, the IP address of the client, and how many seconds the request has been inside the worker process for at the time when the snapshot was taken.

You may find out more about pipeline stages by watching this video (http://linqto.me/IISArchP2) in the IIS architecture and components series I have published online some time ago, and which can be found here: http://linqto.me/n/IISArchitecture.

In the snapshot above, all requests are in the ExecuteRequestHandler stage inside the ISAPI module – this is because my application pool was running in classic mode and not in integrated pipeline mode. If you hit the refresh key on your keyboard (F5) while in this view, RSCA will generate a new snapshot, letting you see if the requests are still in the same state as before. If a request is still present, you will note that the 'Time Elapsed' column value for that request will have increased.

This is an easy way to see if you have some requests which are getting queued or are slow while executing inside the IIS worker process. You can then use tools like Debug Diag 2.2 (http://linqto.me/DebugDiag ) to gather memory dumps of the process for further investigations.
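
As a side note, the same RSCA data can be queried from an elevated command prompt with appcmd; for example, the following lists all requests that have been executing for more than 30 seconds (the /elapsed value is in milliseconds – omit it to list every in-flight request):

%windir%\system32\inetsrv\appcmd.exe list requests /elapsed:30000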

By Paul Cociuba
http://linqto.me/about/pcociuba

Using URL Re-write in IIS to change Content-Disposition Headers


Browsers have several ways in which they can handle a file downloaded from a web-server that is not an HTML page or a resource associated with one. The way in which attachments are dealt with is quite neatly described in this blog post from the HttpWatch team: https://blog.httpwatch.com/2010/03/24/four-tips-for-setting-up-http-file-downloads/ .

Whether a browser attempts to display a downloaded attachment inline – meaning inside the browser itself – or pops up a small window asking if the end user wishes to save or open the file, can be controlled by an http header called 'Content-Disposition'. Setting the value of this header to 'inline' will cause the browser to attempt to load the program associated with the document extension inside the browser window to display the file (think of a PDF file that is opened directly inside the browser window). Conversely, setting the value to 'attachment' will cause the browser to display a small dialogue asking the user if they wish to save or open the file instead (like the window shown below).

I have recently come across a situation where a web-application was sending down Office files (Excel, PowerPoint or Word), and these were being dispatched to the client with a Content-Disposition header value of 'inline', like the one shown below:

Content-Disposition: inline; filename=<file.ext>

There was an issue displaying Office documents inline on some of the PCs that were accessing this application. To work around it, I was asked to change the Content-Disposition header value from the one listed above to the one below – note that 'inline' is replaced with 'attachment', but the file name part is kept as is:

Content-Disposition: attachment; filename=<file.ext>

In this walkthrough, I will describe the steps in which this can be implemented inside IIS using the Url Rewrite feature.

  • The first step is to install Url Rewrite, if you do not already have this module present on your IIS server. It can be downloaded from the following location: https://www.iis.net/downloads/microsoft/url-rewrite. At the time of this writing, the version of the module is 2.0. The installation will not stop any of your IIS services or impact websites, but will require you to restart any IIS Manager Consoles that were open during the installation for the interface to be displayed.

  • Launching the IIS Manager Console, you can now select the site / web-application that you wish to implement this change for, from the tree view on the left-hand side. Then double click the Url Rewrite Icon that is located inside the middle pane of the IIS Manager Console.


  • We can now create a new Outbound Rule from the rule templates. Click the ‘Add Rules’ action button on the right-hand side of the IIS Manager Console and select a ‘Blank Rule’ from the ‘Outbound Rules’ section. Outbound rules will affect the response generated by IIS to incoming requests. We can use this outbound rule to modify properties of the response object (such as an http header value) to obtain the desired result.


  • In order for the outbound rule to target specific responses, we need to create a pre-condition. Think of this as a filter that will only give back some responses that match certain conditions: in our case, responses that are dealing with files being sent down to the client as attachments. Select ‘Create New Precondition’ from the ‘Precondition’ dropdown to bring up the pre-condition creation Wizard


  • In the precondition wizard, we will use regular expressions to match the responses we wish to change. Select 'Regular Expressions' from the 'Using' drop-down. Add a new input condition by pressing the 'Add' button on the right-hand side of the Window. In the input condition editor window, we will indicate that we want to match responses based on content type. Hence we will be examining the {RESPONSE_CONTENT_TYPE} variable for each response. We will choose to see if the value of this variable matches a pattern – hence choose 'Matches the Pattern' in the 'Check if Input String' dropdown. For this example, I will provide the pattern to match for Excel documents, which is: ^application/vnd.openxmlformats . This translates to: match anything that starts with the string 'application/vnd.openxmlformats'. You may add several patterns if you also want to match Word or PowerPoint documents in the same pre-condition. If this is the case, do not forget to switch the 'Logical Grouping' dropdown to 'Match Any'.


  • Now that we have specified how we wish to find interesting responses, we need to specify what to modify inside the response. This is done in the 'Match' part of the rule definition. We want to match the 'Content-Disposition' header, which at this point is exposed through a server variable associated with the response. The name of the server variable is 'RESPONSE_Content_Disposition', and we need to look for values of interest inside this variable using regular expressions. The pattern we are looking for is: inline;(.*) – this is a regular expression that matches the string 'inline;' and then proceeds to capture 0 or more characters after it. You may familiarize yourself with regular expressions by reading this article I have bookmarked: http://linqto.me/Regex

  • The next and final step is to specify how we wish the response to be changed by the url rewrite rule. We will do this in the 'Action' configuration settings of the rule. Specify an action of type 'Rewrite', while the value of the 'Action Properties' should be: attachment; {R:1} . This will replace the previous value with the word 'attachment;' followed by whatever the regular expression in the last step captured after the 'inline;' string. This is represented by the {R:1} place holder. Do not forget to check the 'Replace existing server variable value' checkbox so that the new value we have composed will overwrite the old.


  • Save the new url rewrite outbound rule and you have completed the changes.

This will allow you to intercept all responses carrying a 'Content-Disposition' header that would display the contents inline in the browser, and replace it with a content disposition directive that makes the browser prompt the end user to save the file to disk via a dialogue.
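
For reference, the web.config produced by the steps above should contain an outbound rule along these lines – the rule and precondition names are ones I chose for this walkthrough, and yours will reflect what you typed into the wizard:

<rewrite>
   <outboundRules>
      <preConditions>
         <preCondition name="IsOfficeDocument">
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="^application/vnd.openxmlformats" />
         </preCondition>
      </preConditions>
      <rule name="Rewrite Content-Disposition" preCondition="IsOfficeDocument">
         <match serverVariable="RESPONSE_Content_Disposition" pattern="inline;(.*)" />
         <action type="Rewrite" value="attachment; {R:1}" />
      </rule>
   </outboundRules>
</rewrite>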

By Paul Cociuba
http://linqto.me/about/pcociuba

How to perform a clean reinstallation of IIS


I’ve seen several scenarios where our customers need to reinstall IIS; a typical one is related to configuration file corruption. For example, you may see the following event in the System Event Log:

“The configuration section ‘system.webServer’ cannot be read because it is missing a section declaration.”

Configuration file corruption can lead to malfunctions or outages of your website/application, and instead of checking the configuration files (such as ApplicationHost.config or web.config) one by one, line by line, it's usually more efficient to reinstall the web server and then the application itself.

As we are all very familiar with installing IIS by simply checking the "Web Server" box in Server Manager (or you can check here : link), we'll naturally follow the same track and uncheck the box to uninstall IIS.

But after reinstallation, we'll find the same configuration on the server: same websites, same applications, and probably the same problem.

That’s because the configuration files for IIS (under C:\Windows\System32\inetsrv\config) are still in place; when IIS runs again, it reads the old files from the same place – and if this folder contains the corrupted one, the issue will persist.

To completely uninstall IIS, you’ll need to remove the following roles:

WARNING: this manipulation will erase all your configuration on IIS. It’s highly recommended to make a full backup of your server before performing this action.

  • Web Server (IIS) under tab “Server Roles” in Server Manager :

    how-to-perform-a-clean-reinstallation-of-iis_1

  • And Windows Process Activation Service in “Features” tab in Server Manager :

    how-to-perform-a-clean-reinstallation-of-iis_2

    Attention: A server restart is necessary after the uninstallation.

  • Then delete the files or rename the folder (preferred) for C:\inetpub and C:\Windows\System32\inetsrv.

Here the key step is to uninstall the Windows Process Activation Service (WAS). This is the service responsible for managing application pool configuration and for creating and managing the lifetime of worker processes for HTTP and other protocols. Once WAS is uninstalled, we can safely remove the configuration files located under C:\Windows\System32\inetsrv to finally make a clean uninstallation of IIS.

To reinstall IIS, just follow the same steps: add Web Server (IIS) as well as WAS.
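
On Windows Server 2012 and later, the whole cycle can also be scripted with PowerShell. Below is a minimal sketch – run it from an elevated prompt, and remember the backup warning above:

# remove the Web Server role and WAS, then reboot (a restart is required)
Uninstall-WindowsFeature Web-Server, WAS
Restart-Computer

# after the reboot, move the old configuration and content out of the way
# (if the second rename is refused, take ownership of the folder first)
Rename-Item C:\inetpub inetpub.old
Rename-Item C:\Windows\System32\inetsrv inetsrv.old

# reinstall the Web Server role (WAS is pulled back in as a dependency)
Install-WindowsFeature Web-Server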

You may also be interested in:

IIS Windows Process Activation Service (WAS)

https://technet.microsoft.com/en-us/library/cc735229(v=ws.10).aspx

IIS Configuration Reference

https://www.iis.net/configreference

New features introduced in IIS 10.0

https://www.iis.net/learn/get-started/whats-new-in-iis-10/new-features-introduced-in-iis-100

The complete list of changes to make to activate Client Certificate Mapping on IIS using Active Directory


Setting up client certificate mapping in IIS 8.5 and above using Active Directory has never been very complex; however, I find that there is little to no documentation that walks you through the entire process from A to Z. In this article, I intend to look at setting up Client Certificate Mapping and explain how the client certificate mapping settings should work, to give you more insight into this type of authentication.

When you make use of client certificates, the browser you are working with will attempt to send a client certificate located on your computer to the web-server hosting the site you are trying to navigate to. In order for this to happen:

  • The website you are trying to reach must implement https (via TLS or SSL – not recommended because SSL is vulnerable to attacks) and request that connecting clients send client certificates.
  • The client browser must be able to locate a client certificate on your machine that is issued by one of the certification authorities trusted by the web-server you are trying to connect to.

You may wish to have a look at an older article on troubleshooting client certificate issues on this blog, once you have completed your reading of this article: https://blogs.msdn.microsoft.com/friis/2011/11/15/troubleshooting-403-7-client-certificate-required-errors-step-by-step-to-make-sure-your-client-certificate-is-displayed-and-selected/

Once the client certificate is sent to the server, the web-server will perform a couple of checks on the certificate: is the certificate not expired, is the certificate issued by a certification authority that is trusted by the web-server, and has the certificate not been revoked by the certification authority that issued it?

The authentication part (or rather the mapping) is done by a separate component, called the client certificate mapper. The module will connect to the domain controller of the domain the web-server belongs to, and will attempt to find a user account that has been associated with the client certificate sent over by the client browser. If such an account is found, it will be mapped to the executing request.

Setup:

The first part of the setup is to install Client Certificate Mapping using Active Directory on your IIS web-server. This component is optional and will not be present in a standard installation of IIS. The simplest way to do this is to start the Server Manager and select 'Add roles and features' from the Dashboard, as shown in the screenshot below:

Select a Role-Based installation from the Wizard that is launched from the Server Manager:

After selecting the server on which you wish to install the new features from the list of available servers (generally, this will be the local server on which you launched the Server Manager), you will be presented with a list of possible Server Roles. Within the list, choose the Web-Server (IIS) node in the tree view; inside the Web-Server sub-node, under the Security feature, you will find 'Client Certificate Mapping and Authentication'. Check this feature to install it:

Finish the Wizard to start the installation of the feature. As with other features of IIS, the installation of this component will not cause a service interruption: all your websites will continue to run, but you will need to restart any IIS Administration consoles that were already launched prior to the install in order to view the newly installed feature in the interface.

Enabling the Client Certificate Mapping with Active Directory.

Once the feature is enabled and the IIS Management Console has been restarted (if you had a console open during the install of the component), you will be able to see the ‘Active Directory Client Certificate Authentication’ feature – just select the server node from the left-hand side tree view and click on the ‘Authentication’ icon in the central pane of the IIS Manager Console.

Enabling the ‘Active Directory Client Certificate Authentication’ when inside the server level Authentication feature, will perform a couple of changes that are interesting to note:

Enabling the DS Mapper on the SSL binding will allow the Active Directory Client Certificate module (authCert.dll) to look at the client certificate that has been sent by the browser on the incoming request, and attempt to map this certificate to an Active Directory account. If the DS Mapper is not enabled on the SSL binding, even if the Active Directory Client Certificate module is enabled, the client certificate mapping will not trigger.

To see if the DS Mapper is enabled or not, you can use the following NetSh command, which you will need to run from an elevated command prompt:

NetSh http show SSL

The command output should resemble the following:

SSL Certificate bindings:
————————-

IP:port : 0.0.0.0:443
Certificate Hash : <CertificateHash>
Application ID : <ApplicationIdentifier>
Certificate Store Name : MY
Verify Client Certificate Revocation : Enabled
Verify Revocation Using Cached Client Certificate Only : Disabled
Usage Check : Enabled
Revocation Freshness Time : 0
URL Retrieval Timeout : 0
Ctl Identifier : (null)
Ctl Store Name : (null)
DS Mapper Usage : Enabled
Negotiate Client Certificate : Enabled

Note that the DS Mapper Usage is set to Enabled. The Negotiate Client Certificate setting need not be set to Enabled. If it is enabled, the client certificate will be sent by the client browser when the initial secure connection with the web-server is negotiated. If it is disabled, an initial secure connection will be negotiated between the web-server and the browser based on the server certificate, and the connection will then be re-negotiated when the client sends the client certificate. Active Directory client certificate mapping will work even if Negotiate Client Certificate is not enabled.

These settings are reflected in a registry value named DefaultFlags. The value is of type DWORD32 and is located under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslBindingInfo\0.0.0.0:443\ . It can take one of the following values:

0 – DS Mapper disabled and Negotiate Client Certificate is disabled.
1 – DS Mapper is enabled and Negotiate Client Certificate is disabled.
2 – DS Mapper disabled and Negotiate Client Certificate is enabled.
3 – DS Mapper and Negotiate Client Certificate are both enabled.

You may change the value of this key manually using the registry editor as per the values above. However, should you do this, you will need to stop and restart the http service using the following commands for the settings to take effect:

Net stop http
Net start http
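
Alternatively, the same change can be scripted; for example, the reg.exe command below sets both flags (value 3) for the 0.0.0.0:443 binding – adjust the key to match your own binding, and note that the http service restart above is still required:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslBindingInfo\0.0.0.0:443" /v DefaultFlags /t REG_DWORD /d 3 /f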

You can also apply the settings by restarting your server. This may even be necessary: in some installations of Windows 2012 R2, the http service has unregistered dependencies and will not stop correctly, after which it cannot be restarted without a reboot.

Enabling the Client Certificate Mapping on the site level

The final step is to enable the client certificate mapping on the site or web-application level. To achieve this, select the website or web-application you wish to use this feature with and ensure the following:

  • An SSL binding is present for the website that you wish to target. This must also be true for the parent website of the web-application you are trying to target. If the binding is not present, you should create one, and preferably do so before enabling the 'Active Directory Client Certificate Authentication' described in the last step – otherwise the DS Mapper will not be activated. A client certificate can only be sent by the browser over a secure connection.
  • In the SSL Settings for your website or web-application, the ‘Require SSL’ checkbox should be checked and the Client Certificates radio button should be set to ‘Require’.

  • In the ‘Authentication’ section of the target website or web-application, you need to ensure that all other types of authentication are set to ‘Disabled’ as shown in the screenshot below:

If any other type of authentication is enabled (especially anonymous), the client certificate mapping will not work.

Activating the Client Certificate Mapping through code and script.

The article on the IIS.net website lists a couple of ways to enable the Client Certificate Mapping through either appcmd commands or the IIS .Net managed APIs: https://www.iis.net/configreference/system.webserver/security/authentication/clientcertificatemappingauthentication. When executing these commands, it is important to note that only the IIS configuration is changed. Since the DS Mapper settings live in the registry and not in the IIS configuration, the change will not be performed on the binding; hence the listings in the article are not sufficient to enable client certificate mapping using Active Directory.

To enable client certificate mapping using Active Directory, we will need to make some changes to the commands listed in the IIS.net article.

Using appcmd commands:

appcmd.exe set config “Default Web Site” -section:system.webServer/security/authentication/clientCertificateMappingAuthentication /enabled:”True” /commit:apphost

appcmd.exe set config “Default Web Site” -section:system.webServer/security/access /sslFlags:”Ssl, SslNegotiateCert” /commit:apphost

To these commands, we need to append the following script to enable the DS Mapper for the SSL binding:

' Connect to the WMI WebAdministration namespace.
Set oWebAdmin = GetObject("winmgmts:root\WebAdministration")

' Get the secure binding instances and enable the mapper on each matching one.
Set oBindings = oWebAdmin.InstancesOf("SSLBinding")

For Each oBinding in oBindings
   IF (oBinding.Port = 443 AND StrComp(oBinding.IPAddress, "*") = 0) THEN
      oBinding.SslUseDsMapper = TRUE
      oBinding.SslAlwaysNegoClientCert = TRUE
      oBinding.Put_
   END IF
Next

This script will iterate through all bindings that are configured to use port 443 and will enable both the DS Mapper and the Client Certificate Negotiation – essentially setting the registry key value for the DefaultFlags key to the value 3.
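
To run it, save the listing to a file – say enableDsMapper.vbs, the name is arbitrary – and execute it from an elevated command prompt:

cscript enableDsMapper.vbs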

Using the managed .Net APIs for IIS

Another way of doing the same is by using the .Net managed APIs to enable client certificate mapping:

using System;
using System.Linq;
using System.Text;
using Microsoft.Web.Administration;

internal static class Sample
{
   private static void Main()
   {
      using (ServerManager serverManager = new ServerManager())
      {
         Configuration config = serverManager.GetApplicationHostConfiguration();

         ConfigurationSection clientCertificateMappingAuthenticationSection = config.GetSection(
            "system.webServer/security/authentication/clientCertificateMappingAuthentication",
            "Default Web Site");
         clientCertificateMappingAuthenticationSection["enabled"] = true;

         ConfigurationSection accessSection = config.GetSection(
            "system.webServer/security/access", "Default Web Site");
         accessSection["sslFlags"] = @"Ssl, SslNegotiateCert";

         //iterate through the sites on the server and get the site named 'Default Web Site'
         Site selectedSite = serverManager.Sites.Where(s => s.Name.Equals("Default Web Site",
            StringComparison.OrdinalIgnoreCase)).FirstOrDefault();

         if (selectedSite != null)
         {
            //iterate through the bindings of the site and attempt to retrieve an https binding
            Binding sslBinding = selectedSite.Bindings.Where(b => b.Protocol.Equals("https",
               StringComparison.OrdinalIgnoreCase)).FirstOrDefault();

            if (sslBinding != null)
            {
               //enable the DS Mapper on the https binding
               sslBinding.UseDsMapper = true;
            }
         }

         serverManager.CommitChanges();
      }
   }
}

In the code above, we use LINQ to objects to iterate through all the sites in the ServerManager’s sites collection object, searching for a site called ‘Default Web Site’. If your site is named differently, change this line to match the name of the site you are targeting. Following this, if we can find a site, we will iterate in the same way through the bindings of the site searching for a binding that is using ‘https’.

When the binding is found, we set the UseDsMapper property of the binding to true to enable the DS Mapper. This does not need any service restart or server reboot to take effect since we are going through the APIs to enable the mapper. If you intend to copy the code above, make sure that you check that the quote symbol (“) is correct and that dashes (-) have not been replaced by long dashes by the browser when displaying the article.

By Paul Cociuba
Follow what I read: http://linqto.me/about/pcociuba

Delete inbound cookies in IIS using URL Rewrite


I have recently come across a few issues where some web apps were having a bad time due to some “evil” cookies in the HTTP request headers.

Although web applications would normally expect to receive back the cookies they previously set, they don’t really control what user agents include in the HTTP headers. Not to mention cases where proxies, load balancers or layer-7-devices may tamper with the original request and inject custom headers. To make it even worse, applications may be deployed in web farms and end up sharing the same hostname with many others. If any of those doesn’t limit the cookie scope to its own path, user agents will just send the same cookies to all the sub applications in the domain.

These conditions, and many more, are a clear indication that a robust web application must make no assumptions about the HTTP cookies it receives. Indeed, it should be able to cope with any kind of junk they might carry.

That said, it may well be that you find yourself very suspicious about a specific cookie received by one of your apps. Unfortunately, instrumenting a code change just to validate your hypothesis is quite inconvenient. Similarly, it may be very challenging to find where that one cookie is coming from. Moreover, if your supposition turns out to be right, it can take time to implement an application fix and have it deployed, which leaves the problem exposed without any workaround.

Whether you would like a quick way to validate your hypothesis, a temporary workaround to wait for the application to be updated or simply a way to remove some specific content from the request headers, this trick might come in handy.

 

Before we get started, you will need to install ARR and the URL Rewrite module in IIS. For the latest binaries, please refer to these pages:

https://www.iis.net/downloads/microsoft/application-request-routing

https://www.iis.net/downloads/microsoft/url-rewrite

 

Once ARR and URL Rewrite are set up, let's create an empty rewrite rule. What we need to do is:

  • filter every request, unless you want it to be limited to a specific path
  • look for a pattern of your choice in the "Cookie" header
  • if found, replace the whole header with something else. Namely, all the existing cookies except the one to remove
  • take no action on the request routing itself, meaning that no specific routing actions or URL changes will take place, unless you want to

The XML that produces this behavior looks like this:

<rule name="remove cookie" enabled="true">
   <match url=".*" />
   <conditions>
      <add input="{HTTP_COOKIE}" pattern="^(.*)(EvilCookie=.*?[;\s$]+)(.*)$" />
   </conditions>
   <serverVariables>
      <set name="HTTP_COOKIE" value="{C:1}{C:3}" />
   </serverVariables>
   <action type="None" />
</rule>

The same result can be obtained using the GUI URL Rewrite exposes in the IIS manager.

More in detail:

  • Select the website in IIS manager and then the URL Rewrite feature
  • In the actions panel click on Add Rule(s) and then select a Blank rule among the inbound list. Confirm with OK
  • Give the new rule a name and make it run for all the inbound traffic to the website.
    • If you're using regular expressions, select ".*" as the pattern.
    • If you prefer the wildcard match, use just "*".
      In this example we are going to use the regex pattern.
  • Now expand the Conditions panel and add a new one
  • Populate the window with the following values
    • Condition input: {HTTP_COOKIE}
      This means the condition we are working on is the server variable named HTTP_COOKIE. I believe you guessed what that is, right?
    • Check if input string: Matches the Pattern
    • Pattern: ^(.*)(EvilCookie=.*?[;\s$]+)(.*)$
    • This is actually the interesting piece. This string needs to match 3 parts of the original Cookie header:
      • The leading cookies before the one to remove, if any: ^(.*)
      • The cookie to remove itself, including separators: (EvilCookie=.*?[;\s$]+)
      • The trailing cookies after the one to remove, if any: (.*)$


        Note that some of the fancy characters in the string are regex patterns to make sure we look like cool kids and handle in just one line the following conditions: 

      • the cookie to remove is the first in the list
      • it is in a middle position
      • it is the last one in the list
        If that looks too awkward or scary, you can split each case into a separate condition and ensure to have the logical grouping set to match any (logical OR).

  • Expand the Server Variables panel and Add a new one
  • Pick HTTP_COOKIE as the variable name and {C:1}{C:3} as the value. Do not forget to check "Replace the existing value"
    Here's the actual hack! URL Rewrite uses placeholders that reference both the URL match ({R:x} syntax) and a condition match ({C:x} syntax).
    This means you can build all sorts of fancy strings based on a combination of the request URL and header conditions!
    In this very case, the syntax {C:1}{C:3} tells URL Rewrite to replace the Cookie header with the first and last match from the conditions. Which, in other words, means: strip out the evil cookie from the header!
    Note that {C:0} and {R:0} exist only if there is a pattern match; they are references to the whole input string we are testing our pattern against (a worked example of the capture groups follows right after this list).

  • To finish off, scroll down to the Action panel and set the Action type to None, meaning we are not going to change the request routing.
    Do not stop the processing of other rules as you may want to add more one day.
  • Click on Apply in the top right corner to save the rule which will be immediately active.
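
    To make the capture groups concrete, here is how the pattern carves up a sample Cookie header (the cookie values are made up):

    Incoming header:  Good=Valid; EvilCookie=bXVoYQ==; Ugly=JustUgly;
    {C:1} = Good=Valid;              (the leading cookies)
    {C:2} = EvilCookie=bXVoYQ==;     (the cookie being stripped)
    {C:3} = Ugly=JustUgly;           (the trailing cookies)
    Rewritten header: Good=Valid; Ugly=JustUgly;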

     

    Now that we have our rule in place, let’s give it a try.

    To demonstrate the effectiveness of the hack, I’m going to browse to a simple ASP.NET page that lists all the cookies in the request header. You can find the source code for the page attached to this post. I’m also using Fiddler to customize my request header, in order to easily control the cookies I’m passing to IIS.

    First off, let’s make sure the page works. Let’s disable our URL Rewrite rule in IIS and start fiddling!

    GET http://be1i.albigi.lab/echocookies.aspx HTTP/1.1
    Cookie: Good=IAmAValidCookie; Bad=IAmANotSoBadCookie; EvilCookie=bXVoYWhhaGFoYQ==; Ugly=LeaveMeAloneIAmJustUgly;
    Host: be1i.albigi.lab

    The EvilCookie is there, as expected. Our poor app is in danger!

     

    Now, let’s enable the rule in the URL Rewrite and touch wood. Time to take another ride!

    GET http://be1i.albigi.lab/echocookies.aspx HTTP/1.1
    Cookie: Good=IAmAValidCookie; Bad=IAmANotSoBadCookie; EvilCookie=bXVoYWhhaGFoYQ==; Ugly=LeaveMeAloneIAmJustUgly;
    Host: be1i.albigi.lab


    Hurray!! No EvilCookie this time!!

     

    One last consideration to make is about where, in the IIS configuration, we may want to set up this hack.

    This is a general question that applies to any URL Rewrite rule we may want to add. Two things we’d better consider are:

        1. If we setup the rule at the site level, it will run when IIS has already mapped the request to a specific site. This means the inbound URL path to match is going to be the relative path of the application, given the domain name can no longer be changed at this stage.

          This configuration is equivalent to setting up the XML in the application web.config file and it applies the rule for an individual site only.

        2. If we setup the rule at the web server level, instead, we can still change the domain name in the URL. This is the case, for instance, if we need to route the request to a different server (ARR Routing to server farms) or if we want the rule to apply for all the websites. This configuration is equivalent to setting up the XML in the applicationHost.config file that stores the whole web server configuration.

     

    I hope you’ll find this one helpful!

     

    Alessandro Bigi (albigi)

     

    Download: echoCookies.aspx

    IIS Dynamic Compression and new Dynamic Compression features in IIS 10


    Dynamic Compression is one of those features that largely go unnoticed in the everyday work a server does, but it is one of the unsung heroes of the Internet, saving bandwidth for each packet of data it compresses. You can find out how to enable dynamic compression for the IIS web-server by consulting the official documentation, which is located here: https://technet.microsoft.com/en-us/library/cc753681(v=ws.10).aspx . You can also read about the benefits of using dynamic compression in the following blog entry: https://weblogs.asp.net/owscott/iis-7-compression-good-bad-how-much .

    This article discusses the way dynamic compression can be configured in the IIS web-server, version 7 and above, as well as some new features introduced in IIS 10 and Windows Server 2016. So let's start with the basics:

    IIS Dynamic Compression configuration:

    Dynamic compression is a feature that allows the IIS web-server to compress responses coming from such handlers as the ASP.net Managed Handler, ISAPI Extensions or CGI handlers that dynamically generate responses for the requests they handle. (For more on handlers and the integrated pipeline, see the video series on IIS Architecture and Components: http://linqto.me/n/IISArchitecture ). Dynamic compression addresses content that is generated dynamically – hence the name – as opposed to static compression, which deals with files that are read from disk and sent across the network to the requesting client (think of PDFs, images, javascript files and CSS files as examples). Contrary to static compression, dynamic compression has to be done on each outgoing response, since there is very little chance that two dynamic responses from a web-application will be the same and that the result of compressing response A can be reused for response B.

    By default, the dynamic compression module is not installed on the IIS server if you are just doing a standard install via the Server Manager and choosing to install the web-server role. Hence, you need to install it by going into the Role Features and checking the 'Dynamic Compression' feature under Web Server (role) > Web Server (feature) > Performance. The configuration for this feature is done at two distinct levels on the IIS web-server. At the server level we can configure which types of content dynamic compression is enabled for, and the cut-off values in terms of CPU consumption at which compression of outgoing responses stops and resumes. You can access these settings by clicking on the server name in the left-side tree view of the IIS Administration Console and then selecting the 'Configuration Editor' from the middle pane.

    Once inside the configuration editor, you can use the 'Section' drop-down (as shown below) to navigate to the system.webServer/httpCompression configuration section. One very good thing about the Configuration Editor is that it allows you to inspect IIS configuration settings that are available and apply to the location in the tree view (server vs site vs web-application) where you are located. Configuration sections that are not present at the level you have selected cannot be configured at that level, even if you manually try to enter them in a web.config file.

    From the listing above, we can see several dynamicCompression parameters that are of interest:

    dynamicCompressionEnableCpuUsage – this value indicates that underneath this CPU consumption threshold, the IIS server can resume compressing dynamic content. Compression can be a CPU consuming activity, and we want to stop using it if the server's CPU is already quite busy with other computations, so as not to degrade performance further.

    dynamicCompressionDisableCpuUsage – this is the inverse of the previous value: it indicates at which CPU consumption threshold the server will stop compressing dynamic content, so that it does not add any more load to an already overloaded server.

    dynamicTypes – this setting is a collection of values which allows the server administrator to indicate which content types dynamic compression should be performed on. This is an important change from Windows 2003 and IIS 6, where a server administrator had to list the extensions of all request urls for which the server should perform static and dynamic compression. Starting with IIS 7, we can list content types (MIME types) and can even resort to wildcard listings such as text/* – which is actually the first MIME type defined for which the IIS server will attempt dynamic compression. Note that we can enable or disable compression for each of the listed types as desired. An example of this is the */* MIME type, for which dynamic compression is disabled by default.
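
    As an illustration, the server-level section in applicationHost.config looks similar to this – trimmed here to just a few of the default entries:

    <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
       <dynamicTypes>
          <add mimeType="text/*" enabled="true" />
          <add mimeType="message/*" enabled="true" />
          <add mimeType="application/x-javascript" enabled="true" />
          <add mimeType="*/*" enabled="false" />
       </dynamicTypes>
    </httpCompression>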

    At the site level / web-application level / folder level, there are two more settings that control dynamic compression. These are accessible via the Configuration Editor when selecting the website / web-application / folder we wish to target from the left-hand side tree view of the IIS Administration Console. The configuration section that contains these settings is system.webServer/urlCompression. The two configuration values that can be used are:

    doDynamicCompression – this setting enables or disables dynamic compression for the website, web-application or folder we are configuring the setting for. It enables or disables the compression for the MIME types that are listed in the system.webServer/httpCompression section.

    doDynamicCompressionBeforeCache – this is an interesting setting whose name might not be self-explanatory. If a dynamic response is determined by IIS to be cachable (it will be stored in cache for the next requests), the question becomes: should IIS compress the response before placing it in cache or not? If doDynamicCompressionBeforeCache is enabled, doDynamicCompression is also enabled, and the response is determined to be cachable, the compression takes place before the response is placed in cache; hence IIS stores a compressed copy of the response. If doDynamicCompression is turned on but doDynamicCompressionBeforeCache is turned off, the response stored in cache will not be compressed: a second request that comes in for the same content will retrieve the content from cache, and the content will have to be recompressed before it is sent to the requesting client. Turning this setting on can greatly reduce CPU consumption in some cases.
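
    In web.config terms, both settings live on a single element; for example, to enable dynamic compression and compress responses before they are cached for a given site:

    <system.webServer>
       <urlCompression doDynamicCompression="true" doDynamicCompressionBeforeCache="true" />
    </system.webServer>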

    Determine if dynamic compression is working for your content:

    The easiest way to determine if dynamic compression is working for the content you are serving from your IIS web-server is to use FREB (short for Failed Request Tracing). You can find out more about FREB in this getting-started guide: https://docs.microsoft.com/en-us/iis/troubleshoot/using-failed-request-tracing/troubleshooting-failed-requests-using-tracing-in-iis . To determine if dynamic compression is working, consider configuring FREB to trace requests you are interested in – such as requests for ASP.net pages (*.aspx) – and set the response code to 200 – OK (you can see a list of all IIS http status codes here: http://linqto.me/IISCodes ), since this is the status code indicating that execution has completed correctly.

    Below is an excerpt from the 'Compact View' tab of a FREB trace taken for an ASP.net webpage that shows that the dynamic compression module has activated and compressed the response:

    As you can see, the response has been compressed from an initial size of 4701 bytes down to 2011 bytes. This is because the response MIME type (text/html) matches the MIME types configured for dynamic compression (it matches text/*), and doDynamicCompression was turned on for the site which served the request for which I captured the FREB trace.

    New features in Windows 2016 and IIS 10:

    In IIS 10, the configuration system of the web-server has been extended to allow some of the server-level configuration sections to be defined at the site and web-application level as well. We can see this very easily by using the same Configuration Editor to inspect the configuration at site or web-application level. Contrary to Windows 2012 R2 with IIS 8.5 and earlier versions of IIS, the system.webServer/httpCompression section is now also available at the site and web-application level. Note in the screenshot below: the tree view on the left indicates that the website called 'ThrottleTest' is selected, and the Configuration Editor is showing the values of the configuration variables inside the system.webServer/httpCompression section for this website, not for the server level:

    You will note that there are six dynamicTypes defined, since I have defined an extra MIME type at the level of my website. This allows a site administrator to define or override the definitions of the dynamic and static compression MIME types that were defined at server level, giving more flexibility to control the way a certain website delivers dynamic content of a specific type (compressed or uncompressed) while leaving the server's settings unchanged. Hopefully, this will be one more reason for everyone using IIS to consider moving to version 10 of the web-server.

    By Paul Cociuba

    Follow what I read: http://linqto.me/about/pcociuba


    Troubleshooting TLS / SSL communication problems for ASP.NET applications making HTTP Web Request or WCF queries to SSL endpoints – Introduction


    This is the introduction post of a series of articles about troubleshooting TLS / SSL communication problems when you make Http Web Request or WCF queries from your ASP.NET applications to SSL endpoints.

    Consider the following set up:

    You are running an ASP.NET application which makes an HTTPS request to an endpoint, and the response will then be sent for display in the end user’s browser. This may be an HttpWebRequest, a WebRequest, or a web service / WCF call to an SSL endpoint.

    To make things easier, we are going to use a very simple ASP.NET 4.6 application with the following code, written purely for demonstration purposes:

    protected void Page_Load(object sender, EventArgs e)
    {
        WebRequest wreq = WebRequest.Create("https://iis85.buggybits.com/");
        WebResponse wres = wreq.GetResponse();
        Stream str = wres.GetResponseStream();
        StreamReader strr = new StreamReader(str);
        string realresp = strr.ReadToEnd();
        Response.Write(realresp);
        strr.Close();
        wres.Close();
    }

    As seen in the code above, the ASP.NET application acts as a client and makes an HTTP call to https://iis85.buggybits.com/, which is hosted on another server.

    We will cover the following problems:

    Scenario 1:

    When we run our application we get the following error message:

    The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

    You can start reading the troubleshooting steps for scenario 1 here.

    Scenario 2:

    This scenario covers the same error message as scenario 1 but there is a different root cause. You can start reading the troubleshooting steps for scenario 2 here.

    Scenario 3:

    This scenario covers the troubleshooting steps for the following error message:

    The remote certificate is invalid according to the validation procedure.

    We are going to use System.Net tracing to find the problem. You can start reading the troubleshooting steps for scenario 3 here.

    Happy troubleshooting...

    Troubleshooting TLS / SSL communication problems for ASP.NET applications making HTTP Web Request or WCF queries to SSL endpoints – Scenario 1


    This is the first part of a series of articles about troubleshooting TLS / SSL communication problems when you make Http Web Request or WCF queries from your ASP.NET applications to SSL endpoints.

    As explained in the introduction article, we will cover some of the problems for our simple ASP.NET 4.6 application which makes an Http Web Request to an SSL endpoint. Here is the simple code:

    protected void Page_Load(object sender, EventArgs e)
    {
        WebRequest wreq = WebRequest.Create("https://iis85.buggybits.com/");
        WebResponse wres = wreq.GetResponse();
        Stream str = wres.GetResponseStream();
        StreamReader strr = new StreamReader(str);
        string realresp = strr.ReadToEnd();
        Response.Write(realresp);
        strr.Close();
        wres.Close();
    }

    As seen in the code above, the ASP.NET application acts as a client and makes an HTTP call to https://iis85.buggybits.com/, which is hosted on another server.

    When we run our application we get the following error message:

    The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

    If we disable the custom errors locally and access the application from the local server, we get the following details:

    Server Error in '/' Application.
    --------------------------------------------------------------------------------
    An existing connection was forcibly closed by the remote host
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    ...
    [SocketException (0x2746): An existing connection was forcibly closed by the remote host]
    System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) +139
    System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +146
    ...
    [IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.]
    System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +742

    This generic error message tells us that the TCP connection was closed, and not in a friendly way at all. The stack trace itself is not very useful here: we only see that the exception is thrown while receiving data from the socket, but there is no clue as to what the ASP.NET application is receiving, and thus no clue about the problem.

    Note that the ASP.NET application acts as a client here and sends an HTTP request to https://iis85.buggybits.com/ running on a remote machine. To reach the remote site, the test website initiates a TCP connection to the remote server. Opening the connection requires a three-way handshake to take place and then, since this connection attempt is made over SSL, an SSL handshake follows. So the failure could be occurring at either the TCP handshake or the SSL handshake – for example, the certificate could be invalid for some reason, or the underlying SSL handshake could be failing.

    To analyze the handshake we can collect a network trace, but in some scenarios it is very useful to run a "browser test" first. The rule is that the ASP.NET client should be able to contact the remote server without receiving a certificate error. Unlike "human clients", an ASP.NET client cannot say "OK, I understand the certificate is not valid for my request, but I'll take the risk and continue to the website" – if it detects a problem with the site's security certificate, it will fail (unless you explicitly tell the HTTP web request to ignore certificate errors, which I do not suggest at all).
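
    As an aside, the way this is typically done in .NET code is via a certificate validation callback – shown here only so you can recognize it when you see it, since it disables certificate validation entirely and should never be left in production code:

    // diagnostics only: this accepts ANY certificate the server presents
    System.Net.ServicePointManager.ServerCertificateValidationCallback =
        (sender, certificate, chain, sslPolicyErrors) => true;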

    If we open a browser on the machine where the ASP.NET application runs and browse the URL we get the following message:

    If you click and continue to the web site and check the certificate details it clearly tells us that there is a name mismatch:

    And if you click "view certificate", you will see that the certificate is a SAN (Subject Alternative Name) certificate obtained for www.buggybit.com, and the SAN DNS entries do not contain our URL, iis85.buggybits.com:

    So, the certificate is not valid. To fix this problem, we go to the IIS machine hosting iis85.buggybits.com and configure the web site with a correct certificate. After that we can run our browser test again, and this time we successfully browse https://iis85.buggybits.com/ – the page opens just fine, without a certificate warning.

    However, our application still fails with the same error message; we will cover this second scenario in our next post.

    Happy troubleshooting...

    Troubleshooting TLS / SSL communication problems for ASP.NET applications making HTTP Web Request or WCF queries to SSL endpoints – Scenario 2


    This is the second part of our series of articles about troubleshooting TLS / SSL communication problems when you make Http Web Request or WCF queries from your ASP.NET applications to SSL endpoints.

    In our first scenario, we troubleshot a "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel" error message. We ran some basic tests, such as a "browser test", and found that the certificate used was not valid. However, fixing the certificate did not solve the issue: when we run our ASP.NET application, which makes an HTTP web request to the https://iis85.buggybits.com/ URL, we still see the same error message telling us that the existing connection was forcibly closed by the remote host.

    About "browser tests"...

Although a browser can be used to test the communication, it may not be the most reliable way to test each and every scenario, because a browser test uses the current user's certificate store, while an ASP.NET application uses the computer account's store. This difference can cause different behaviors.

Also, when testing with a browser, we rely on the browser's TLS negotiation settings and its choices during the SSL handshake. When the client is an ASP.NET application, depending on the .NET configuration, TLS negotiation in the SSL handshake may behave differently.

    A few words about SSL handshake

The SSL handshake is explained in great detail here, so I will not repeat all of it; I suggest you read it if you're unfamiliar with the process. Basically, after the TCP handshake:

• The client sends a Client Hello message to the server, which includes the SSL / TLS versions and the cipher suites the client supports.
• The server responds with a Server Hello message, which includes the SSL / TLS versions and the cipher suites that it supports.
• If the client offers a TLS version lower than the server supports, the negotiation fails.
• If the server responds with a lower TLS version and the client also supports that version, the SSL handshake continues with that version. This is called TLS fallback. For example, if the client supports both TLS 1.0 and TLS 1.2, and the server supports only TLS 1.0, the client may start the SSL handshake with TLS 1.2, and the handshake may then actually happen over TLS 1.0 when the server replies with an "I support TLS 1.0, let's continue with that" message.
• Cipher suite negotiation also happens here (the sketch after this list shows one way to observe the negotiated version from .NET).
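
As a minimal sketch (assuming direct network access to the endpoint, and using our sample host name), you can observe which TLS version is actually negotiated from .NET with an SslStream:

//Requires: System.Net.Sockets, System.Net.Security, System.Security.Authentication
using (var tcp = new System.Net.Sockets.TcpClient("iis85.buggybits.com", 443))
using (var ssl = new System.Net.Security.SslStream(tcp.GetStream()))
{
   //Offer TLS 1.0 through 1.2; the handshake settles on a version both sides support
   ssl.AuthenticateAsClient("iis85.buggybits.com", null,
      System.Security.Authentication.SslProtocols.Tls |
      System.Security.Authentication.SslProtocols.Tls11 |
      System.Security.Authentication.SslProtocols.Tls12, false);

   //Prints the TLS version the server agreed to use
   Console.WriteLine("Negotiated protocol: " + ssl.SslProtocol);
}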

Now is a good time to capture a network trace from the machine where the client ASP.NET application is hosted. It is always good practice to capture the network trace from both the client and the server, and even from intermediate devices or servers, such as proxies, where possible. Having multiple network traces collected from different places along the way makes the issue easier to analyze, because you can check whether the same traffic / packet from the client machine arrives at the remote server without a problem. In our case it is enough to capture and look at the trace from the client machine, as we are currently trying to understand whether the TCP and SSL handshakes complete successfully.

I personally choose Network Monitor, but some of you may prefer Wireshark. The choice of tool to capture and analyze the network trace is up to you.

You can see a screenshot of the network trace I captured while reproducing the same problem:

    (Please note that the TMG is just the name of the server, and not the "Threat Management Gateway", sorry for the confusion)

As seen in the filtered trace above, the TCP handshake is successful (the first three frames). Then the client sends the Client Hello in the fourth frame (frame # 818), but instead of a Server Hello reply it receives a RESET packet from the server, as seen in the sixth frame (frame # 822) – you can see the R flag, which stands for TCP RST (reset), indicated on the frame. So the SSL handshake is failing here.

We know that the TLS version and cipher suites are negotiated in the SSL handshake, so those are the first things to check. If we look at the Client Hello details in the trace, we see that the client requests TLS 1.0 from the server:

This is interesting, because the machine running the ASP.NET application supports TLS 1.2, as we can confirm in the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols (ref.: https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs):

As seen in the screenshot above, TLS 1.2 is enabled for both the client and server roles. If we check the other TLS protocols we can confirm that TLS 1.0, 1.1 and 1.2 are all enabled.

So here is a theory at this point: our ASP.NET client tries to communicate via TLS 1.0, but the remote server does not support TLS 1.0. How can we confirm it? Remember that we have a working scenario: if we browse the same URL with Internet Explorer, the page loads fine. So let's take a network trace of the working scenario and see what the client requests and what the server responds:

    (Please note that the TMG is just the name of the server, and not the "Threat Management Gateway", sorry for the confusion)

As seen above, there is a Client Hello, the server responds with a Server Hello and the certificate (without its private key), and the handshake succeeds. Let's take a closer look at the Client Hello (frame # 256) first:

This time, in the successful scenario with Internet Explorer, we see that the client starts the negotiation with TLS 1.2 – the strongest TLS version supported on the machine.

    Let's take a look at what Server Hello tells the client:

Bingo! The server supports TLS 1.2. It is then expected to fail if the ASP.NET client requests TLS 1.0, which the server does not support.

IE works fine because it uses the same TLS version as the server, but it is interesting that the ASP.NET client goes with TLS 1.0 instead of the stronger TLS 1.2, even though both IE and the ASP.NET client run on the same machine.

    A quick word about Internet Explorer's TLS settings

If you open Internet Explorer's advanced settings you can configure which SSL and TLS versions Internet Explorer may use:

So Internet Explorer uses the strongest version of the protocol. Then why does ASP.NET not use the strongest one? Here a security update comes into the picture:

    Microsoft Security Advisory 2960358 - Update for Disabling RC4 in .NET TLS
    https://technet.microsoft.com/en-us/library/security/2960358.aspx

This update disables RC4 and SSL 3.0 for the .NET Framework, and there are updates for different .NET Framework versions. As described in the article, you can use the following registry keys to force the usage of the strongest TLS version:

    For 32-bit applications on 32-bit systems and 64-bit applications on x64-based systems:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319
    "SchUseStrongCrypto"=dword:00000001

    For 32-bit applications on x64-based systems:

    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319
    "SchUseStrongCrypto"=dword:00000001

Since we are using .NET 4.6.x, instead of setting the registry key we can simply set targetFramework="4.6" in the web.config file to force the use of the strongest TLS version:

    <compilation targetFramework="4.6" />
    <httpRuntime targetFramework="4.6" />
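
If retargeting the application is not an option, a code-level alternative (a sketch, not what this scenario used) is to pin the protocol explicitly at application start, for example in Application_Start in Global.asax.cs:

//Sketch: force TLS 1.2 for all outgoing connections made by this process.
//Retargeting to .NET 4.6+ (as above) is the preferred fix.
System.Net.ServicePointManager.SecurityProtocol =
   System.Net.SecurityProtocolType.Tls12;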

    After setting the targetFramework in my application the TLS version negotiation started to work as expected, as confirmed in the network trace screenshot below:

    (Please note that the TMG is just the name of the server, and not the "Threat Management Gateway", sorry for the confusion)

Now if you check the Client Hello and Server Hello details you will see that the TLS negotiation succeeded, as both sides are using TLS 1.2:

I would like to say "we nailed it" but no, not at all. It is still not working. If you look closely at the network trace, this time the client is sending a FIN packet and closing the TCP connection (frame # 185). Why is that? Let's analyze it in our last scenario.

    Happy troubleshooting...

    Troubleshooting TLS / SSL communication problems for ASP.NET applications making HTTP Web Request or WCF queries to SSL endpoints – Scenario 3


In our first and second posts about troubleshooting TLS / SSL problems, we worked to fix a "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel" error message. We ran some basic tests, such as a "browser test", and found that the certificate used was not valid; then, in the second scenario, we compared network traces of the successful and failing traffic and saw that the TLS negotiation between the client and the server was failing.

We fixed that issue as well, so the TLS versions of the client and the server now match and the certificate is issued for the correct name, but we are still getting an error message.

If you recall the outcome of scenario 2, we were seeing a FIN packet coming from the client and the server resetting the TCP connection:

    (Please note that the TMG is just the name of the server, and not the "Threat Management Gateway", sorry for the confusion)

    This time the error we get is different:

    Server Error in '/' Application.
    --------------------------------------------------------------------------------
    The remote certificate is invalid according to the validation procedure.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.

The exception message tells us that the remote certificate is not valid. We know that we passed the TLS and cipher negotiation, so some other validation must be failing. What could it be, and how can we troubleshoot it?

This is where SChannel ETL logging and System.Net tracing come into the picture. We will not focus on SChannel ETL logging here, as it deserves a post of its own; instead we will use System.Net tracing to troubleshoot this problem.

    A quick word about System.Net tracing

System.Net tracing is a diagnostic option available in the .NET Framework since version 2.0. It is part of the System.Diagnostics namespace and can be used to capture the details of the System.Net classes, including the System.Net.Sockets operations. To enable System.Net tracing you can use the configuration sample from the following article:

    Collect network trace data within your ASP.NET application
    https://blogs.msdn.microsoft.com/amb/2011/02/16/collect-network-trace-data-within-your-asp-net-application/

To enable the tracing we copy the configuration from the article and paste it into our ASP.NET application's web.config file. Pay attention to the folder the traces will be written to:

    <add name="System.Net" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\tracelogs\network_trace.log" />

In the example above, the trace will be written to the c:\tracelogs\ directory, so the application pool's identity must have write permission on that folder.
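
For reference, here is a minimal sketch of the full system.diagnostics section, following the shape used in the linked article (adjust the path and verbosity as needed):

<system.diagnostics>
   <sources>
      <source name="System.Net" tracemode="includehex" maxdatasize="1024">
         <listeners>
            <add name="System.Net" />
         </listeners>
      </source>
      <source name="System.Net.Sockets">
         <listeners>
            <add name="System.Net" />
         </listeners>
      </source>
   </sources>
   <switches>
      <add name="System.Net" value="Verbose" />
      <add name="System.Net.Sockets" value="Verbose" />
   </switches>
   <sharedListeners>
      <add name="System.Net" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\tracelogs\network_trace.log" />
   </sharedListeners>
   <trace autoflush="true" />
</system.diagnostics>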

    How to read System.Net trace logs

If you are not familiar with System.Net trace logs, I suggest you take a few minutes to read the following post:

    Use System.Net Trace and SSL Alert Protocol to troubleshoot SSL connection issue.
    https://blogs.msdn.microsoft.com/webapps/2012/11/05/use-system-net-trace-and-ssl-alert-protocol-to-troubleshoot-ssl-connection-issue/

Once we reproduce the issue, a network_trace.log file should be written to the trace folder we specified. Let's open the trace log in our favorite text editor. Looking closely, you will see the following reported at the end of the trace:

    [Public Key]
    Algorithm: RSA
    Length: 2048
    ...
    ...
    ...
    System.Net Information: 0 : [2504] SecureChannel#26517107 - Remote certificate has errors:
    System.Net Information: 0 : [2504] SecureChannel#26517107 - A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.
    System.Net Information: 0 : [2504] SecureChannel#26517107 - Remote certificate was verified as invalid by the user.
    System.Net.Sockets Verbose: 0 : [2504] Socket#16754362::Dispose()
    System.Net Error: 0 : [2504] Exception in HttpWebRequest#46228029:: - The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel..
    System.Net Error: 0 : [2504] Exception in HttpWebRequest#46228029::GetResponse - The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel..

The error message "A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider" tells us that there is a problem with the CA's (certification authority) root certificate: it is not trusted, so most probably the trusted root CA certificate store does not contain the root CA for the remote certificate.

But remember the browser test: it was working, so the user's trusted root CA certificate store must contain the root CA for the remote certificate. Let's check the certificate once more with a "browser test": open a browser on the server where your ASP.NET application runs, browse the endpoint URL (https://iis85.buggybits.com/ in our case), open the details of the certificate and move to the "Certification Path" tab:

As seen, the certificate is issued by an internal CA called amborp-AMBDC-CA.

    Let's look at the user's certificate store. We confirm that the root certificate is already there:

But this is the user's certificate store. The ASP.NET client will use the "local computer" account's "Trusted Root Certification Authorities" store, and if we open and check that store we see that the CA's root certificate is not installed there. Once we import the CA's root certificate into the computer account's trusted root CA store, the application starts to work fine.
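
As a quick sketch, you can also verify from code whether the computer account's trusted root store contains the CA. The subject name below is the internal CA from this example:

//Requires: System.Security.Cryptography.X509Certificates
using (var store = new X509Store(StoreName.Root, StoreLocation.LocalMachine))
{
   store.Open(OpenFlags.ReadOnly);

   //Look for the internal CA's root certificate by subject name
   var matches = store.Certificates.Find(
      X509FindType.FindBySubjectName, "amborp-AMBDC-CA", false);

   Console.WriteLine(matches.Count > 0
      ? "Root CA found in the LocalMachine Root store."
      : "Root CA NOT found - import it and try again.");
}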

    Summary:

• In our first post, we learnt that the certificate must be valid. Unlike "human clients", an ASP.NET client cannot click "OK, I trust this certificate / web site" when a certificate error occurs, unless you explicitly tell the application to ignore certificate errors, which is not a recommended approach.
• In our second post, we learnt that TLS / cipher negotiation is a requirement for a successful SSL handshake.
• In this third and final post, we learnt how System.Net tracing can help when troubleshooting TLS / SSL related issues.

    If you have any comment or suggestion, please use the comments section below.

    Happy troubleshooting...

    Setting up CORS request with Windows Integrated Authentication and ASP.net CORE


Some time ago, I worked on an issue where a website needed to execute a CORS (Cross-Origin Resource Sharing) request to a second website that was protected by Windows Integrated authentication. Since the setup has a few hidden caveats, and is detailed only in part in various forum posts, I would like to give a detailed walkthrough of how such a setup has to be configured.

Through this tutorial, we will look at the configuration of IIS needed for this scenario to work. This is the trickiest bit, since it relies on some subtleties that I have not been able to find much documentation on – hence this article series.

For the rest of this article, I will explain what the end result should look like and how control should flow between the components.

    The initial diagram – CORS tutorial

For the demo project, I will create an ASP.net Core Razor Pages site with one page, which will be our HTML front-end site. To implement its functionality, this website will need to make JavaScript POST calls to a backend ASP.net Core WebAPI controller in order to send and retrieve data. Both the Razor Pages site and the WebAPI site are hosted using IIS as a reverse proxy and Kestrel as the server.

    The Razor Pages website sits behind an IIS webserver that is configured to listen to requests with a host header of CoreRazor. The WebAPI site sits behind an IIS server that is configured to listen for traffic with a host header of CoreWebApi.

The browser will load the HTML page generated by the Razor Pages application (from http://corerazor) – step 1. It will then execute JavaScript on the client to make XMLHttpRequest calls to the WebAPI site (http://corewebapi), sending and receiving JSON objects – step 2.

    Both these sites are using Windows Integrated Authentication to authenticate incoming requests and make sure only authorized users can access the content.
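
To give an idea of the moving parts on the WebAPI side, here is a minimal sketch of a CORS policy in the WebAPI project's Startup class. The policy name is an assumption for this walkthrough, and the full configuration, including the IIS part, follows in the next articles:

//Sketch of the WebAPI Startup (ASP.net Core 2) - the policy name is illustrative
public void ConfigureServices(IServiceCollection services)
{
   services.AddCors(options =>
   {
      options.AddPolicy("AllowCoreRazor", builder =>
         builder.WithOrigins("http://corerazor")   //the Razor Pages origin
                .AllowAnyHeader()
                .AllowAnyMethod()
                .AllowCredentials());              //needed so Windows credentials flow
   });
   services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
   app.UseCors("AllowCoreRazor");   //must be registered before UseMvc
   app.UseMvc();
}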

The front-end website (ASP.net Core Razor Pages) looks like the following:

When executed, the AsyncPoster Razor Page prints out the date and time at which it was created, using the server's time, since this piece of code runs inside the dotnet.exe process on the server called CoreRazor.

It also displays a text area control that allows a user to input some text and send it to a backend HTTP service, implemented as a WebAPI controller in ASP.net Core 2. This is done by pressing the 'Echo Text' button, which triggers some JQuery script to create a POST request to the CoreWebAPI server, sending along a JSON object containing the text entered in the text area.

Once the request reaches the CoreWebAPI site, it is processed by the POST method of a WebAPI controller called EchoText. All this controller does is capture the text that was sent in, add the 'Echo back: ' prefix, and send the text back to the page as a JSON object (a minimal sketch of what this might look like follows below).
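
As a sketch of the controller's shape (the payload type and property names here are assumptions, not the exact sample code):

//Illustrative sketch of the EchoText controller (ASP.net Core 2 WebAPI)
[Route("api/[controller]")]
public class EchoTextController : Controller
{
   //Assumed shape of the JSON payload sent by the page
   public class EchoPayload
   {
      public string Text { get; set; }
   }

   [HttpPost]
   public IActionResult Post([FromBody] EchoPayload payload)
   {
      //Prefix the incoming text and return it as JSON
      return Json(new { text = "Echo back: " + payload.Text });
   }
}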

Back on the page, the JQuery script that executed the POST request captures the response from the WebAPI controller and displays the returned text inside a div element on the page. It also clears the text area so the user can type in more text.

    Here is what the page looks like when some text has been entered, sent for processing and displayed back:

    Proceed to the next article in the series ->

    By Paul Cociuba
    http://linqto.me/about/pcociuba


