Exchange 2016 Hybrid : TLS negotiation failed with error UnknownCredentials


I was adding a couple of Exchange 2016 CU2 servers to the Hybrid Configuration Wizard to send and receive emails to Exchange Online. In the Exchange Online admin center, I configured the receive connector to verify the subject name on the certificate for TLS authentication.

The problem is that emails are not being sent to Office 365 via the send connector. After enabling protocol logging on my Exchange 2016 hybrid servers [Get-SendConnector "Outbound to Office 365" | Set-SendConnector -ProtocolLoggingLevel Verbose] and opening the SmtpSend log file, I can see many TLS failures:

2016-07-19T12:13:14.863Z,Outbound to Office 365,08D3AFC581A92DD3,3,,,>,EHLO,
2016-07-19T12:13:14.910Z,Outbound to Office 365,08D3AFC581A92DD4,2,,,<,"220 Microsoft ESMTP MAIL Service ready at Tue, 19 Jul 2016 12:13:14 +0000",
2016-07-19T12:13:14.910Z,Outbound to Office 365,08D3AFC581A92DD4,3,,,>,EHLO,
2016-07-19T12:13:15.004Z,Outbound to Office 365,08D3AFC581A92DD3,4,,,<,250 Hello [] SIZE 157286400 PIPELINING DSN ENHANCEDSTATUSCODES STARTTLS 8BITMIME BINARYMIME CHUNKING,
2016-07-19T12:13:15.004Z,Outbound to Office 365,08D3AFC581A92DD3,5,,,>,STARTTLS,
2016-07-19T12:13:15.051Z,Outbound to Office 365,08D3AFC581A92DD4,4,,,<,250 Hello [] SIZE 157286400 PIPELINING DSN ENHANCEDSTATUSCODES STARTTLS 8BITMIME BINARYMIME CHUNKING,
2016-07-19T12:13:15.051Z,Outbound to Office 365,08D3AFC581A92DD4,5,,,>,STARTTLS,
2016-07-19T12:13:15.145Z,Outbound to Office 365,08D3AFC581A92DD3,6,,,<,220 2.0.0 SMTP server ready,
2016-07-19T12:13:15.145Z,Outbound to Office 365,08D3AFC581A92DD3,7,,,*," CN=*, OU=IT, O=contoso International (L.L.C), L=Dubai, S=Dubai, C=AE CN=thawte SHA256 SSL CA, O=""thawte, Inc."", C=US 0D92CFF6070B73AD5722EC8B4DA3389B AAA3D3DADA6891A2CCB3134D0B2D7764F1351BC4 *",Sending certificate Certificate subject Certificate issuer name Certificate serial number Certificate thumbprint Certificate subject alternate names
2016-07-19T12:13:15.145Z,Outbound to Office 365,08D3AFC581A92DD3,8,,,*,,TLS negotiation failed with error UnknownCredentials

I am sure the certificate is fine, as the other hybrid servers are using the same certificate and they are able to send emails to Office 365. Also, in Event Viewer, I am seeing the following error:

TLS Error Office 365 Exchange Hybrid


So finally, I tried something and it worked. I opened the certificate store and checked the permissions on the private key of the certificate I am using for the TLS connection.

TLS Error Office 365 Exchange Hybrid2

I can see the following permissions on the private key:

TLS Error Office 365 Exchange Hybrid3


So I added the Network Service account and gave it Read access. After that, everything worked just fine. If things are still not working, try giving Everyone Read access as a test.

Hope this will help someone, leave a note if it did 🙂

Azure Sync Case – Conflict between UPN and SMTP address

I have been working heavily with the AD sync tool in all its versions, including the Azure AD Connect tool, and I came across an issue that can cause you trouble. In this blog post, I will share my experience with you.


My environment consisted of one AD domain with Exchange 2010 on premises and a single sign-on experience. The AD domain and the SMTP domain are the same.

I have a manager called John Smith:

  • UPN :
  • SMTP address :

This manager came to me asking me to add a secondary SMTP address for him. Since that SMTP address was available, I agreed.

Azure AAD Sync UPN vs SMTP 12121

Life goes on. John is a big manager, and he used his nice clean new SMTP address in all his communications. He even printed the new email address on his business cards.

So far, all our mailboxes are on premises and we do not have any Office 365 implementation.

The Problem

After 10 years, the corporation decided to start using Azure services, and they decided to start with AD sync next month.

Meanwhile, a new employee named John William is hired. The IT department assigned him the following:

  • UPN :
  • SamAccountName : Contoso\John
  • SMTP Address :

Everything is fine so far and there is no single conflict.

Azure AAD Sync UPN vs SMTP 1212122

Now, when we started to sync users to Azure AD, the John Smith user no longer synchronized, and the AD sync tool threw an error for that user.

"Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [ProxyAddresses smtp:]. Correct or remove the duplicate values in your local directory."

After opening a case with Microsoft, we reviewed our AD Sync configuration and after opening the AAD Connect sync tool, we confirmed that we are using ObjectGUID as the anchor attribute and not email address.

Microsoft confirmed that when an on-premises mail-enabled user is synced to Azure AD, he is assigned a secondary SMTP address in the form of his UPN. So in this case, when John William is synced to Azure AD, Azure tries to stamp him with a secondary SMTP address matching his UPN, which causes a conflict with the John Smith user.
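This class of conflict can be caught before syncing by checking, for every user, whether his UPN collides with another user's proxyAddresses. Here is an illustrative Python sketch over made-up sample data (the attribute names mimic AD, but nothing here queries a real directory):

```python
# Hypothetical pre-sync check: flag users whose UPN matches another
# user's existing SMTP proxy address (the conflict Azure AD introduces
# when it stamps a secondary SMTP equal to the UPN).
def find_upn_smtp_conflicts(users):
    # Map every smtp proxy address (lowercased, "smtp:" prefix stripped)
    # to the sAMAccountName that owns it.
    smtp_owner = {}
    for u in users:
        for proxy in u["proxyAddresses"]:
            smtp_owner[proxy.lower().removeprefix("smtp:")] = u["sam"]

    conflicts = []
    for u in users:
        owner = smtp_owner.get(u["upn"].lower())
        if owner is not None and owner != u["sam"]:
            conflicts.append((u["sam"], u["upn"], owner))
    return conflicts

users = [
    {"sam": "JSmith", "upn": "john.smith@contoso.com",
     "proxyAddresses": ["SMTP:john.smith@contoso.com", "smtp:john@contoso.com"]},
    {"sam": "John", "upn": "john@contoso.com",
     "proxyAddresses": ["SMTP:john.william@contoso.com"]},
]

# John William's UPN collides with John Smith's secondary SMTP address,
# even though on-premises AD sees no duplicate at all.
print(find_upn_smtp_conflicts(users))
```

Running a check like this against your full user list before enabling sync surfaces the conflicts while they are still cheap to fix.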

So although there is no conflict on premises, Azure introduces this type of conflict and throws a sync error. Microsoft's answer to us: "THIS IS BY DESIGN, DEAL WITH IT".

How big is this problem?

Now, we contacted John Smith and asked him to give up his lovely SMTP address. What do you think his reaction was? He has been using that email address for years and cannot give it away. Furthermore, if he agreed to give it away, and Azure assigned it to the John William user, then people sending emails to that address would be reaching John William, which is by itself a security and privacy issue.

So we went to John William and asked him to change his UPN and SamAccountName. This solved the problem for a while, until a new employee arrived named John Robert, and IT found that Contoso\John was no longer used in the enterprise, so they assigned it to him.

Suddenly, the sync tool throws a sync error again, and the same loop happens over and over.

Imagine you have many users who used to have clean SMTP addresses and you have to go through them one by one and do this 🙂

Compare your On-Premises AD Users and Azure Users – One by One!

If you are synchronizing your on premise Active Directory with Azure Active Directory, then you know for sure that maintaining a healthy synchronization is not an easy thing to do.

I want to give you my experience in synchronizing a single domain AD to Azure AD, using ObjectGuid as anchor attribute.

The Challenge

I am using Azure AD Connect to synchronize AD objects to Azure AD. The Azure AD Connect tool is nice: it shows you the health of the synchronization process and gives nice error messages if there is a problem synchronizing one of your on-premises AD objects. I used the AD Connect tool for a month and everything was fine. No errors, and everything looked clean.

I then noticed that a couple of users were having problems with Office 365, and I discovered that they had never synced to Azure AD in the first place. I ran Get-MsolUser against them and no results were returned. I had to go to Azure AD Connect and force a full AD import to sort out this issue.

I became so worried about this that I started to compare the number of AD users and Azure AD users, to at least ensure matching numbers. But counting AD users and Azure AD users and comparing the totals is not enough, as there is no guarantee that the same objects are behind those numbers.

I then came up with an idea: take each AD user, collect his ObjectGuid, compute his ImmutableID, go to Azure AD, search for that ImmutableID, and finally link the AD user with his Azure AD copy. We do that for all AD users. Finally, we can identify AD users that do not have Azure copies, and Azure AD users that are not mapped to AD users.

This guarantees that each of your AD users is mapped to an Azure AD user. Along the way, the script generates several pieces of information:

  • AD users not in Azure
  • Azure users not in AD
  • Azure users with Synch Errors
  • Filtered AD users from Synchronization
  • Total number of AD users
  • Total number of Azure users
  • Last AD Sync time
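The core of the mapping is simple set arithmetic once each AD objectGUID is converted to its base64 ImmutableID form. A rough Python illustration of the idea, using invented sample GUIDs in place of real Get-ADUser / Get-MsolUser output:

```python
import base64
import uuid

def immutable_id(object_guid: str) -> str:
    # Azure AD's ImmutableID is the base64 encoding of the GUID's
    # little-endian byte layout (what .NET Guid.ToByteArray returns).
    return base64.b64encode(uuid.UUID(object_guid).bytes_le).decode()

# Hypothetical sample data standing in for the AD and Azure AD queries.
ad_users = {
    "alice": "6a9bf932-8efe-4af4-9c40-169f9e9f65d3",
    "bob":   "0f0c0d28-51d2-4f0a-b0a1-5a2c5d3e4f5a",
}
azure_immutable_ids = {immutable_id(ad_users["alice"])}  # only alice synced

ad_to_azure = {name: immutable_id(g) for name, g in ad_users.items()}
not_in_azure = [n for n, iid in ad_to_azure.items()
                if iid not in azure_immutable_ids]
print(not_in_azure)   # AD users with no Azure AD copy
```

The same set difference in the other direction yields the Azure users that map to no on-premises account.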

Compare you on Premise AD users and Azure users 11

Conditions to use the script

  1. This script assumes you are using ObjectGuid as anchor attribute
  2. If you have multiple domains or forests, or you are filtering the scope of AD users by OU or attribute, then you must write your own script block to populate the $ADusers_Raw variable; search inside the script for the (Raw data collection from AD) region. You can find examples there. By default, the script runs Get-ADUser to retrieve all users.

Compare you on Premise AD users and Azure users 2

Script Visuals

The script provides many visuals to help you see what is going on:

  • The script code is divided into regions that you can expand separately for better script browsing.
  • Progress bars appear while the script runs, giving you a sense of what is going on at each stage.
  • Summary information is displayed on the PowerShell console window with summary statistics.
  • Results are written at the end of the script in two locations:
    • The PowerShell console window.
    • Couple of files that are generated on the script running directory.


The script connects to Active Directory and gets a list of users (Get-ADUser), then connects to Azure Active Directory (AAD) and gets all Azure users (Get-MsolUser). A comparison is then performed by mapping each AD user to his Azure user, identifying unsynchronized AD users that do not have a mapped/synced Azure copy.

This script assumes that ObjectGUID is used as the anchor attribute to link/map AD user and Azure AAD user. If you are using any other attribute, then this script is not for your case.

The script needs the list of AD users that you are synchronizing to Azure AD. The Microsoft sync tool gives you the ability to filter by OU or by attribute. Another tricky part is when you are synchronizing multiple domains or perhaps multiple forests. For all those cases, it is your job to populate the variable ($ADusers_Raw) located under the (Raw data collection from AD) region in the script.
By default, the script runs Get-ADUser to populate this variable. In your case, you may need to write your own script block to collect all the AD user objects you are syncing to Azure.

Script Parameters

.PARAMETER ScriptFilesPath
Path to store the script output files. For example, C:\ or '.\' to represent the current directory.

.PARAMETER Proxy
As the script needs to connect to Azure, internet connectivity is required. Use this option if you have an internet proxy that prompts for credentials.

.PARAMETER CreateGlobalObject
When this switch is used, the script generates an extra CSV file that contains a unified view of each user, with properties from both on-premises AD and Azure AD.


.\Compare-CorpAzureIDs.ps1 -ScriptFilesPath .\

.\Compare-CorpAzureIDs.ps1 -ScriptFilesPath .\ -Proxy:$true

.\Compare-CorpAzureIDs.ps1 -ScriptFilesPath .\ -CreateGlobalObject

Download the script

You can download the script from here.

Azure GUID to ImmutableID and Vice Versa Desktop App

When working with Azure AD, and you deploy one of the Azure AD Synchronization tools like AD Connect, by default objectGUID is used as your ImmutableID. When objects are synced to Azure AD, objects in the cloud will have a converted value of the objectGUID in a base-64 version called ImmutableID.

Let us say a user called John exists in your AD, and his objectGUID is something like this:

Objectguid to immutableID1

The user objectGUID is converted  to base-64 and stored in AD Sync tool metaverse as (sourceAnchor) , and in Azure AD as ImmutableID:


So sometimes you want a tool that converts from objectGUID to ImmutableID and the other way around.
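The conversion itself is a one-liner in either direction; here is the equivalent logic in Python (in PowerShell, the forward conversion is [Convert]::ToBase64String(([Guid]$guid).ToByteArray())). The GUID below is an invented sample:

```python
import base64
import uuid

def guid_to_immutable_id(guid: str) -> str:
    # .NET's Guid.ToByteArray() emits a mixed-endian layout, which
    # uuid.UUID(...).bytes_le reproduces exactly.
    return base64.b64encode(uuid.UUID(guid).bytes_le).decode()

def immutable_id_to_guid(immutable_id: str) -> str:
    return str(uuid.UUID(bytes_le=base64.b64decode(immutable_id)))

g = "64b2fd1b-76b9-4b8c-8f0c-08d3afc581a9"   # sample GUID
iid = guid_to_immutable_id(g)
assert immutable_id_to_guid(iid) == g         # round-trip is lossless
print(iid)
```

Note the little-endian detail: plain base64 over the GUID's textual bytes gives a different, wrong value.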

So I created a simple desktop application that you can use to easily convert between an Azure ImmutableID and an AD objectGUID.

Azure Identity Converter Desktop App

The application is quite small (about 500 KB), as you can see below:


Just double click it and the app will open:


Now you can simply enter an AD GUID and it will compute the ImmutableID (Azure ID for that GUID)


Or you can enter an Azure ImmutableID and it will compute the object GUID in your AD:


Download the APP

You can download the APP from here. For those who are afraid to run an untrusted application, rest assured that the tool does not install anything; it only prompts for values, and no admin rights are needed :)

Exchange Email Moderation Super Cool Script – Must Have

Hi everyone,

Email moderation is one of my favorite mail flow restriction features in Exchange. You can assign one or more moderators to a group, so that an email sent to the moderated group is released only after one of the moderators approves it.

You can view email moderation information in the Exchange GUI admin tools, but for dynamic moderated groups you must use PowerShell to view and configure email moderation. Usually, dynamic groups with country or office filter criteria contain lots of people, and those are exactly the groups you want moderated.

No Dashboard for moderation info

The first issue people have with email moderation is getting a report of all moderated groups, their moderators, and their bypass-moderation recipients. There is no dashboard that shows all this information in one place.

Disabled or Orphan Moderators

The second issue that email administrators face is moderation list maintenance. Suppose GroupA has one moderator, John. John decides to leave the company, and his account is disabled. Now GroupA has no moderators: its moderation status is set to true, but there is nobody to moderate it. A cleanup job needs to run frequently to check the health and existence of the moderators.

Single Moderator Issue

Moreover, the best recommendation is to have at least two moderators for each moderated group, so that if one of them is unavailable or on leave, the other can moderate the group. You may want regular checks to detect moderated groups with only one moderator.
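The health checks described above boil down to three rules per group: empty moderator list, single moderator, and disabled moderator. Sketched in Python over made-up group records (the real data would come from Get-DistributionGroup / Get-DynamicDistributionGroup):

```python
# Hypothetical moderation health check over sample group data.
groups = [
    {"name": "GroupA", "moderated": True, "moderators": ["john"]},
    {"name": "GroupB", "moderated": True, "moderators": []},
    {"name": "GroupC", "moderated": True, "moderators": ["amy", "sam"]},
]
disabled_accounts = {"john"}  # accounts that left the company

def moderation_warnings(group):
    warnings = []
    if group["moderated"]:
        if not group["moderators"]:
            warnings.append("empty moderator list")
        elif len(group["moderators"]) == 1:
            warnings.append("single moderator")
        if any(m in disabled_accounts for m in group["moderators"]):
            warnings.append("disabled moderator")
    return warnings

for g in groups:
    print(g["name"], moderation_warnings(g))
```

GroupA trips both the single-moderator and disabled-moderator rules, GroupB the empty-list rule, and GroupC passes clean.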


I have created a PowerShell Script that you can run, and it will do the following:

  1. Generate CSV file that lists all moderated groups in your environment with the following Info:
    1. Group Name
    2. Dynamic mailing group or not
    3. Moderators list
    4. Bypass moderation list
    5. Managed By list.
    6. Email Address
    7. Alert column if a single moderator is detected.
    8. Health field to indicate if one of the moderators is disabled or no longer has a mailbox.
    9. Empty Moderator List warning
  2. Three log files will be generated: one for information, one for groups with empty moderator lists, and one listing groups whose moderators have disabled mailboxes.


Download the Script here

You can download the script from here Get-CorpModerationInfo

Exchange Online EOP and Send on Behalf

Someone asked me recently about an interesting scenario in which emails are sent on behalf of another party, and how Exchange Online Protection (EOP) acts in this case.

There are two FROM values in the SMTP world:

  • RFC 5321  (MailFrom)
  • RFC 5322 (From)

Outlook displays the RFC 5322 From address to end users, and this is the address used in the user's safe senders list.

EOP inspects both values for blocked and allowed senders and domains. Exchange Online Protection EOP and outlook handle safe sender lists differently.

In most cases, those two values are the same which is normal. Things become interesting when someone is sending emails on behalf of another party. Let us take a simple example:

Mailfrom vs MAIL RFC

  • Contoso corporation wants to send email to its customers, so they contracted a third party to send their news emails.
  • The contractor company sends the news email on behalf of Contoso.
  • The email that was sent has the following values:
    • RFC 5321 MailFrom:
    • RFC 5322 From:
  • One of the customers using EOP receives the news email, and in Outlook he can see that the sender is
  • The user added this address to his safe senders list.
  • Because EOP inspects both RFC From addresses, the next time an email is sent by the contractor, EOP will whitelist that email, respecting the user's safe senders list.

Usually the RFC 5321 address is the one EOP uses for SPF checks and for sending NDRs or bounced messages.
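To make the two values concrete: the RFC 5322 From is a message header, while the RFC 5321 MailFrom is the SMTP envelope sender that is passed to the transport separately and never appears in the message body. A small Python illustration with invented addresses:

```python
from email.message import EmailMessage

# The RFC 5322 From lives in the message headers (what Outlook shows).
msg = EmailMessage()
msg["From"] = "news@contoso.com"            # RFC 5322 From (hypothetical)
msg["To"] = "customer@fabrikam.com"
msg["Subject"] = "Contoso newsletter"
msg.set_content("Hello from Contoso!")

# The RFC 5321 MailFrom is the SMTP envelope sender, supplied separately
# to the transport, e.g. via smtplib's send_message(from_addr=...).
envelope_from = "bounce@mailer.example.net"  # RFC 5321 MailFrom (hypothetical)

print("RFC 5322 From:", msg["From"])
print("RFC 5321 MailFrom:", envelope_from)
```

In a send-on-behalf scenario, the two values diverge exactly as in the Contoso example above, which is why EOP inspects both.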

Reference article:

Office 365 – This may indicate invalid parameters in your hybrid configuration settings.

I came across a case in which an organization wanted to implement Office 365 Hybrid Exchange, and they had a tenant with the following licenses (subscriptions): EOP, EOA, DLP.

They initiated the Exchange Hybrid wizard and the wizard reported the following error:

INFO : Session=Tenant Cmdlet=Set-OnPremisesOrganization -HybridDomains {} -InboundConnector 'Inbound from 866c6111-4067-46e1-bb8b-df0637594a40' -OutboundConnector 'Outbound to 866c6111-4067-46e1-bb8b-df0637594a40' -OrganizationRelationship 'O365 to On-premises - 866c6111-4067-46e1-bb8b-df0637594a40' -OrganizationName 'contoso' -Identity '866c6f7c-4067-46e1-bb8b-df0637594a40' START
INFO : Session=Tenant Cmdlet=Set-OnPremisesOrganization FINISH Time=781.2552ms
ERROR : Subtask Configure execution failed: Configure Mail Flow
Execution of the Set-OnPremisesOrganization cmdlet has thrown an exception. This may indicate invalid parameters in your hybrid configuration settings.
A parameter cannot be found that matches parameter name 'HybridDomains'.
at System.Management.Automation.PowerShell.CoreInvokeRemoteHelper[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings)
at System.Management.Automation.PowerShell.CoreInvoke[TInput,TOutput](PSDataCollection`1 input, PSDataCollection`1 output, PSInvocationSettings settings)
at System.Management.Automation.PowerShell.CoreInvoke[TOutput](IEnumerable input, PSDataCollection`1 output, PSInvocationSettings settings)
at System.Management.Automation.PowerShell.Invoke(IEnumerable input, PSInvocationSettings settings)
at Microsoft.Exchange.Management.Hybrid.RemotePowershellSession.RunCommand(String cmdlet, SessionParameters parameters, Boolean ignoreNotFoundErrors)


We verified that the user running the Hybrid wizard is a Global Administrator on the tenant. The strange thing is that the error states that the Set-OnPremisesOrganization cmdlet does not have a parameter called HybridDomains. If you check TechNet, you can see this is not true. So how come?


After spending a couple of days with Microsoft support, it turned out that we had to add at least one E3 Office 365 license to the tenant in order for the PowerShell commands to work properly.

Exchange 2013 Certificate Revocation Failed

Hi everyone,

I want to share with you my personal experience in troubleshooting an interesting problem where Exchange 2013 management interface shows the status of a certificate that I had imported as (Revocation Status Failed).

So why is this happening? When Exchange 2013 enumerates the certificates in the computer store for the Exchange Admin Center, it tries to check the revocation status of each certificate to make sure the certificate is valid. To do that, it tries to download the CRL (Certificate Revocation List) file from the internet by looking at the certificate's (CRL Distribution Points) attribute.

CRL Certificate Exch2013

This CRL file download happens in the background when the server is restarted, under the SYSTEM account. So the SYSTEM account is trying to download something from the internet in the background, and it will use the IE proxy settings configured for the SYSTEM account, which default to auto-detect proxy settings.

Since the server is not configured to use DHCP, the auto-detect process goes to DNS and searches for a WPAD record, and since I have such a record in my DNS pointing to my proxy, the SYSTEM account tries to connect to my proxy, perhaps authenticates, and then tries to download the CRL file.

This also means that every time the SYSTEM account on Exchange 2013 needs to connect to the internet, it will do so via my proxy, which is something I do not like. I would rather have a direct connection from Exchange 2013 to the internet, especially when we are talking about hybrid configuration and Office 365.

How to solve this issue?

I started thinking: if I could log on to the computer using the SYSTEM account, open IE, and remove the auto-detect proxy setting, then the problem would be solved, and I would have direct internet connectivity that eliminates any complexity or authentication requirements on my proxy.

So I went to one of my favorite sites [Windows Sysinternals], downloaded the PsExec tool, and copied it to the C:\ drive of my Exchange server. This tool can launch an executable remotely or locally under the Local System account.

The idea is to run CMD interactively under the SYSTEM account and then open IE from there. Once IE is open in front of me under the SYSTEM account, I can clear the proxy auto-detect checkbox. To do that, I logged on as a local administrator to one of my Exchange 2013 servers where PsExec is copied to the C: drive, and ran:

psexec -i -d -s cmd

CRL Certificate Exch2013 2

This opens a new CMD window. From that window, I can type whoami and see that the CMD window is running under the SYSTEM account.

CRL Certificate Exch2013 3

Now, I will open IE using SYSTEM context.

CRL Certificate Exch2013 4

and from there I will remove the auto-detect proxy settings, so that SYSTEM will not use proxy when connecting to the internet to fetch the CRL of my certificate.

CRL Certificate Exch2013 5

Get Software Update Size from Configuration Manager 2012/R2 [Report/PowerShell]

Hi everyone,

I worked in an environment where distribution points are centralized, and we had to plan our software update distribution via Configuration Manager 2012/R2 very carefully, making sure that a software update group would not have more than 300 MB of content.

If you browse the Config Manager Software Updates node, you can filter and sort the updates that you are interested in, but you cannot see the Size of those updates right away. You have to go to each update, check its properties, and see what content files are part of that specific update, and then calculate the size.

I created a SQL query that can filter updates by a time frame, and will report back the list of output updates along with the size of each update, so you can easily plan your Software Update Group membership.

SQL query

SELECT
	LOC.DisplayName,
	(SUM(FileSize)/1024)/1  as 'Size in KB',
	Case (UI.IsDeployed)
		When 0 Then 'No' 
		Else 'Yes' 
		End as 'Deployed',
	Case (UI.IsExpired)
		When 0 Then 'No' 
		Else 'Yes' 
		End as 'Expired',
	Case (UI.Severity)
		When 2 Then 'Low'
		When 6 Then 'Moderate'
		When 8 Then 'Important' 
		When 10 Then 'Critical' 
		Else 'NA' End as 'Severity',
	Case (UI.IsSuperseded)
		When 0 Then 'No' 
		Else 'Yes' 
		End as 'Superseded'

FROM v_UpdateContents JOIN v_UpdateCIs 
	ON v_UpdateCIs.CI_ID = v_UpdateContents.CI_ID

JOIN CI_ContentFiles 
	ON v_UpdateContents.content_id = CI_ContentFiles.Content_ID

JOIN v_ConfigurationItems CI
	ON CI.CI_ID = v_UpdateContents.CI_ID

LEFT JOIN v_LocalizedCIProperties_SiteLoc AS LOC 
	ON LOC.CI_ID = CI.CI_ID

LEFT JOIN v_CICategoryInfo TYP
	ON TYP.CI_ID = CI.CI_ID AND TYP.CategoryTypeName = 'UpdateClassification' 

LEFT JOIN v_UpdateInfo UI 
	ON UI.CI_ID = CI.CI_ID   

WHERE UI.DatePosted BETWEEN '1/1/2015' AND '2/1/2015'

GROUP BY
	LOC.DisplayName,
	Case (UI.IsDeployed)
		When 0 Then 'No' 
		Else 'Yes' 
		End,
	Case (UI.IsExpired)
		When 0 Then 'No' 
		Else 'Yes' 
		End,
	Case (UI.Severity)
		When 2 Then 'Low'
		When 6 Then 'Moderate'
		When 8 Then 'Important' 
		When 10 Then 'Critical' 
		Else 'NA' End,
	Case (UI.IsSuperseded)
		When 0 Then 'No' 
		Else 'Yes' 
		End

ORDER BY DisplayName

Of course, you have to change the line WHERE UI.DatePosted BETWEEN '1/1/2015' AND '2/1/2015' as you want.

PowerShell Script

I then integrated this query into a PowerShell script that takes the following parameters:

  1. StartDate : Filter updates from this date. Format is mm/dd/yyyy
  2. EndDate : Filter updates up to this date. Format is mm/dd/yyyy
  3. Path_to_CSV : path to the CSV generated output.
  4. SQLConnectionString : to connect to your Config Manager Database.

The script will have two outputs:

  • CSV File containing the output data.
  • Grid View

CM Update Size

Download the script

You can download the script from Microsoft Script Gallery here.


The main SQL view I used is v_UpdateCIs, which, according to Microsoft:

Lists all of the software updates configuration items, by CI_ID and CI_UniqueID. The information in this view is a subset of information from the v_ConfigurationItems view, retrieving all records where the configuration type is Software Updates or Software Updates Bundle (CIType=1 or 8), including article ID, bulletin ID, severity, date created, whether the update is deployed, and so on.

The second view is v_UpdateContents, which is confusing at first because it contains items with content by CI_ID. This view lists software bundles, not individual updates. A software bundle can contain many content files (ContentCI_ID column). This is where things become tricky. Check this TechNet article.

Now we will use the ContentCI_ID column as a join factor to CI_ContentFiles table to get more information about those content files that a software bundle contains.

From there, it is just a matter of joining more views to get the actual size of those individual content files, and then a GROUP BY clause to group the individual content files under the software bundle they belong to.

Configuration Manager 2012 R2 Reporting Services and SSL Trusting and Binding Issues

Hi everyone, I was configuring Reporting Services for Configuration Manager 2012 R2 the other day, and I ran into two annoying issues when it comes to SSL binding.

I want to share with you the two issues and how I solved them.

Duplicate SSL binding, or Unknown appearing in the https URL

So you have a certificate in the personal store on the reporting server, and you have already configured a certificate binding via the (Web Service URL) page in Reporting Services Configuration Manager. Now you want to change the URL name, and thus the certificate, so you went to the certificate store and deleted the certificate before using Reporting Services Configuration Manager to unbind it. Now, every time you try to add another certificate, Reporting Services Configuration Manager keeps showing the old name from the old certificate, or worse, displays an Unknown URL.

Also the following event appears in the event viewer.

Event ID 110, Source: “Report Server Windows Service”, Details: “The value for UrlRoot in RSReportServer.config is not valid. The default value will be used instead”.

Configuration Manager 2012 R2 Reporting Services and SSL 1

To investigate further, I went to the configuration file on the reporting server, located at C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config. In this file, you can find all the bindings, including the one for the old name.


Deleting the section containing the old name solved the issue. Here is the part I deleted:


When you attempt to connect to the data source in Report Builder 3.0 with CM 2012, you receive the following error message: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 – The certificate chain was issued by an authority that is not trusted.)

So you are running Configuration Manager (2012 R2 in my case), you have Reporting Services (on another machine in my case), and you want to build a report using SQL Report Builder (version 3.0 in my case). When you try certain actions, you get ugly errors about the certificate chain not being trusted.

Configuration Manager 2012 R2 Reporting Services and SSL 2

The reason is simple: the SCCM SQL server is using a self-signed certificate for this operation. You either have to trust this certificate on the computers you run SQL Report Builder from, or issue a trusted web server certificate and assign it to the SCCM SQL server.

The best way to find out where this self-signed certificate lives is to go to your SCCM SQL server and open SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for… > right-click > Properties > Certificate tab, then click View.

Configuration Manager 2012 R2 Reporting Services and SSL 3

The simplest way is to export this certificate and import it into the Trusted Root Certification Authorities store on the computer on which you will be running SQL Report Builder.

Tip: a good reference that I recently read after posting this blog post can be found here. Check it out.

Certificate Enrollment Web & Policy Service (CES & CEP)

I talked previously here about the Certificate Enrollment Web Service (CES) and the Certificate Enrollment Policy Web Service (CEP). In this blog post, I want to share my experience deploying these services on Windows 2012 R2, using Kerberos (Windows integrated) authentication.

You can refer to the below Microsoft TechNet pages for step by step details and information:

I guess my previous blog post and these TechNet articles give you all the information you need to deploy CES and CEP. What is missing is a sense of experience and a couple of screenshots.

Assumption: you have a Microsoft Enterprise CA on your network called CA-1, with the common name (Corporate Contoso Issuing CA). You have two Windows 2012 R2 servers that will be used to install CES and CEP: one called CES-1 and the other CEP-1.

We want internal domain joined computers to enroll for certificates using Windows Integration.


Installing CES

First of all, check the installation requirements here, then log on to a new Windows 2012 R2 server using a powerful account that is:

  • A member of the Enterprise Admins group.
  • Granted Request Certificates permission on the target certification authority (CA).

Before you start to install the CES role on CES-1 server, create an SSL certificate with Server Authentication purpose, and put it in the personal computer store on the CES-1 computer.

Now, from Server Manager, follow the below steps to install the CES role:



  • After you finish installing the role using Server Manager, you need to do the Post-deployment Configuration.


  • You have to write down your internal enterprise CA server name. Do not check the (Configure the Certificate Enrollment Web Service for renewal-only mode).


  • Select Windows Integrated Authentication

CES 105

  • On the Service Account page, it is recommended to use a custom account and not leave the default. This account is simply the account that will run the application pool.
  • The account should be:
    • A member of the local IIS_IUSRS group.
    • Granted the Request Certificates permission on the CA server.
    • Trusted for delegation to the CA.
    • Registered with an HTTP SPN for the service URL.

To do this, go to AD, create an account called Svc_CES (for example), and do the following:

  1. Add it to the IIS_IUSRS local group on the CES-1 server.
  2. Go to the CA server, open the Certification Authority console, check the properties of the CA, and on the Security tab, make sure the account has Request Certificate right.
  3. Open the Account property in AD, go to the delegation tab, use (Trust this user for delegation to specified services only/Kerberos Only), and choose (HOST and rpcss) when targeting the CA-1 server


Finally, register the SPN. Open CMD with domain administration rights and register the SPN as per the following:

setspn -s http/ contoso\svc_CES
  • Now select the SSL certificate with the subject name that the installation wizard will assign to the IIS web site.


Once the installation is done, go to IIS and check that there is a web site created for CES, and that svc_CES is running its application pool.


Important note

My CA common name is (CorporateIssuing CA IV), and it contains spaces. Sometimes you may need a couple of tweaks to make sure the URL format in IIS is good. Fixes are mentioned here. In my case, although my CA common name contains spaces, I found that everything works fine and I did not need to apply any of the mentioned fixes. Maybe it is because I am running Windows 2012 R2 🙂

Installing CEP

Before you start installing CEP on a different domain-joined Windows 2012 R2 server called CEP-1, make sure to request and install an SSL certificate in the computer's Personal certificate store on that machine.

Now, go to the CEP-1 server, open Server Manager and start the Add Roles wizard.


You will be asked to choose an authentication method (we will use Windows Integrated Authentication in this case).


You will then be asked to choose an SSL certificate; choose the one we installed on this server previously.

Final Steps:

You have to do two things:

  • Configure a friendly name for the Certificate Enrollment Policy Web Service
  • Configure GPO to point targets to the new Policy Enrollment URL.

Details about how to do this can be found here.


[Hash Techniques] : Message Authentication Code MAC, HMAC and Password Salting

In this blog post, I will talk about the authenticity problem in the digital world, and I will start with the basics. It is easy for people to understand encryption (confidentiality), but it becomes tricky when we talk about integrity and authenticity. While integrity is making sure the data has not been modified since the last time we looked at it, authenticity means that the recipient may reasonably be certain that a message was truly created by its purported author. Integrity and authenticity serve different purposes, but they are related to a certain extent. Let us discuss a simple message exchange between Alice and Bob.

Confidentiality via Encryption

Let us suppose Alice and Bob are exchanging a secret message (m) over an open channel. “Eve” on the other hand is listening to the channel. Using an encryption and a shared secret key, both Alice and Bob can exchange their messages without Eve knowing the content, thus confidentiality is ensured.

But there is another problem: Eve can do more than listen to the message if he has even a little control over the channel. In that case, Eve can change the message that Alice sent, so that Bob receives a different message. The integrity of the message is compromised.
Actually, if Eve has control of the channel, he can do other nasty things. He can learn the message (m), record it and resend it to Bob later, or even delete the message completely so that Bob receives nothing.


Hash functions alone do not equal integrity

One solution to the integrity problem is that Alice could compute the hash of the message, and send both the message and the hash to Bob. Bob can then read the message, recompute its hash, and compare it with the hash value received from Alice. The problem here is that Eve could intercept the message that Alice sent, create a new message, and then send the new message along with the hash of the new message to Bob. Bob would then do the same computation and think the message was sent by Alice.
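A quick sketch of why a plain hash proves nothing about the author (SHA-256 is used here as an assumed example; the message strings are hypothetical):

```python
import hashlib

def bob_verifies(message: bytes, received_hash: bytes) -> bool:
    # Bob recomputes the hash and compares it with what he received.
    return hashlib.sha256(message).digest() == received_hash

# Alice sends (m, h(m))
m = b"Meet me at noon"
alice_hash = hashlib.sha256(m).digest()
assert bob_verifies(m, alice_hash)

# Eve intercepts and substitutes her own message and its hash.
m2 = b"Meet Eve at noon"
eve_hash = hashlib.sha256(m2).digest()
assert bob_verifies(m2, eve_hash)  # Bob's check passes: the hash alone proves nothing
```

Because anyone can compute the hash of any message, the check only detects accidental corruption, not deliberate substitution.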

Message Authentication Code MAC

Consider that Bob just received a message. Why should Bob believe it came from Alice? Without some proof of origin, the message is completely useless. Eve, as we discussed before, could send a new message to Bob with a matching hash value to trick him.

To resolve this problem, Authentication is introduced. Like encryption, authentication uses a secret key that Alice and Bob both know. We will call this the authentication key (Ka). When Alice sends the message m, the following occurs:

  1. Alice and Bob share a secret authentication key Ka.
  2. Alice computes a message authentication code, or MAC as a function of both the message m and the authentication key Ka.
  3. Alice then sends both the message m and MAC to Bob.
  4. Bob will receive both message m and the MAC.
  5. Bob re-computes what the MAC value should be using his own copy of the authentication key Ka, and the received message m.
  6. Bob checks if the received MAC value equals his computation of MAC.

In this way, there is no way that Eve could change the message and send his own MAC value, because he does not know what it takes to compute an authentic MAC, as he has no knowledge of the shared secret Ka.


Now suppose Eve modifies the message m to a different message m2. Bob will then compute the MAC value as a function of (m2, Ka) and compare it to the received MAC value. A good MAC function will not give the same result for two different messages, so Bob will recognize that the message is not correct.
Eve can still do nasty things. For example, Eve can replay or resend the same messages to Bob, or even change the order of messages. To address this, a sequence number can be applied to each message, so that Bob can verify the order and uniqueness of incoming messages.
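The steps above can be sketched with Python's standard hmac module (a minimal illustration; the key and message values are hypothetical, and the sequence number covers the replay/reordering point):

```python
import hmac
import hashlib

def compute_mac(key: bytes, seq: int, message: bytes) -> bytes:
    # Include a sequence number in the MAC input so replayed
    # or reordered messages are detected as well.
    data = seq.to_bytes(8, "big") + message
    return hmac.new(key, data, hashlib.sha256).digest()

ka = b"shared authentication key"

# Alice sends (seq, m, MAC)
seq, m = 1, b"Pay Bob 100 dollars"
mac = compute_mac(ka, seq, m)

# Bob recomputes the MAC with his copy of Ka and compares in constant time
assert hmac.compare_digest(mac, compute_mac(ka, seq, m))

# Eve tampers with the message: Bob's verification fails
assert not hmac.compare_digest(mac, compute_mac(ka, seq, b"Pay Eve 100 dollars"))
```

Without Ka, Eve cannot produce a MAC that passes Bob's check for any message of her choosing.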

MAC modes

So how is a MAC computed? A MAC is a function of a shared secret Ka and the input message m. Both parties should share a secret authentication key before starting to use MACs. A MAC can be computed via encryption or via hashing, as we will see next.

Via Encryption (CBC-MAC)

Encryption can be used to compute a MAC value, as when using CBC encryption. In this mode, the message is encrypted with a block cipher in CBC mode, and then we throw away all but the last block of ciphertext, which becomes the MAC.

Via Hashing (HMAC)

A hash function can also be used to compute a MAC; the standard construction is called HMAC. To do this, you perform the following computation:

h(Ka XOR a || h(Ka XOR b || m))

  • XOR = Exclusive OR
  • || = Concatenation
  • h = hashing
  • a, b = padding constants (known as the outer and inner pads, opad and ipad)
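The formula above is exactly the HMAC construction. As a sanity check, here is a short Python sketch that builds it by hand for SHA-256 and compares the result against the standard library's hmac module:

```python
import hashlib
import hmac

def hmac_sha256_manual(key: bytes, msg: bytes) -> bytes:
    block = 64  # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()  # long keys are hashed first
    key = key.ljust(block, b"\x00")         # then zero-padded to the block size
    opad = bytes(b ^ 0x5C for b in key)     # Ka XOR a
    ipad = bytes(b ^ 0x36 for b in key)     # Ka XOR b
    inner = hashlib.sha256(ipad + msg).digest()   # h(Ka XOR b || m)
    return hashlib.sha256(opad + inner).digest()  # h(Ka XOR a || inner)

key, msg = b"Ka", b"hello"
assert hmac_sha256_manual(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```

The two fixed pad constants (0x5C and 0x36 repeated to the block size) are what the a and b in the formula stand for.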


Salting vs MAC

Hashing can be used in many different ways. You can use it with an extra data and computation to get different usages. Here are two examples:

  • Add a shared secret key to the message, and you can use the hash as a MAC to preserve integrity and authenticity.
  • Add some extra (not secret) data to the message before you hash it, and you make your hash function more resistant to rainbow table attacks [the salting technique].

We have already covered MAC in this post, so I will talk about the salting technique here.

Password salting is a way of making password hashing more secure by adding a random string of characters (the salt) to passwords before their hash is calculated, which defeats precomputed look-up attacks.

Important points :

  • In MAC, the shared key is secret while the message is not.
  • In password salting, the password is the secret while the additional data (the salt) is not.
  • No two passwords should use the same salt value.

We use salting when storing passwords. Passwords are stored using hash techniques and never using encryption. The reason is that encryption is a two-way operation, while hashing is one-way. The most famous attack against hashing is a technique called rainbow tables, where all words and phrases in a dictionary are hashed, and a look-up is made to compare the stored hash value with the rainbow table content for a match. So if someone has (Password) as his password, most probably the hash value of this password exists in the rainbow table, and thus it can be cracked.

Unsalted passwords chosen by humans tend to be vulnerable to dictionary attacks, since they have to be both short and meaningful enough to be memorized. Since salts do not have to be memorized, they can make the size of the rainbow table required for a successful attack prohibitively large without placing a burden on the users.
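As a minimal sketch of salted password storage (using Python's standard pbkdf2_hmac; the salt length and iteration count here are illustrative choices, not a recommendation):

```python
import hashlib
import os

def hash_password(password: str, salt=None):
    # A fresh random salt per password defeats precomputed rainbow tables.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    # Re-derive the hash with the stored salt and compare.
    return hash_password(password, salt)[1] == expected

salt, stored = hash_password("Password")
assert verify_password("Password", salt, stored)
assert not verify_password("password", salt, stored)

# The same password with a different salt yields a different hash, so two
# users with identical passwords do not reveal themselves in the database.
salt2, stored2 = hash_password("Password")
assert stored != stored2
```

Only the salt and the derived hash are stored; the password itself never is.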

Final Thoughts

MAC can be said to provide authenticity and integrity to some extent. TLS is a practical example of where MACs are used in real life.

Salting is used when storing password hash values to make them more resistant to attacks using rainbow tables. You should always use a different salt for each password hash.

You should never use the same key for authentication and encryption unless you know what you are doing. For example, do not use the same key for encryption and for the MAC shared key.

In the PKI world, the digital signature is the rough equivalent of a MAC. But instead of Alice and Bob sharing the same secret, with digital signatures Alice computes the signature using her private key, while Bob verifies it using Alice's public key. While a MAC is similar to a digital signature, a MAC is faster but, on the other hand, does not provide non-repudiation as digital signatures do.

Hash Function – Simplified in cool slides

I was presenting the concept of hash functions to some developers who have little knowledge of cryptography, and it was very challenging to simplify the concept in a visual way. So I decided to use an extraordinary example to accomplish the job.

Here we go !!


Facebook wants to buy WhatsApp, and they want to send the agreement over the internet, but they want the agreement to be confidential.

hash vs -MAC  1

Now both Facebook and WhatsApp have a shared secret key (K) that no one else knows about. So Facebook will encrypt the agreement using an encryption algorithm and the shared secret key (K).

hash vs -MAC  2

WhatsApp, on the other side, will decrypt the message using the same shared secret key, and everyone is happy. Since the same key is used for encryption and decryption, we call this (Symmetric Encryption).

hash vs -MAC  4

Now, what if some third party tries to change some bits during the transmission of the encrypted agreement? This third party will not be able to see the content of the agreement, because it does not know the encryption key (K), but it can change a couple of bits.

hash vs -MAC  5

So now when WhatsApp tries to decrypt the modified message, they may end up with a funny output 🙂 Now WhatsApp thinks that the offer is 229 Billion.

hash vs -MAC  6
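This kind of tampering is easy to demonstrate with a toy XOR stream cipher (illustrative only, not a real cipher; the message and key are made up): flipping bits of the ciphertext flips the same bits of the decrypted plaintext, without the attacker ever learning the key.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream built from hashed counter blocks — for
    # illustration only, never use this as a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same operation encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared secret key"
ciphertext = xor_crypt(key, b"OFFER 19 BILLION")

# The attacker cannot read the ciphertext, but XOR-ing one byte with
# ('1' XOR '2') turns the decrypted '1' into '2'.
tampered = bytearray(ciphertext)
tampered[6] ^= ord("1") ^ ord("2")

print(xor_crypt(key, bytes(tampered)))  # b'OFFER 29 BILLION'
```

Encryption alone hid the offer but did nothing to protect it from modification, which is exactly the problem the next slides address.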


So how to protect the integrity of the agreement during transmission?

hash vs -MAC  7

The answer is hash functions. A hash function takes data of any size and produces a fixed-size output. It is computationally infeasible to take the output of the hash function and reproduce the message again. This is why we call it a one-way function.

hash vs -MAC  8

The other property of a hash function is collision resistance: it is very hard to generate two different messages that produce the same hash output. Hashing is also deterministic, which means that no matter how many times you hash the same message, the output will always be the same.

hash vs -MAC  9

Any simple change in the input message will produce a completely different hash output.

hash vs -MAC  10
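These three properties (fixed-size output, determinism, and the avalanche effect) are easy to see with Python's hashlib, shown here with SHA-256 as an assumed example:

```python
import hashlib

h1 = hashlib.sha256(b"WhatsApp offer: 19 billion").digest()
h2 = hashlib.sha256(b"WhatsApp offer: 19 billion").digest()
h3 = hashlib.sha256(b"WhatsApp offer: 29 billion").digest()

assert len(h1) == 32  # fixed-size output (256 bits), whatever the input length
assert h1 == h2       # deterministic: same input, same hash
assert h1 != h3       # one changed character gives a completely different hash
```

A one-gigabyte file and a one-byte message both hash to the same 32-byte length, which is what makes hashes practical as compact fingerprints.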

So now Facebook will do things differently. It will start with encryption to ensure confidentiality.

hash vs -MAC  11

It will also compute the hash  of the message to ensure integrity.

hash vs -MAC  13

Both are to be sent to WhatsApp.

hash vs -MAC  14

Now WhatsApp will decrypt the message using the shared secret key. To ensure that the message was not changed in transmission, it will also compute the hash of the received message and compare the value with the hash value sent by Facebook. If both values are equal, then everything is okay.

hash vs -MAC  15

I hope you enjoyed the cool presentation. Keep in mind that there is a lot more to be said here. For example, you should use MAC techniques to authenticate the sender, in addition to just a hash function.

Download the slides

Feel free to use the slides. Download them : Hash Function Simplified

Cryptography Fundamentals

Hi everyone,

It can be so challenging to describe cryptography to normal people, or even to your IT colleagues. I was asked to present a couple of PowerPoint slides for 20 minutes to introduce this topic to some developers, so that they would have a sense of responsibility to enhance their code security.

Crypto Slide

Download PowerPoint Slides

Please find my slides here

See Also

Deploy Offline Root CA in Windows 2012 R2 – SHA-2 Ready

Contoso Corporation decided to deploy an offline Root Certification Authority. The security team started to prepare for deploying the offline root CA. The idea is to create an Enterprise PKI infrastructure that uses advanced cryptography and supports SHA-2.

Prepare the Windows Machine

Since the root CA will be offline most of the time, it is recommended to deploy it on a virtual machine.

Machine Specifications:

  • 2 GB RAM.
  • Normal processing power.
  • 40 GB C drive at a minimum.
  • Network Card

Note: the network card is needed only during the installation of the Root CA and taking backup. After you finish deploying the Root CA, disconnect the network card and shut down the machine.

Windows Specifications:

  • Windows 2012 R2 (Standard or Datacenter) [Preferable not Windows Core].
  • Machine name : ContosoRootCA.
  • Create the following folders under the C:\ Drive
    • C:\CA Database Files\CertDB
    • C:\CA Database Files\CertLog

Lock Down the Windows Machine

It is so important to secure the Root CA server. Here is a TechNet article on the same subject that applies to all Windows Server versions. Below are my recommendations:

  • Apply Windows patches to the Root CA machine.
  • Disable CD-ROM Autoplay.
  • Disable LM and NTLMv1 authentication protocols.
  • Enable and configure inbound and outbound firewall rules  using the built-in Windows Firewall.
  • Enable Audit Object Auditing using the local security policy [This is needed to audit certain CA actions]

Enterprise PKI Root CA 3

  • I also recommend configuring the Password Policy using the local group policy. In the Local Security Policy editor, configure the password policy as per the below figure:

Enterprise PKI Root CA 5
  • Rename the local administrator account and the guest account, then disable the guest account and assign a complex password to the local administrator account. You can use the local group policy to help you enforce some of those tasks:

Enterprise PKI Root CA 6

  • After you finish installing and configuring this Root CA server, disconnect the network card and power the machine off. I also recommend exporting the VM (the OVF file if you are using VMware) to removable media or a backup tape and storing it in a very secure location.


The first thing to do is to install the Active Directory Certificate Services role and its management tools. I prefer doing the whole installation in PowerShell, as this gives me the ability to rebuild the whole environment more easily.

Since we are working with Windows 2012 R2, the PowerShell ISE editor is available to us. I prefer to open it and run all the PowerShell commands from there.

To open the PowerShell ISE, go to the Windows Search, and type PowerShell ISE

Enterprise PKI Root CA 1

Make sure that the PowerShell execution policy is set to RemoteSigned by running:

Set-ExecutionPolicy RemoteSigned

Then type:

Install-WindowsFeature Adcs-cert-Authority -IncludeManagementTools
Install-WindowsFeature RSAT-ADCS-Mgmt

Enterprise PKI Root CA 2


The CAPolicy.inf file provides Certificate Services configuration information, which is read during initial CA installation and whenever the CA certificate is renewed. The CAPolicy.inf file defines settings specific to root CAs, as well as settings that affect all CAs in the CA hierarchy.

By default, the CAPolicy.inf file does not exist when Microsoft Windows Server is installed. It should be manually created in the Windows operating system folder (%windir% folder). When Certificate Services is installed, the operating system applies any settings defined in the CAPolicy.inf file.

While still in the PowerShell ISE editor, type the following to create a file called CAPolicy with the extension .inf:

notepad c:\windows\capolicy.inf

Copy and paste the below into the capolicy.inf file you created (the values here are an example consistent with the settings used later in this post; adjust the OID and URL to your environment):

[Version]
Signature="$Windows NT$"

[PolicyStatementExtension]
Policies=InternalPolicy

[InternalPolicy]
OID=1.2.3.4.1455.67.89.5
Notice="Corporate Policy Statement"
URL=http://pki.contoso.com/cps.html

[Certsrv_Server]
RenewalKeyLength=2048
RenewalValidityPeriodUnits=20
RenewalValidityPeriod=Years
CRLPeriodUnits=26
CRLPeriod=Weeks
LoadDefaultTemplates=0
AlternateSignatureAlgorithm=0

[CRLDistributionPoint]
Empty=True

[AuthorityInformationAccess]
Empty=True

  • <RenewalKeyLength> : Only used when you renew your CA certificate.
  • <RenewalValidityPeriodUnits> and <RenewalValidityPeriod> : Specify the validity of the CA certificate.
  • <CRLPeriod> and <CRLPeriodUnits> : Specify the frequency of CRL publication, which is how often you will be bringing up the offline root CA in order to regenerate the CRL.
  • [CRLDistributionPoint] : Leave blank for a Root CA.
  • [AuthorityInformationAccess] : Leave blank for a Root CA.
  • [LoadDefaultTemplates = 0] : Prevents the CA from publishing and starting to use the default templates, which would cause undesired enrollment.
  • [AlternateSignatureAlgorithm] : When set to 1, configures the CA to use the PKCS#1 v2.1 signature format for both the CA certificate and certificate requests. Because that format is not widely supported, we will keep it at 0.
  • [PolicyStatementExtension] : The location that points to your policy documents [Certification Practice Statement (CPS) and Certificate Policy (CP)].

Note:  CAPolicy.inf Usage is defined here and here.

Install Certification Authority Services

Run the following PowerShell Command:

 Install-AdcsCertificationAuthority -CAType StandaloneRootCA `
 -CACommonName "Contoso Corporate Root CA II" `
 -CADistinguishedNameSuffix "OU=PKI,O=Contoso,C=US" `
 -KeyLength 2048 `
 -HashAlgorithmName SHA256 `
 -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" `
 -DatabaseDirectory "C:\CA Database Files\CertDB" `
 -LogDirectory "C:\CA Database Files\CertLog" `
 -ValidityPeriod Years `
 -ValidityPeriodUnits 20

Enterprise PKI Root CA 7

Configuring Certification Authority Services


Open CMD and run Certutil -CAInfo , and verify that the CA type is  Stand-alone Root CA.

Enterprise PKI Root CA 8

Run Certutil -getreg and verify the database and log locations

Enterprise PKI Root CA 9

Map Namespace of AD

Because the offline root CA is not connected to the domain and does not automatically publish the CRL to Active Directory, you must set a key in the registry to assign a variable that will point to your Active Directory namespace. To do this, at a command prompt, type the following command and then stop and start the CA service:

certutil.exe -setreg ca\DSConfigDN CN=Configuration,DC=contoso,DC=com

Where DC=contoso,DC=com is the namespace of the forest root domain. This setting is primarily required for CRLs and CA certificates (AIA) that are published in Active Directory.

This registry value sets the %6 replacement token that is required for the CRL location attribute. You have to restart the certificate services in order for changes to take effect.

CA Time Interval (Frequency) settings

Note that during the installation, we specified the validity period for the CA certificate (20 years). This is because there is no parent CA from which the validity period could be inherited. Because this CA will issue certificates in the future, there must be a way to specify the validity period for issued certificates. So run the following commands from CMD on the root CA to specify the validity period for second-tier issued certificates.

certutil -setreg ca\ValidityPeriodUnits 10
certutil -setreg ca\ValidityPeriod "Years"
net stop certsvc & net start certsvc

Now, we have to define the CRL interval, Delta CRL interval, and the CRL overlap period by running the following on the Root CA:

certutil.exe -setreg CA\CRLPeriodUnits 26
certutil.exe -setreg CA\CRLPeriod "Weeks"
certutil.exe -setreg CA\CRLDeltaPeriodUnits 0
certutil.exe -setreg CA\CRLDeltaPeriod "Days"
certutil.exe -setreg CA\CRLOverlapPeriodUnits 12
certutil.exe -setreg CA\CRLOverlapPeriod "Hours"

Notice that we have disabled the Delta CRL, because it is not recommended to enable delta CRLs on a Root CA. To learn more about best practices for setting these values, please check my previous blog post here.

Configure CDP

CDP stands for CRL (Certificate Revocation List) Distribution Point, and it tells the consumers of your digital certificates where to locate the CRL for your Root CA.

It is recommended to publish the CRL both in Active Directory and on an HTTP web site. Make sure you have a web site accessible over HTTP, so that we can throw the CRL files under that virtual directory.

First of all, use PowerShell to run the following command to empty the default CDP locations

$crllist = Get-CACrlDistributionPoint; foreach ($crl in $crllist) {Remove-CACrlDistributionPoint $crl.uri -Force};

Then use CMD to configure the new CDP locations [not from PowerShell]:

certutil -setreg CA\CRLPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%3%8%9.crl\n10:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10\n2:"

The numbers appearing in the previous command represent option values that you add up. For example, (\n2) means "Include in the CDP extension of issued certificates", while (\n10) is the combination of option 8 and option 2, so it means "Include in the CDP extension of issued certificates" and "Include in all CRLs. Specifies where to publish in the Active Directory when publishing manually". The below table lists those values:

0 : No options defined
1 : Publish CRLs to this location
2 : Include in the CDP extension of issued certificates
4 : Include in CRLs. Clients use this to find Delta CRL locations
8 : Include in all CRLs. Specifies where to publish in the Active Directory when publishing manually
64 : Publish Delta CRLs to this location
128 : Include in the IDP extension of issued CRLs

Whereas the variables appearing with (%) are another set of replacement tokens that you can find more information about here.
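The option values combine as bit flags, so the "\n10" prefix in the command above is simply options 8 and 2 added together. A tiny Python sketch of that arithmetic (the constant names are my own labels for the table above):

```python
# CDP publication option flags, from the table above
PUBLISH_CRL = 1
INCLUDE_CDP_EXTENSION = 2
INCLUDE_IN_CRLS = 4
INCLUDE_ALL_CRLS_AD = 8
PUBLISH_DELTA_CRL = 64
INCLUDE_IDP_EXTENSION = 128

# "\n10" = include in CDP extensions (2) + include in all CRLs / AD publishing (8)
assert INCLUDE_ALL_CRLS_AD + INCLUDE_CDP_EXTENSION == 10
assert INCLUDE_ALL_CRLS_AD | INCLUDE_CDP_EXTENSION == 10
```

Any combination of behaviors can be expressed the same way by summing the desired flags.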

Now run the certutil.exe -crl command from CMD to generate the new CRL and drop it, as configured, into the "C:\Windows\System32\CertSrv\CertEnroll" directory.

Configure AIA

Authority Information Access AIA locations are URLs that are added to a certificate in its authority information access extension. These URLs can be used by an application or service to retrieve the issuing CA certificate. These CA certificates are then used to validate the certificate signature and to build a path to a trusted certificate.

First of all, use PowerShell to run the following command to empty the default AIA locations

$aialist = Get-CAAuthorityInformationAccess; foreach ($aia in $aialist) {Remove-CAAuthorityInformationAccess $aia.uri -Force};

Then use CMD to configure the new AIA locations [not from PowerShell]:

certutil -setreg CA\CACertPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11\n2:"

Auditing Certification Authority Services

To enable Auditing on the Root CA server for Certificate Services operation, run the following command from CMD:

certutil -setreg CA\AuditFilter 127

Publish to Active Directory

Now, if you go to C:\Windows\System32\CertSrv\CertEnroll , you will see two files:

  • Root CA Public Key (Certificate)
  • Root CA Revocation List

Enterprise PKI Root CA 10

Copy those two files to your corporate domain controller (C:\) drive for example, and log on to your domain controller with an account that is member of both Domain Admins and Enterprise Admins. Open an elevated PowerShell and enter the following commands, using the file names for your instance. This will publish the offline root CA information to AD, just as if it were an online CA. By doing this all domain joined clients will automatically trust your root CA. If you have standalone computers, then you can import the .crt file into their trusted certificate store.

certutil.exe -dspublish -f "ContosoRootCA_Contoso Corporate Root CA II.crt" RootCA
certutil -f -dspublish "Contoso Corporate Root CA II.crl"
certutil.exe -addstore -f root "ContosoRootCA_Contoso Corporate Root CA II.crt"

The first command publishes the offline root CA certificate to the AD DS Configuration naming context. The second publishes the root CRL to Active Directory, and the third adds the root CA certificate to the local machine's trusted root store.

See Also