The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.
Here’s an excerpt:
600 people reached the top of Mt. Everest in 2012. This blog got about 11,000 views in 2012. If every person who reached the top of Mt. Everest viewed this blog, it would have taken 18 years to get that many views.
The posts we’ve provided around Configuration Manager 2012 Internet Based Client Management (IBCM) are proving very popular, with lots of comments and questions coming in. A common request is a way of provisioning certificates for clients when domain auto-enrollment is not possible. This would be the case for workgroup machines, multi-forest deployments, and scenarios where group policy processing doesn’t take place (remote machines accessing the infrastructure over VPN might be candidates).
Our approach to this at the moment is to break the deployment process up a little, but still drive as much automation as possible. So, when we want to deploy a client in this scenario, the first thing we need to do is generate a certificate for it. We’re likely to want to do this both in bulk and as one-off jobs, so we use a standard batch script which accepts the computer name as a parameter, generates a certificate request, and exports the resulting cert.
@echo off
rem Usage: ApajoveCertGen.cmd <computername>
rem The computer name passed as %1 becomes the certificate subject
set subjectname=%1

rem Create an INF request file with the specified subject name
echo Generating *.inf file for certificate request for server %subjectname%
echo ;----- CertificateRequestTemplate.inf ----- > %subjectname%.inf
echo [NewRequest] >> %subjectname%.inf
echo Subject="cn=%subjectname%" >> %subjectname%.inf
echo Exportable=TRUE >> %subjectname%.inf
echo KeyLength=2048 >> %subjectname%.inf
echo KeySpec=1 ; key exchange >> %subjectname%.inf
echo KeyUsage=0xA0 >> %subjectname%.inf
echo MachineKeySet=TRUE >> %subjectname%.inf
echo [RequestAttributes] >> %subjectname%.inf
echo CertificateTemplate="ConfigMgrClientCertificate" ; this is for Client Authentication >> %subjectname%.inf
echo SAN="DNS=%subjectname%" >> %subjectname%.inf

rem Create a binary request file from the INF
echo Generating certificate request for server %subjectname%
CertReq -New -f %subjectname%.inf %subjectname%.req

rem Submit the request to the CA and retrieve the issued certificate
echo Retrieving certificate for server %subjectname%...
CertReq -Submit -q -f -config CAServerName.FQDN.CO.UK\CA-NAME-CA %subjectname%.req %subjectname%.cer

echo Importing certificate into Local Computer store...
certreq -accept %subjectname%.cer

rem Export the cert with its private key to a password-protected PFX
echo Exporting certificate with private key...
certutil -f -p agoodpassword -exportpfx %subjectname% .\clientcerts\%subjectname%.pfx

rem Remove the cert from this machine's store now it has been exported
echo Cleaning up...
certutil -delstore "MY" %subjectname%

echo Certificate generation for server %subjectname% complete!
This script will create a ConfigMgr client cert with the name of the machine you are going to deploy.
To use this script against a list of machines you want to deploy, you would run the following from a command prompt:
For /f %I in (mylistofmachines.txt) do ApajoveCertGen.cmd %I
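Note that this is the interactive command prompt form. If you drive it from inside another batch file instead, cmd requires the percent signs doubled and a call, so the equivalent line would be:
for /f %%I in (mylistofmachines.txt) do call ApajoveCertGen.cmd %%I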
This will result in a certificate for each machine name being generated and stored in the \clientcerts folder.
So, we now have a load of certificates. We’ll also need to export the Trusted Root certificate; this can be obtained from the computer’s Trusted Root Certification Authorities store and exported as a .cer file.
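If you’d rather script that step too, one option (a sketch, reusing the CA -config string from the request script above) is to pull the CA certificate straight from the CA with certutil:
certutil -config "CAServerName.FQDN.CO.UK\CA-NAME-CA" -ca.cert MyTrustedRoot.cer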
How to deploy the client?
We need a folder structure with a copy of the ConfigMgr client binaries.
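Something like the layout below works; the folder and script names here are just our example:
\ConfigMgrDeploy
    client\           (ccmsetup.exe and the rest of the client binaries)
    clientcerts\      (the per-machine .pfx files generated above)
    MyTrustedRoot.cer
    InstallClient.cmd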
Our clientcerts folder is in here too, as is the batch file for certificate and client install, which looks like this:
@echo Adding Trusted Root Certificate
certutil -addstore -f "ROOT" "%~dp0MyTrustedRoot.cer"
@echo Import Client Certificate
certutil -p agoodpassword -importpfx "%~dp0clientcerts\%computername%.pfx"
@echo Install ConfigMgr Client
"%~dp0client\ccmsetup.exe" /source:%~dp0client /mp:myserver.fqdn.co.uk /usePKICert /NOCRLCheck SMSSITECODE=ZZZ CCMHOSTNAME=sccmserver.fqdn.co.uk CCMHTTPSSTATE=31
When executed this imports the Trusted Root cert, imports the client cert we created above and then installs the ConfigMgr client. You’ll likely want to pass additional parameters to the client installation, but this is a good place to start.
The past six months have been quite hectic and my blog output has suffered somewhat. The primary reason for this is our establishment of Apajove, our new UK System Center consultancy. Since July 2011 we have been working hard to establish the company and deliver some excellent technical projects for our customers across various scales and sectors.
Apajove is growing as our workload increases, particularly with the release of the 2012 iteration of System Center. We are working with a few companies as early adopters of the System Center Suite, some of our findings from those projects are appearing here and at the Apajove blog.
There’s more detail over at Apajove.com. I will also be blogging there from here on, along with Andy, Shaun and Ben, and the others joining our technical team.
Under SCCM 2007, Native Mode was a bit of a pain. You couldn’t mix and match HTTP- and HTTPS-enabled clients in one site, so even where you didn’t need HTTPS-level security you had to have it, and there was always a client with a certificate issue somewhere.
So, with Configuration Manager 2012 we’re moving on significantly. Native mode is no more and everything got much simpler. A site can now serve HTTP and HTTPS based clients, the site and site systems also individually understand if a client is Internet or Intranet based and can be configured to respond to one or the other or both.
Here the site is configured:
This week we’ve deployed a few hundred SCCM 2012 RC2 clients as a test bed.
The majority of the clients we’re managing at this customer are purely Internet based, with no access into the core network at any time. We’re having to manually provision them with the requisite certificates (more of which in another post), following which the client is installed using some of the nice new switches we have on the ccmsetup command line:
ccmsetup /usePKICert /NOCRLCheck /mp:https://ServerPublicFQDN.co.uk SMSSITECODE=AAA CCMHOSTNAME=ServerPublicFQDN.co.uk
/usePKICert tells the client to load the certificate
/NOCRLCheck tells the client not to try to find a certificate revocation list (CRL) for the client download (this applies to the client download from the MP only; CRL checking will be enabled for client-to-site communications unless specifically disabled in the site properties dialogue box above.)
CCMHOSTNAME just tells the machine where its internet based MP is.
When the client is installed the control panel applet knows how the client is accessing the infrastructure:
This one is on the internet and is happy about it.
Our client has joined a collection and gets an app, so we can see end-to-end that it’s working.
The app downloads and is installed. The DataTransferService log confirms the https connection (not that it could be working any other way, but it’s nice to see!)
We did a few other cool things with the solution. Here’s a screenshot of the console with the clients reporting in:
We deployed System Center Endpoint Protection:
SCEP is pretty cool now. The SCEP agent is policy-based, so as the client performs its first policy check upon installation, it is force-fed the SCEP client. No need to join a collection or submit inventory or any of those delays, straight in with the anti-malware! (It’s a shame it thinks VNC is a virus though).
So in rambling conclusion… SCCM 2012 RC2 IBCM = good. SCEP = good, everything = good.
So, Beta 2 of SCCM shipped to the web earlier today. I was in the keynote presentation at MMS when this was announced, so what better time to kick the new release’s tyres…
I am installing Beta 2 in our UK datacentre whilst I’m in MMS sessions in Las Vegas, so this will mainly be a quick screenshot runthrough with initial observations. More to come later…
Welcome splashscreen, looks good, plenty of options.
Have to install .NET 3.5 and 4.0, then we can proceed
For now we’re going to use a single site configuration; in a live deployment a Central Administration Site would be required:
We still have the familiar update download for external components:
BUT… there are only 13 of them now (an improvement over the 89 required for CM07):
NB: the updates include SQL Express and .NET, so it does take a little while…
Hey, the database has lost its default SMS_ prefix:
No more Native Mode, site now supports both HTTP and HTTPS.
This is cool. We can enable/disable the DP and MP roles for the site during setup and specify the HTTP/S protocol.
A few pre-reqs to fix
Odd final screen… But we begin!
And we’re ready to go. Next we’ll actually try to get it to do something!
Having taken a quick look at AVIcode yesterday, let’s take the gloves off and get a bit more hands on…
In this example we’ll be implementing AVIcode monitoring of a project management application, ProLog. Sadly, ProLog is something Orinoko developed in-house, and the code quality leaves something to be desired. We have forced all staff to use this application for everything they do, so the Service Manager queue is building up!
In the Authoring Pane of the OpsMgr console I use the Add Monitoring Wizard to create a new Enterprise ASP.NET Application:
The wizard locates all the ASP apps on all of the servers I’m monitoring (this is an option in the AVIcode setup, if you wish you can limit discovery to certain machines).
The rest of this process is covered very well by Simon over on System Center Central here so I’ll skip on to what we get once it’s all up and running…
Each of our web pages gets its own dashboard under the Management Pack created for the .NET Application:
We also get a visual view of the application in a Distributed App:
I’m using the ProLog application to run a report of all the projects we’re working on currently.
I log in and select Reports – Overview
Nothing much happens, then eventually I receive a spurious error in Internet Explorer.
Over in OpsMgr I see data corresponding to this:
Behind the warning is significant data (still in the OpsMgr console):
You may notice that this is a Slowest Nodes alert showing that the page took 76672ms to render, 62626ms of which was spent in the ReportExecution2005.asmx function. This is information I can then use to troubleshoot why this function of the application isn’t performing as it should.
Along with this data I get an excellent dashboard overview of application performance:
And if this isn’t enough, I can get deep into the code in the AVIcode Intercept Studio web console:
The top-level dashboard for each app collates relevant information for monitored processes. I can then drill into extraordinary detail around each transaction carried out against the application:
And drill deeper still into each individual event:
AVIcode is a massive product, with plenty of complexity and capability. Integrated with Operations Manager, it gives the OpsMgr guys deep performance and alerting knowledge of .NET applications, and delivers even deeper intelligence for the .NET guys to optimise and troubleshoot poor performance and application failures, all monitored in real time. Great fun!
Here in the Orinoko ivory tower we’re too busy mixing our metaphors to let the grass grow under our feet. And so to AVIcode.
According to the product homepage “AVIcode delivers market-leading .NET application performance monitoring capabilities to help ensure the availability of business-critical applications and services, regardless of where they are deployed. End-user experience and application performance monitoring are critical in virtual datacenters and cloud environments…”
I couldn’t agree more. In case the marketing is a bit of a blur for you, AVIcode does a number of very cool things:
- It provides extraordinarily detailed and intelligent monitoring of web applications.
- Jump-to-code debugging of .NET applications.
- Lots of graphs. Lots and lots of graphs.
All of this stuff is fully, and very elegantly, integrated into the Operations Manager console. To illustrate the concept, join me for a cocktail…
I am monitoring the OpsMgr Web Console to make sure that it’s always available for my busy admins. I have a good view of the health of the application from a datacentre and IIS perspective.
I do not, however, have any view of the end-user experience.
In the recent past, we would have provided an end-user perspective by deploying an OpsMgr agent to a machine outside of the datacentre and had it carry out a synthetic web session against the application. This works great, and can give us some really useful telemetry, but it’s still a little limited.
AVIcode provides a much more elegant solution to client-side monitoring by injecting some client-side code into page requests which then deliver telemetry data back to the monitoring infrastructure. This gives us real-time views of the performance of the application from the end-user’s perspective and can alert us to performance problems and failures and provide detailed information around the cause of those failures. It creates a Distributed App to illustrate this:
So if a user is getting an error, we can be alerted to this…
…and given code-level information around the cause of the problem.
Next time we’ll have fun with graphs…
As anyone who has the indescribable pleasure of working or living with me will unerringly attest, I am a fully paid up member of the new computing paradigm club. Every industry development is greeted with glee at the Quirkshop and I will gladly flit from vendor to vendor pursuing computing excellence in whatever form it takes.
Admittedly I have been a VDI sceptic in my time; I’ve been pretty much universally Microsoft-focused for my entire IT career, have never dabbled with vegetarianism, never spent a year on a kibbutz, don’t understand dance music, can’t watch anything with the word “celebrity” in its title, and think that most green vegetables are an affront to humanity.
The above brings me jarringly to the reason for my breathless excitement. Here at Orinoko we’ve been using version one of Microsoft’s first cloud offering, BPOS, since we started the company, and we have just migrated onto the next version of this solution, Office 365 (currently a beta). Now, as mentioned above, I’m big into all this “cloud” stuff. I may have suggested on occasion that clouds consist of vapour, but that was just rum-fuelled banter.
Office 365 gives us access to Lync, including Lync-to-Lync voice, which is very cool. It gives us very highly available Exchange 2010 and SharePoint 2010 too. As a small business, to run these systems on-premise would be costly in every regard, so to my luddite eyes the cloud solution is like voodoo.
Bolstered by my positive experience with Office 365, I have dipped my toe into Azure infrastructure services. Frankly, I find the whole thing baffling: it just works.
Why didn’t we do this a hundred years ago? I spent a brief and misguided few weeks working for a Microsoft Small Business Server partner many years ago. The sort of system we would spend a fortnight implementing for a few thousand pounds can be had for literally cents per hour for compute and single-digit pounds per user per month. Admittedly I can see how the costs might rack up (no pun intended), but this stuff just seems like magic.
Finally, Intune. Now, as a Systems Management Guy ™ I realise that Intune is lacking certain features we currently demand from our management solutions, in particular software distribution. BUT. If you currently have nothing in place for systems management, or if you have machines that live outside your corporate LAN for most of their lives and you want to keep them patched and secured and be assured that they’re not suffering from basic performance issues, and if you want to manage the licenses for the software already deployed on them, Intune is nothing short of fantastic.
And the monthly per-device charge includes an upgrade to Windows 7 Enterprise!
I have seen the future and it’s vaporised! Some of this stuff has a little way to go, but if the cloud model didn’t fit your organisation the last time you looked at it, it’s time to look again.
The Orinoko Datacentre is running low on RAM. More RAM is the obvious answer, but the nodes we’re running on are looking a little weedy now, so we may be better off just replacing the whole thing, but in the meantime how to squeeze a little more of a return from our investment?
The answer is Server 2008 R2 SP1, because our main limiting factor is RAM. We do, admittedly, also have some issues with storage performance: disk queue lengths are substantially longer than I would like on the hosts, and our ailing lab NAS doesn’t support jumbo frames, poor thing. But RAM availability is our main issue.
As you’re all doubtless aware, SP1 (which is a Release Candidate at the moment) has an excellent new feature, Dynamic Memory. Briefly, in case you’ve been living in a cave, Dynamic Memory allows a VM to request and release memory to the host as load changes. This potentially allows you to over-commit a host, something we could do with at the moment. For Orinoko this is likely a good fit, as we run more virtual desktop OSEs than server OSEs on account of our application packaging function. Our workstations need to have a minimum of 1GB allocated, but many are unused for lengths of time and many more could likely run with between 512 and 768MB of RAM, freeing up a chunk for allocation elsewhere.
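To put some rough, purely illustrative numbers on that: if twenty mostly idle packaging VMs can each drop from their 1024MB allocation to an average of 640MB, that hands roughly 7.5GB back to the host for allocation elsewhere.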
So, deep breath, although we use Office 365 for our critical business systems, some of this stuff is live…
I shut down as many VMs as I can and put Node 1 into maintenance mode in VMM. This evacuates the rest of the running load onto Node 2, then:
Although it says it may take an hour or more, it only takes 45 minutes in this instance.
Straight into Hyper-V Manager to see the Dynamic Memory bits on one of our App-V Sequencing machines:
Now I take Node 1 out of maintenance mode (MM). I put Node 2 into MM, which evacuates the running loads over to the newly service-packed Node 1. Install SP1 on Node 2 and I’m nearly done.
Virtual Machine Manager 2008 R2 SP1 (RC1)
We use Virtual Machine Manager to handle our datacentre for Live Migration, Library services and for PRO integration with OpsMgr. VMM 2008 R2 needs a Service Pack to expose the new memory gubbins:
Only little things. We are running both FCS and FEP (Forefront Client Security and the newer Forefront Endpoint Protection); the older version, FCS, isn’t supported:
So must be replaced with the new version. Apart from that, it all went very smoothly.
I have a Reporting Services Point, I have R3, so where are my reports?
Answer: you have to import them. Perhaps slightly illogically, this is via the same interface you used to copy the reports from ConfigMgr to SRS in the first place:
In the import wizard, rather than the default of Import existing Reports select the other option:
The cab file is installed in InstallDIR\Reports\Power Management\MicrosoftReportPack.cab
Hurray, more reports!