Having taken a quick look at AVIcode yesterday, let’s take the gloves off and get a bit more hands-on…
In this example we’ll be implementing AVIcode monitoring of a project management application, ProLog. Sadly, ProLog is something Orinoko developed in-house, and the code quality leaves something to be desired. We have forced all staff to use this application for everything they do, so the Service Manager queue is building up!
In the Authoring Pane of the OpsMgr console I use the Add Monitoring Wizard to create a new Enterprise ASP.NET Application:
The wizard locates all of the ASP.NET apps on all of the servers I’m monitoring (this is an option in the AVIcode setup; if you wish, you can limit discovery to certain machines).
The rest of this process is covered very well by Simon over on System Center Central here, so I’ll skip on to what we get once it’s all up and running…
Each of our web pages gets its own dashboard under the Management Pack created for the .NET Application:
We also get a visual view of the application in a Distributed App:
I’m using the ProLog application to run a report of all the projects we’re working on currently.
I log in and select Reports – Overview
Nothing much happens, then eventually I receive a spurious error in Internet Explorer.
Over in OpsMgr I see data corresponding to this:
Behind the warning is significant data (still in the OpsMgr console):
You may notice that this is a Slowest Nodes alert showing that the page took 76672ms to render, 62626ms of which was spent in the ReportExecution2005.asmx call. This is information I can then use to troubleshoot why this function of the application isn’t performing as it should.
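To make that arithmetic concrete, here’s a minimal sketch (in Python, and emphatically not AVIcode’s implementation) of how per-node timings roll up into a Slowest Nodes style summary. The two figures are the ones from the alert above; the second node’s label and the threshold are invented for illustration:

```python
# Hypothetical sketch of a "Slowest Nodes" roll-up. The 76672ms total and
# the 62626ms web-service figure come from the alert above; the remainder
# bucket, its label and the threshold are invented for illustration.
TOTAL_MS = 76672  # total page render time from the alert

node_timings_ms = {
    "ReportExecution2005.asmx": 62626,                  # from the alert
    "page rendering + other calls": TOTAL_MS - 62626,   # invented remainder
}

ALERT_THRESHOLD_MS = 15000  # hypothetical per-page tolerance

def slowest_nodes(timings: dict[str, int], total: int) -> list[str]:
    """Describe each node's share of the total response time, slowest first."""
    ranked = sorted(timings.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{node}: {ms}ms ({ms / total:.0%} of total)" for node, ms in ranked]

if TOTAL_MS > ALERT_THRESHOLD_MS:
    print(f"ALERT: page took {TOTAL_MS}ms (threshold {ALERT_THRESHOLD_MS}ms)")
    for line in slowest_nodes(node_timings_ms, TOTAL_MS):
        print("  " + line)
```

Run it and the web-service call comes out at roughly 82% of the total, which is exactly the needle the alert points you at.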
Along with this data I get an excellent dashboard overview of application performance:
And if this isn’t enough, I can get deep into the code in the AVIcode Intercept Studio web console:
The top-level dashboard for each app collates relevant information for monitored processes. I can then drill into extraordinary detail around each transaction carried out against the application:
And drill deeper still into each individual event:
AVIcode is a massive product, with plenty of complexity and capability. Integrated with Operations Manager, it gives the OpsMgr guys deep performance and alerting knowledge of .NET applications, and delivers even deeper intelligence for the .NET guys to optimise and troubleshoot poor performance and application failures, all monitored in real time. Great fun!
Here in the Orinoko ivory tower we’re too busy mixing our metaphors to let the grass grow under our feet. And so to AVIcode.
According to the product homepage “AVIcode delivers market-leading .NET application performance monitoring capabilities to help ensure the availability of business-critical applications and services, regardless of where they are deployed. End-user experience and application performance monitoring are critical in virtual datacenters and cloud environments…”
I couldn’t agree more. In case the marketing is a bit of a blur for you, AVIcode does a number of very cool things:
- Extraordinarily detailed and intelligent monitoring of web applications.
- Jump-to-code debugging of .NET applications.
- Lots of graphs. Lots and lots of graphs.
All of this stuff is fully, and very elegantly, integrated into the Operations Manager console. To illustrate the concept, join me for a cocktail…
I am monitoring the OpsMgr Web Console to make sure that it’s always available for my busy admins. I have a good view of the health of the application from a datacentre and IIS perspective.
I do not, however, have any view of the end-user experience.
In the recent past, we would have provided an end-user perspective by deploying an OpsMgr agent to a machine outside of the datacentre and had it carry out a synthetic web session against the application. This works great, and can give us some really useful telemetry, but it’s still a little limited.
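For the curious, a synthetic session boils down to something like the Python sketch below, using only the standard library. The URL and threshold are placeholders, and a real OpsMgr web application monitor is configured in the console rather than scripted like this:

```python
# Minimal synthetic web transaction: fetch the page an end user would hit,
# time it, and flag failures or slow responses. URL and threshold are
# placeholders, not anything OpsMgr-specific.
import time
import urllib.error
import urllib.request

URL = "http://opsmgr.example.com/OperationsManager"  # hypothetical web console URL
SLOW_MS = 5000  # hypothetical "too slow" threshold

def probe(url: str) -> None:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()  # pull the whole body, as a browser would
            status_code = resp.status
    except urllib.error.URLError as err:
        print(f"FAIL: {url} unreachable ({err.reason})")
        return
    elapsed_ms = (time.perf_counter() - start) * 1000
    verdict = "SLOW" if elapsed_ms > SLOW_MS else "OK"
    print(f"{verdict}: {url} returned {status_code} in {elapsed_ms:.0f}ms")

probe(URL)
```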
AVIcode provides a much more elegant solution to client-side monitoring by injecting client-side code into each page, which then delivers telemetry data back to the monitoring infrastructure. This gives us a real-time view of the application’s performance from the end-user’s perspective, alerts us to performance problems and failures, and provides detailed information around the cause of those failures. It creates a Distributed App to illustrate this:
So if a user gets an error, we can be alerted to this…
…and given code-level information around the cause of the problem.
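If you’re wondering what “telemetry delivered back to the monitoring infrastructure” amounts to, here’s a toy Python collector standing in for the receiving end. The beacon payload shape, the port and the threshold are all invented for illustration and bear no relation to AVIcode’s actual wire format:

```python
# Toy telemetry collector: the receiving half of client-side monitoring.
# Injected page script would POST a small JSON beacon here; we flag errors
# and slow renders. The payload shape and threshold are invented.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SLOW_RENDER_MS = 3000  # hypothetical client-side render threshold

class BeaconHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        beacon = json.loads(self.rfile.read(length))
        # e.g. {"page": "/Reports/Overview", "renderMs": 76672,
        #       "error": "HTTP 500 from ReportExecution2005.asmx"}
        if beacon.get("error"):
            print(f"ALERT {beacon['page']}: {beacon['error']}")
        elif beacon.get("renderMs", 0) > SLOW_RENDER_MS:
            print(f"SLOW {beacon['page']}: {beacon['renderMs']}ms")
        self.send_response(204)  # fire-and-forget: client needs no body back
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), BeaconHandler).serve_forever()
```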
Next time we’ll have fun with graphs…
As anyone who has the indescribable pleasure of working or living with me will unerringly attest, I am a fully paid-up member of the new computing paradigm club. Every industry development is greeted with glee at the Quirkshop and I will gladly flit from vendor to vendor pursuing computing excellence in whatever form it takes.
Admittedly, I have been a VDI sceptic in my time. I’ve been pretty much universally Microsoft-focused for my entire IT career, have never dabbled with vegetarianism, never spent a year in a kibbutz, don’t understand dance music, can’t watch anything with the word “celebrity” in its title and think that most green vegetables are an affront to humanity.
The above brings me jarringly to the reason for my breathless excitement. Here at Orinoko we’ve been using version one of Microsoft’s first cloud offering, BPOS, since we started the company, and we have just migrated to the next version of this solution, Office 365 (currently in beta). Now, as mentioned above, I’m big into all this “cloud” stuff. I may have suggested on occasion that clouds consist of vapour, but that was just rum-fuelled banter.
Office 365 gives us access to Lync, including Lync-to-Lync voice, which is very cool. It gives us very highly available Exchange 2010 and SharePoint 2010 too. As a small business, running these systems on-premises would be costly in every regard, so to my luddite eyes the cloud solution is like voodoo.
Bolstered by my positive experience with Office 365, I have dipped my toe into Azure infrastructure services. Frankly, I find the whole thing baffling: it just works.
Why didn’t we do this a hundred years ago? I spent a brief and misguided few weeks working for a Microsoft Small Business Server partner many years ago. The sort of system we would spend a fortnight implementing for a few thousand pounds can be had for literally cents per hour for compute and single-digit pounds per user per month. Admittedly I can see how the costs might rack up (no pun intended), but this stuff just seems like magic.
Finally, Intune. Now, as a Systems Management Guy ™ I realise that Intune is lacking certain features we currently demand from our management solutions, in particular software distribution. BUT. If you currently have nothing in place for systems management, or if you have machines that live outside your corporate LAN for most of their lives and you want to keep them patched and secured and be assured that they’re not suffering from basic performance issues, and if you want to manage the licenses for the software already deployed on them, Intune is nothing short of fantastic.
And the monthly per-device charge includes an upgrade to Windows 7 Enterprise!
I have seen the future and it’s vaporised! Some of this stuff has a little way to go, but if the cloud model didn’t fit your organisation the last time you looked at it, it’s time to look again.
The Orinoko Datacentre is running low on RAM. More RAM is the obvious answer, but the nodes we’re running on are looking a little weedy now, so we may be better off just replacing the whole thing. In the meantime, though, how do we squeeze a little more return from our investment?
The answer is Server 2008 R2 SP1. The reason this is the answer is that our main limiting factor is RAM. We do, admittedly, also have some issues with storage performance: disk queue lengths are substantially longer than I would like on the hosts, and our ailing lab NAS doesn’t support jumbo frames, poor thing. But RAM availability is our main issue.
As you’re all doubtless aware, SP1 (a Release Candidate at the moment) has an excellent new feature: Dynamic Memory. Briefly, in case you’ve been living in a cave, Dynamic Memory allows a VM to request and release memory to the host as load changes. This potentially allows you to over-commit a host, something we could do with at the moment. For Orinoko this is likely a good fit, as we run more virtual desktop OSEs than server OSEs on account of our application packaging function. Our workstations need a minimum of 1GB allocated, but many sit unused for lengths of time, and many more could likely run with between 512MB and 768MB of RAM, freeing up a chunk for allocation elsewhere.
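A quick back-of-envelope sketch in Python shows why this matters for us. The 1GB static allocation and the 512MB idle figure are ours from above; the host size, the busy-VM ceiling and the VM counts are invented for illustration:

```python
# Back-of-envelope: static 1GB-per-VM allocation versus Dynamic Memory,
# where idle workstation VMs shrink to 512MB and only busy ones balloon
# back up. Host size and VM counts are invented examples.
HOST_RAM_MB = 16384       # hypothetical node
STATIC_PER_VM_MB = 1024   # what we allocate per workstation today
IDLE_DYNAMIC_MB = 512     # what an idle VM can fall back to under DM
BUSY_DYNAMIC_MB = 1024    # what a busy VM balloons back up to

vms = 14   # hypothetical workstation VMs on the node
busy = 4   # only a few are sequencing at any one time

static_demand = vms * STATIC_PER_VM_MB
dynamic_demand = busy * BUSY_DYNAMIC_MB + (vms - busy) * IDLE_DYNAMIC_MB

print(f"Static allocation:   {static_demand}MB of {HOST_RAM_MB}MB")
print(f"Dynamic allocation:  {dynamic_demand}MB of {HOST_RAM_MB}MB")
print(f"Freed for new loads: {static_demand - dynamic_demand}MB")
```

With these made-up numbers, the same fourteen workstations drop from 14336MB to 9216MB of demand, which is a spare 5GB for allocation elsewhere.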
So, deep breath, although we use Office 365 for our critical business systems, some of this stuff is live…
I shut down as many VMs as I can and put Node 1 into maintenance mode in VMM. This evacuates the rest of the running load onto Node 2, then:
Although it says it may take an hour or more, it only takes 45 minutes in this instance.
Straight into Hyper-V Manager to see the Dynamic Memory bits on one of our App-V Sequencing machines:
Now I take Node 1 out of maintenance mode (MM) and put Node 2 into MM, which evacuates the running loads over to the newly service-packed Node 1. I install SP1 on Node 2 and I’m nearly done.
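For those who like their workflows spelled out, here’s a toy Python model of that node-by-node hop. It models the workflow only; the VM names are invented, and the real evacuation is VMM’s live migration, not code like this:

```python
# Toy model of the rolling SP1 upgrade: maintenance mode drains a node's
# running VMs onto its partner, the empty node is patched, then the
# process repeats from the other side. VM names are invented.
hosts = {"Node1": ["AppV-Seq01", "AppV-Seq02"], "Node2": ["DC01", "SQL01"]}

def enter_maintenance_mode(node: str, partner: str) -> None:
    """Drain every running VM off `node` onto `partner`."""
    while hosts[node]:
        vm = hosts[node].pop()
        hosts[partner].append(vm)
        print(f"Live migrating {vm}: {node} -> {partner}")

def install_sp1(node: str) -> None:
    assert not hosts[node], "never patch a node that still carries load"
    print(f"Installing SP1 on {node}")

enter_maintenance_mode("Node1", partner="Node2")
install_sp1("Node1")
enter_maintenance_mode("Node2", partner="Node1")  # back the other way
install_sp1("Node2")
```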
Virtual Machine Manager 2008 R2 SP1 (RC1)
We use Virtual Machine Manager to handle our datacentre: Live Migration, Library services and PRO integration with OpsMgr. VMM 2008 R2 needs a Service Pack to expose the new memory gubbins:
Only little things. We are running both FCS and FEP (Forefront Client Security and the newer Forefront Endpoint Protection); the older version, FCS, isn’t supported:
So it must be replaced with the new version. Apart from that, it all went very smoothly.