Published applications are a great way to deliver applications to your organization's users. Essentially, when you publish an application you are virtualizing it. Yes, you heard me right. The application is deployed to a server, virtualized, and then access to just that application is granted to a user. Not convinced that this is something you need yet?
In this article, we will take a close look at why virtual applications can be beneficial. Then we will look at some real-world examples where published applications create business success. We will conclude with the monitoring challenges that come with published applications, and how the right monitoring can overcome them.
Why Published Applications?
It may not be obvious at first why a virtual application should be used over a virtual desktop, or even a regular workstation for that matter. The following table highlights the considerations behind each choice to help ensure you are offering the best option to your business users.
| Administrative Considerations | End-User/Business Strategic Considerations |
|---|---|
| Fewer systems to update: update only the server that hosts the application instead of every desktop deployed across your organization. | Work from anywhere: allow business users to work from anywhere without being tied to the workstation where the software is installed. |
| High availability: remove servers from the environment for patches and application upgrades without impacting users. | Access to applications when needed: on-demand application access anytime from mobile devices, tablets, thin clients, laptops, etc. |
| Less overhead: depending on the application, a single server may be able to run 25+ instances. For example, a server with 8 GB of RAM and 2 vCPUs could easily run 25+ concurrent connections to a Microsoft Office application. | Secure endpoints: for systems that are close to the customer, only the necessary applications are exposed, not a whole desktop. |
| Disaster recovery: organizations are looking for ways to achieve business continuity in a disaster. Published applications installed and configured at your disaster recovery site can give your technical team the tools they need to bring your environment back online, and can give business users access to critical systems so the organization can continue to generate revenue even when parts of its computer systems are down. | |
Now take some of these considerations and start thinking about how they apply to your organization. Ask yourself questions that will help you determine how published applications can help your user base work more successfully. Think about how the business functions. Do you have applications that are complex to maintain? If so, would there be a benefit to updating them only on the handful of servers where they are installed, instead of on every workstation in the enterprise? Do you have systems your customers can access, where it would be better to provide a locked-down application instead of a whole desktop? I suspect that as you think through these options, you will relate to these scenarios and many others.
What Type of Cost Savings Can You Expect?
When deploying your applications as published applications, you can expect significant cost savings. With published applications, as with RDSH, you are not deploying a full operating system for each user. In addition, published apps enjoy the savings that come with user concurrency and resource sharing. But much of the true value comes from the time saved by centrally deploying and updating applications, and from the simplicity of access for organizational users.
Let me explain further, in the context of a real-world example from a healthcare enterprise leveraging published applications. The organization has 10,000 employees, and 6,000 of them need to access the electronic health record (EHR) application at some point during their day. The existing deployment has the EHR application installed on every desktop in the enterprise. Maintenance and updates to the application are extremely time consuming, and not every workstation needs the application, so a lot of time is being wasted.
To resolve this, the enterprise worked with the EHR vendor to determine whether the application could run as a published application, and what the server-side resource requirements would be to run it from the data center instead of on every workstation in the environment. This exercise determined that only 50 virtual servers with the application installed were needed to support the enterprise, instead of installing and updating the application on every workstation. Additional savings can be realized by deploying to thin clients instead of full PCs, which is simple to do with published applications.
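As a back-of-the-envelope check, the server count falls out of a simple capacity calculation. The sketch below uses assumed inputs (6,000 EHR users with roughly one third logged in at peak, and 40 concurrent sessions per server, matching the sizing table later in this article); your vendor's own sizing guidance should always take precedence.

```python
import math

def servers_needed(total_users: int, peak_concurrency_ratio: float,
                   sessions_per_server: int) -> int:
    """Estimate the virtual servers needed to host a published application."""
    peak_sessions = math.ceil(total_users * peak_concurrency_ratio)
    return math.ceil(peak_sessions / sessions_per_server)

# Assumed: 6,000 EHR users, ~1/3 concurrent at peak, 40 sessions per server.
print(servers_needed(6000, 1/3, 40))  # → 50
```

The key variable is the peak concurrency ratio; measure it from your own login data rather than guessing, since it drives the whole estimate.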
Also, after initial application deployment and testing with users, performance was great. In fact, because a published application can appear on the Start menu or desktop of a Windows client, users didn't notice any real difference: the application functioned as if it were installed directly on the workstation, even though it was really running on a server in the data center.
Published Application Server Resources
Below is a brief sample table of a few applications and the resources it would take to run each as a published application on a virtual server. Keep in mind that some organizations may choose to run multiple published applications on one server, but to keep this topic simple we cover the resources needed to run a single application per server, for just a few key applications.
| Published Application (on a dedicated virtual server) | Concurrent Sessions per Server | vCPU per Server | Memory per Server | Disk Space per Server |
|---|---|---|---|---|
| EHR application ** | 40 | 8 | 32 GB | 100 GB |
| Microsoft Office ** | 25 | 4 | 12 GB | 100 GB |
| Adobe ** | 25 | 4 | 12 GB | 100 GB |

** Resource allocations are approximate and can vary depending on the version of the application you are running and, for the EHR line, on the vendor you choose. These figures assume the applications are running on Windows Server 2016.
Comparison (Published Application server vs Standalone PC vs RDSH)
Now that we can see what the server resources look like for a few published applications, let's compare them to the resources you might end up purchasing for standalone PCs. I also included RDSH, which we covered recently, so you can analyze from that perspective as well. If you are interested in learning more about RDSH, the post can be found here.
Reviewing the table below, we can see that if you selectively choose the applications you want to publish, the server resources required are much lower than for either of the other options highlighted here. Keep in mind that even though published applications run from a server in the data center, they can be presented to a standalone PC, a thin client, or a virtual workstation. These options create a lot of flexibility for your enterprise deployments.
| Deployment Option | Concurrent Sessions | vCPU | Memory | Disk Space |
|---|---|---|---|---|
| Published Application Server | 25 concurrent sessions | 8 cores | 32 GB | 500 GB ** |
| Standalone PCs | 20 individual computers | 8 cores × 20 PCs = 160 cores | 12 GB × 20 PCs = 240 GB | 500 GB × 20 PCs = 10,000 GB (≈ 9.8 TB) |
| RDSH Server Deployed Workstation *** | 20 concurrent desktop sessions | 32 cores | 64 GB | 500 GB ** |

** If you are keeping a copy of user profiles on the server, you may need more storage to accommodate them.

*** These numbers reflect the deployment of base workstations only. Be sure to baseline the memory and CPU for the applications your users will run; this will likely increase the base resources you need.
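The standalone-PC line items in the comparison are simple multiplications of the per-PC figures, so a quick sketch makes the totals explicit (the per-PC numbers below are taken from the table; the TB conversion uses binary units, 1 TB = 1024 GB):

```python
PCS = 20                  # standalone PCs being compared
cores = 8 * PCS           # 160 cores across the fleet
memory_gb = 12 * PCS      # 240 GB of RAM in total
disk_gb = 500 * PCS       # 10,000 GB of disk in total
disk_tb = disk_gb / 1024  # ≈ 9.77 TB using binary units

print(cores, memory_gb, round(disk_tb, 2))
```

Laying it out this way makes it easy to re-run the comparison with your own fleet size and per-seat specs.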
Monitoring Challenges that come with Published Applications in the Real World
With any deployment, published applications included, monitoring should be on your mind. When we think about monitoring, we immediately think about improving uptime, being proactive about system failure, and detecting performance issues. These are the essentials of successful monitoring and must exist. But what if your monitoring tool could also collect information to help you with internal costing and chargeback? Or help with long-term analytics and trending? For some organizations, these things aren't value-adds; they are must-haves.
Reporting and Analytics for Business Unit Chargebacks
Let's talk about that healthcare organization again, and specifically the fact that they did move forward with delivering their EHR as a published application instead of installing it on every desktop in the enterprise. Now consider their business structure. The organization supports many different medical specialties: a heart specialty team, a GI specialty team, ear-nose-throat specialists, a radiology team, a laboratory for testing, and more. Each of these specialty areas has its own budget and must cover its expenses, including its share of the EHR application it uses every day. This is not a simple task by any means.
So, is it possible to collect the necessary technical information to be able to formulate business chargeback for usage of published applications? With manual tools or even standard server monitoring, this task is extremely challenging.
Let's start with Task Manager. While this is a very basic example, it demonstrates that by default you can indeed find basic information about server resource usage and per-user usage. The challenge with Task Manager is that its data is real-time only, not historical. It would be a huge waste of time to attempt to collect usage data over time from a real-time-only view, so this option is not viable.
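What a real-time view lacks is periodic sampling with persistence, which is the core of any historical collector. Here is a minimal sketch of that idea; the metric values are stand-ins (Task Manager itself offers no API for this), and a real collector would read Windows performance counters or a monitoring agent instead.

```python
import csv
import time
from datetime import datetime, timezone

def sample_metrics():
    # Stand-in for a real collector; in practice these figures would come
    # from performance counters or a monitoring agent, not hard-coded values.
    return {"app": "EHR", "active_sessions": 38, "cpu_percent": 61.5}

def collect(path: str, samples: int, interval_s: float) -> None:
    """Append timestamped samples to a CSV so usage can be reported over time."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            m = sample_metrics()
            writer.writerow([datetime.now(timezone.utc).isoformat(),
                             m["app"], m["active_sessions"], m["cpu_percent"]])
            time.sleep(interval_s)

collect("usage_samples.csv", samples=3, interval_s=0.1)
```

The point is the shape of the solution, not the code: timestamped samples written somewhere durable are what turn a live view into a data set you can report on later.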
So what can you use? This is where monitoring tools come in, but not all monitoring tools are created equal either. Most can track server data such as CPU, memory, and disk, but for historical tracking the toolset's reporting and analytics must be able to tell exactly how long a user accessed a published application over a period of time, and the resource consumption that occurred along the way. This was a huge challenge for our healthcare organization to overcome, because as they evaluated tools, either the data existed but good reporting didn't, or the data wasn't being collected at all.
Monitoring Tool Checklist
It became very clear to the healthcare organization that in order to roll out an accurate chargeback system to its business units, they would need to collect the right technical data and then, in turn, use analytics to correlate the data samples for their reporting. Here is a checklist of the criteria the healthcare system used to ensure they could perform analytics on their data:
- Published Application Start Time
- Published Application Stop Time
- Process CPU
- GPU Utilization
- Published Application Name
- Active Sessions
- Total Sessions
- Total Users
- Memory Usage
They found that with this data, a great monitoring tool, the ability to store data for an extended period of time, and a reporting/analytics front end, they were able to successfully implement an accurate business chargeback solution for their enterprise.
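To illustrate how checklist fields like application name and start/stop times roll up into a chargeback figure, here is a minimal sketch. The session records, department names, and per-hour rate are all hypothetical, and a real solution would pull these records from the monitoring tool's database.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical session records built from the checklist fields above.
sessions = [
    {"app": "EHR", "dept": "Cardiology",
     "start": "2023-05-01T08:00:00", "stop": "2023-05-01T12:00:00"},
    {"app": "EHR", "dept": "Radiology",
     "start": "2023-05-01T09:00:00", "stop": "2023-05-01T10:30:00"},
    {"app": "EHR", "dept": "Cardiology",
     "start": "2023-05-01T13:00:00", "stop": "2023-05-01T15:00:00"},
]

RATE_PER_HOUR = 0.50  # assumed internal rate covering server resources

def chargeback(records):
    """Sum each department's session hours and convert them to a dollar figure."""
    totals = defaultdict(float)
    for r in records:
        hours = (datetime.fromisoformat(r["stop"]) -
                 datetime.fromisoformat(r["start"])).total_seconds() / 3600
        totals[r["dept"]] += hours * RATE_PER_HOUR
    return dict(totals)

print(chargeback(sessions))  # → {'Cardiology': 3.0, 'Radiology': 0.75}
```

In practice you would weight the rate by the CPU and memory figures from the checklist rather than billing on duration alone, but the aggregation pattern is the same.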
What will you do?
My recommendation on whether to move forward with published applications is to choose a few key organizational applications and set up a test deployment. I truly believe this is the best way to baseline applications and prove or disprove the administrative technical benefits and the user experience for your enterprise.
When it comes to choosing a monitoring tool, you absolutely want one that lets you troubleshoot performance issues and proactively monitor availability. Even more important, though, is finding a tool that goes above and beyond, with reporting and analytics that help you do complicated data analysis in a simple way.