Neil: Let me add my welcome as well. I'm always very appreciative when people come and tune in to listen to what we have to say. Today's a pretty popular subject: virtualization. It touches all the areas in an enterprise today - the financial side, ease of maintenance, ongoing support - so it's something that's very popular. And of course we've been looking at the virtualization side for quite a while.
So before we talk directly about virtualization, I'm going to do a quick architecture overview or review for you, just so you understand what our core architecture is before we get to virtualization. Because we have a highly redundant architecture, you can deploy components with automatic failover, redundancy, and geo-redundancy all on their own.
So we have two particular components: the system server, which is our centralized application, this is where the message store is and the administrative tools; and then we have a call server, which is sort of the workhorse. It connects to the phone system, answers the call, does all the real work. When it takes a message it moves it to the system server, and we can have multiple call servers for scalability, redundancy, consolidation of multiple sites, things along those lines, and we can have multiple system servers. No matter how many you have - there might be one, two, or three - only one of them is ever active, and then you can have one or two passive system servers. And our active system server real-time updates the passive system server and that way we can have automatic failover, geo-redundancy, things along those lines.
We have other webinars that cover our architecture in much more detail, but this is just the little bit we need to know as we start to look at what we support in the world of virtualization. So we can have multiple system servers, multiple call servers, and within our own product automatic failover and geo-redundancy. Now we take that and move into the VMware world, and in the VMware world we start by saying that every component we have can be virtualized. So you can take the system servers, and the call servers, and any other application pieces you might be running - the web applications that typically run on the customer's web servers - all of those can be virtualized.
There are a few things we need to talk about that make us a little different in a couple of areas. One is the call server that hooks to the phone system. With a modern phone system that has a SIP or an IP integration, the call server has no need for components or cards in it. But for some of the older phone systems we actually need to put cards in a call server to connect to that older type of phone system, and if you do put cards into a call server then you can't virtualize that call server. That's one of the design constraints of VMware: you can't dedicate those hardware cards to a virtual machine. So in a case where you really are intent upon virtualizing everything and you have an older PBX, you can put the call server in VMware and use an external box called a media gateway that takes the place of the cards. So we're able to do pretty much whatever anybody wants to do in the way of VMware.
Now as we look at how we can do this, there are a number of ways people can use our system within a VMware environment, and it changes based on what you're trying to accomplish and how you want to support it. The simplest way is just to take a CX server - any of our servers at all - and while they could run on a dedicated hardware server, you simply put them on a host. With the system server, today it requires the ability to map to a USB dongle. We have a dongle that we use for security reasons, and in the current version of the software we have to be able to see that dongle. Within VMware there are two technologies available to do this. The first is direct I/O: you install the USB dongle on the physical platform running the host that has the system server, and you use VMware's direct I/O technology to map to it. So the USB dongle is one of the few pieces of hardware that can actually be dedicated to an individual virtual machine, using VMware's own technology for mapping a USB resource straight through. We support that, and it's probably the easiest way to do it with a system server. Now remember, the call servers don't need anything special; it's just the system server.
Now there's another technology that allows you to do the same thing, and that's called AnywhereUSB. That's a third-party device that sits on the network, and that network device has slots in it for various things - in our case we're only interested in the USB piece - and you can plug the USB dongle into that. With the dongle sitting on the network, you can map to it from the CX-E system server. So that system server is running virtually on a host, you do a little bit of typing, and now it can see that dongle. Now that dongle is important when we boot our system. When you turn on the system server and it goes through its boot routines, at some point it's going to say, "Okay, here are the features this system is allowed to have, and they're all keyed to this hash number. I need to go find that hash number on the dongle." So it looks for the dongle, matches the encrypted hash number on the dongle against the hash number in the feature file, and then the system boots.
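The boot-time check described here boils down to a simple comparison: the feature file carries a hash number, and the system refuses to boot unless the same number can be read back from the dongle. Here's a minimal sketch of that idea; the function names, the use of SHA-256, and the serial-number scheme are illustrative assumptions, not the product's actual (proprietary) mechanism.

```python
import hashlib

def hash_key(serial: str) -> str:
    """Hypothetical: derive a hash number from a dongle serial.
    The real product's scheme is proprietary; SHA-256 is an assumption."""
    return hashlib.sha256(serial.encode()).hexdigest()

def dongle_matches(feature_file_hash: str, dongle_hash: str) -> bool:
    """Boot check: the hash stored in the feature file must match the
    hash read back from the security dongle, or the system won't boot."""
    return feature_file_hash == dongle_hash

key = hash_key("CX-0001")                            # hash stored in the feature file
assert dongle_matches(key, hash_key("CX-0001"))      # dongle present and matching: boot
assert not dongle_matches(key, hash_key("CX-0002"))  # wrong dongle: boot refused
```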
Now after the system boots, if you unplug that USB dongle there's no problem until the next time the system has to reset. If it's not back in there again, then the system won't come up. So that's how the dongle works. Now the dongle's been a bit of a challenge over the years, and there's been - let's be polite and call it pushback - from customers. So I think a lot of you will be happy to know that our next release of software, 8.6, which is scheduled to come out in the fourth quarter - in fact we're already in Beta - removes the requirement for an external USB dongle.
We still will support that, because some of our customers have their CX systems on a network segment that's very isolated. But if you put your CX on a network segment that has access out to the Internet, then we can do that same kind of checking with a "phone home" type application. In other words, when the system boots it reaches out over IP and asks, "Is this a valid version number?" And then the system boots up. So it's going to be nice for those customers who struggle with having that dongle, and as we'll see in a couple of feature sets in a second, it's going to be very, very handy. So 8.6 will remove the need to have a physical dongle.
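That phone-home check, combined with the grace period mentioned later for outages, can be sketched like this. The grace-period length and the function names are assumptions for illustration; the real values and protocol are product-defined.

```python
from datetime import datetime, timedelta

# Assumed grace-period length; the actual value is product-defined.
GRACE_PERIOD = timedelta(days=3)

def license_ok(server_reachable: bool, last_success: datetime, now: datetime) -> bool:
    """Allow the system to come up if the phone-home check succeeded,
    or if the last successful check is still within the grace period."""
    if server_reachable:
        return True
    return now - last_success <= GRACE_PERIOD

now = datetime(2012, 10, 1)
assert license_ok(True, now - timedelta(days=30), now)       # reachable: always fine
assert license_ok(False, now - timedelta(days=1), now)       # brief outage: grace applies
assert not license_ok(False, now - timedelta(days=10), now)  # grace expired: won't come up
```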
So now you can just go ahead and put each component on a host - a system server, a call server, anything you want - and that would be the most common way to do it. What that means is the system server is loaded on a specific host, and everything is written to the drive on that host. If that system fails, the call servers will keep running but there'll be some loss of functionality. But that is a very typical way to do it.
Now remember, we can have multiple system servers, and multiple system servers can run in this environment. You could have multiple system servers on multiple separate hosts with automatic failover, and we'll kind of do a summary chart at the end that talks about all these features.
But another way to use the VMware functionality to get multiple system servers is to actually install on a host, but map that application - map the CX-E application - to a drive segment out on your SAN. If you have any kind of network storage we can actually have that host use that network storage, it'll actually see that as its basic hard drive. And now you still have to have the dongle, the system boots up, you do all the programming. Everything that you're putting in the system that's unique is written on that segment on the SAN, so then you shut that system down, you go to another host, you load the same version of CX-E on it, it connects to that same segment on the network drive, and it's up and working. And now you actually have two versions of the system server on two separate hosts, but only one can be active at a time. And the way this will work is in the case where you lose the main host, you will go over to the other host that's turned off, you will turn it on, you will have to either move the dongle, or if the dongle is mapped using AnywhereUSB it'll see it when it boots up. When it boots it's an accurate copy of what you had written to the drive seconds before the other machine failed. So this is a very reliable way to have a complete backup ready to go within just a few minutes.
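The decision logic behind this SAN-backed cold-standby arrangement can be sketched as below. This is only an illustration of the procedure described - the actual failover here is a manual, administrator-driven step, not product tooling, and the function and state names are hypothetical.

```python
def failover_check(primary_alive: bool, standby_running: bool) -> str:
    """Decide the next action for the SAN-backed cold-standby pair.
    Both hosts map the same SAN segment, so whichever host starts the
    services sees an up-to-date copy of everything written to disk."""
    if primary_alive:
        return "noop"           # active host is healthy
    if standby_running:
        return "noop"           # failover has already happened
    return "start_standby"      # bring up host two from the shared segment

assert failover_check(True, False) == "noop"
assert failover_check(False, False) == "start_standby"
assert failover_check(False, True) == "noop"
```

Because only one system server can be active at a time, the key invariant is that the standby stays off until the primary is confirmed down.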
Now there are a couple of things about the dongle that might be a challenge. Remember, with 8.6 it stops being a challenge because you can have that external access. And by the way, on the external access: let's say you had a power failure, you lost some of your connectivity, you lost the one host, and when you boot the system it can't actually see the Internet connection, because that part of the environment hasn't come up. There's a grace period on this, so you're still going to be able to run and not have any problems.
So this works fine for dual system servers on dual hosts in a single data center, where you can literally just take that dongle and move it if you have to. Now it's possible you might want to use a remote data center: install the system server locally the same way, then go install it in the remote data center. As long as that remote center has connectivity to the storage area network segment where you installed the CX-E the first time, you can do the same thing. Now the truth is, in a physically dispersed environment you're probably not going to want to move the dongle, so we sell a second dongle, and that second dongle can be installed at the remote location. And now, just like when it's local, if the main one fails an administrator can start up the services on host two, it will read off the network storage drive, and you'll be back in business again.
So while we offer multiple system servers with automatic failover, this is a way to do multiple system servers by just using the VMware kind of functionality. And we have a lot of people who have done this, we have actually a lot of people who've done every different version that we support. VMware people all have their own opinions on how they want to do it, and we work really hard to make sure we can basically accomplish whatever they want to do.
Now at the point when you say, "Okay, I'm going to virtualize my CX-E system," the next question that always comes up is, "So what about the resources? What do I need in the way of memory and CPU resources, et cetera?" And for us it's very easy: you need the same things in the virtual server that you would need in a physical server, and we have great documentation that covers that. Now there's not a single simple answer, because obviously the requirements for a server change based on the size of the system and what applications you're running. So within our documentation we have some really nice charts that basically say: at a certain size range, for this functionality, running this operating system on the VM partition, here's the class of server that you need. And within that same document there's a list of what those server classes are. If you look at them, Server Class A is the very simplest - frankly, you couldn't go buy one of those today, it's so underpowered - but that's the minimum you'd need. And as you move down the list, even the very largest of servers with the most complex apps, Server Class D, is still not a monster workhorse and not a real resource hog. And this is basically how you'd go about figuring out what resources you need.
Now there's one other discussion we like to have about resources, something you need to be aware of when you start laying out your VMware environment. Let's say I have a database application, and it's pretty well behaved most of the time. But if two or three people decide to run some sort of report that has to compile statistics at the same time, it starts using up all kinds of resources on the system. You've limited the CPU and memory resources to keep that in check, but there's still the actual bus resource to worry about. So what happens in that case - and everyone's experienced this - is the next person that opens a different app running on that server gets the hourglass. We all know about the hourglass; we don't like it, but we're pretty comfortable waiting a couple of seconds to get into the application we want.
The problem comes when one of those applications is CX-E: we can't play the hourglass. Suppose you put us on a host that's quite capable of maxing out that bus completely, we have a lot of ports - say a call server with 96 ports running on that host - and you also have a big database application. If that database application is hogging the data bus, when we go to answer a call we can't. We can't play an hourglass; we can't send anything across to the caller to say, "Hey, please hang on, we'll get to you when we have the resources." So it just doesn't answer, and that tends to look like a problem.
So there is a little bit of diligence in laying out the architecture to make sure that on whatever host we're on, either we reduce the number of ports we're putting there, or we make sure it's always going to have that data bus availability. If you think about it, 96 ports on a single physical server means running 96 Voice-over-IP calls across that data bus, and that's a pretty big task. So if you're not careful, it's pretty easy to overload a host from CallXpress' point of view.
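As a rough sanity check on why 96 ports is a big ask, here is the back-of-the-envelope arithmetic for 96 simultaneous G.711 calls at 20 ms packetization. The codec choice and overhead figures (RTP/UDP/IP plus Ethernet framing) are standard VoIP assumptions, not CX-E specifics.

```python
# Rough per-call bandwidth for G.711 with 20 ms packetization
PAYLOAD_BYTES = 160          # 64 kbps codec * 20 ms of audio
HEADERS_BYTES = 40 + 18      # RTP/UDP/IP (40) + Ethernet framing (18)
PACKETS_PER_SEC = 50         # one packet every 20 ms

per_call_kbps = (PAYLOAD_BYTES + HEADERS_BYTES) * 8 * PACKETS_PER_SEC / 1000
total_mbps = per_call_kbps * 96 / 1000
print(round(per_call_kbps, 1))  # 87.2 kbps per call
print(round(total_mbps, 2))     # 8.37 Mbps for 96 simultaneous calls
```

That traffic has to flow with low jitter, in real time, regardless of what else the host is doing, which is exactly why the data bus contention Neil describes matters.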
So everything we have, as I've said, can go in there: the system servers, whether you have one, two, or three; the call servers, once again without line cards. We have a number of web applications - Web Phone Manager, our user web client; CXM, our mobile server that lets us contact our mobile applications out in the field; Message Cache Manager, for handling high traffic of messages from our web client. All of those are designed to run on the customer's web server, so we're never asking anybody to put a new web server in. And all of those can be virtualized as well - they can run on instances of your web server, and whether it's Apache or IIS doesn't matter to us. So that really widens your ability to virtualize everything that we do.
Now with every version we come out with, we increase our documentation that's specifically for a VMware installation. While in our early days we just sort of said it worked and used the same specs as our physical servers, we've done a lot more now in terms of helping people who are really VMware experts understand how it works and how to maintain it. So as well as all the specifications, we've got some nice tools in terms of what to look for - we talked about how a loaded-down host might cause an issue - and we have charts in there that talk about what's going on, when you could expect an issue, what kind of load you're at, whether you're very comfortable at that level, et cetera.
So we're really comfortable in the VMware field, and when we talk about how we can install the system, we've seen three or four ways here. I can just take a basic CX-E system and put it in VMware. If I do just a single-server CallXpress system, I don't have automatic failover, I do need manual intervention if something fails, and there's probably a little higher labor to deploy and maintain this than the VMware people are used to.
What I can do next is look at the same thing with multiple servers. If I put in multiple system servers I get the automatic failover, and I can reduce the hardware a little bit, but I still need physical hardware for everything. There's no manual intervention: I get automatic failover if a system server fails, even if it's not in VMware. But from a data group's point of view there's also a familiarity issue. Data guys are really comfortable with how to work with everything in their enterprises in VMware, and I fully understand why they want CX to be there as well. And then of course I can put it into Neverfail with a single system server and get everything: automatic failover, reduced hardware, and the environment people are comfortable with.
Now the last thing we look at here: as I've said, over the years we've put more information in, and virtualization is a much bigger slice of our business this year than it was just last year, in terms of the number of systems virtualized in VMware. So within our research and development we're working really hard on adding new features. You will also find in version 8.6, coming out in the fourth quarter, full support for a few more features: vMotion, VMware High Availability, and VMware Fault Tolerance. When those are fully supported, there'll be another column on that chart we just looked at, where basically you can get everything from within a VMware environment without using multiple system servers. You'll be able to get that same automatic failover and all of that from the pure VMware features.
Emily: Okay, great. Thanks Neil. Okay, so if you have a question to type in - we do have a few queued up here - if you're looking for the question pane you may have it minimized so look for that orange arrow in the upper right corner and you'll find where you can type in your question once you expand that window.
Neil the first question, is there a required response time for the system server on a remote SAN?
Neil: Yeah, so basically regardless of how we separate system servers, we're looking for the same rough kind of response times that you'd find for a Voice-over-IP network - so 100 to 120 milliseconds one way is what we're looking for. Now the system servers are a little more tolerant, because when they're moving that information around there's no real-time need for it. In other words, there's not somebody waiting to get it; the information that goes between those servers is strictly for synchronization. So we have a little more flexibility, but you can get into a situation where the delays are long enough that the synchronization software is going to start throwing errors. I have to say, in most of today's networks we have not really had problems. There is a physical distance limitation, though. People forget that even the speed of light, even electrons, have distance limitations. So if you have your data center in San Francisco and your backup data center in London, I'd be a little concerned about whether you can get the kind of . . . you can get bandwidth, but whether you can get the speed you need to make that work. But certainly in any decent network in the United States, spread anywhere in the United States, you should be fine.
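Neil's speed-of-light point can be made concrete with a quick propagation-delay estimate. The 200,000 km/s figure (roughly two-thirds of c, typical for light in fibre) and the 8,600 km San Francisco-to-London distance are rough assumptions; real paths add routing and queuing delay on top of this floor.

```python
def one_way_propagation_ms(distance_km: float) -> float:
    """Minimum one-way delay from propagation alone, assuming light
    in fibre travels at roughly 200,000 km/s (about 2/3 of c)."""
    return distance_km / 200_000 * 1000

# San Francisco to London is roughly 8,600 km as the fibre flies
print(round(one_way_propagation_ms(8_600), 1))  # 43.0 ms before any routing delay
```

Forty-odd milliseconds is only the theoretical floor; actual trans-Atlantic paths routinely consume a large share of a 100-120 ms one-way budget once equipment and routing delays are included.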
Emily: Okay, thanks. Next question: do we support Microsoft Hyper-V?
Neil: Hyper-V is interesting, because if you'd asked us this in a webinar last year, we'd have said "No one's asked for it" - they definitely lagged behind in the market. But I will admit that in the last quarter we've had more requests for Hyper-V support than we'd had in all the years before. So we're definitely looking at that. I know it works; I'm actually running Hyper-V in my own lab to run some of the CallXpress systems I have - it takes a little manipulation - but official support requires all that testing. You've seen how much information we have on how we go into VMware; we need that same level of comfort for Hyper-V. Unfortunately 8.6, as we saw, is getting ready to come out the door, and it's gone through feature lockdown because it's gone out to Beta. That means we can't add anything more to it. So Hyper-V support won't be in 8.6, but I'd be very surprised if it isn't in the next version.
Emily: Okay, great. Next question, regarding the USB dongle: if the system fails over with Neverfail and the USB dongle is unreachable on the network using AnywhereUSB, will the system fail over and function?
Neil: No. If you're using a physical dongle, it has to see the dongle to come back up. Now keep in mind that with Neverfail, when you put in two system servers you get two dongles, so there is a local dongle dedicated to each instance of the system server. You do have an issue where, if you want to put that dongle on an AnywhereUSB adapter, it will require the network. But if you have two dongles for two system servers, I would suggest connecting them with direct I/O. That way, if a system server comes up, the physical dongle is plugged into that same server, so you should always be able to see it.
Emily: Neil do you want to wrap us up?
Neil: Yeah, I always like to thank people for coming. It's really difficult in today's world to stay on top of all the things within your enterprise, and everybody - I know - is hammering you for time, for conference calls, and webinars, and all of these kind of pieces of information dump that we do, and so I'm always appreciative when you come to ours. And if you find this information helpful you should let us know: what else would you like to hear? What other topics are we not covering where our product fits in your enterprise and you want to know about our features, or how we connect, or things along those lines. So I'd like to thank everybody for coming.
Emily: Okay, thanks again everyone, and have a great day.