At the time, I didn't know whether Ruby or Heroku or Google App Engine was best for our company (my colleagues will tell you that I didn't earn my CIO role because I was the smartest technical head in the room). But I'm pretty good at recognizing important trends, and Software as a Service was an important trend. If developers could easily acquire whatever computing they needed without asking my group to stand up servers, load operating systems, open ports, and write firewall rules, then my group might become obsolete. My infrastructure team and I needed to get educated, and we started with the industry leader, AWS.

My top three lieutenants and I went to the AWS re:Invent conference in Las Vegas. I attended sessions on costs and change management. There was a huge buzz; AWS was on a roll. We came away from the trip understanding that our roles as systems administrators had to morph into roles as systems architects, with a strong emphasis on cloud architecture. We made a conscious effort to get closer to our internal customers and offered to broker services such as Content Delivery Networks to combine purchasing power. We persuaded all the AWS users to consolidate their separate bills into one that we would manage. They didn't give up their administrator rights; we just took over the clerical task of paying and allocating the bill.

But the biggest change we made was modeling our chargeback structure after AWS. We changed our computing offerings to mimic the line items offered by AWS. For example, we offered small, medium, and large virtual machines and charged different rates for slow and fast storage. We even matched the AWS pricing. That accomplished three goals. One, our internal customers could not claim that moving to the cloud was cheaper. Two, it made us more aware of our operating inefficiencies so we could improve them. And three, it made everyone operate in a "pay-as-you-consume" mode, which provided incentives to reduce costs and increase efficiency.

Our next big integration with AWS was providing a Direct Connect to AWS in our leased data center in Ashburn, Virginia. We knew that development, testing, and disaster recovery were the "killer apps" for the public cloud. A number of our internal customers were already using AWS for these functions, but we needed to provide a more seamless way for servers to connect between our internal network and the AWS network. Equinix, in partnership with Amazon, offers AWS Direct Connect. With Direct Connect, we have private and public IP access to resources in the AWS cloud but can keep them separate. We get the security benefits of a private cloud along with the economies of scale of the public cloud. Our internal customers can now easily stand up development, testing, or disaster recovery servers in AWS while privately connecting to their databases in our cages (a rough sketch of that pattern follows below).

There have been lessons learned as we have used our hybrid cloud setup with AWS. Our internal customers have learned that not all applications are cheaper to run in the public cloud. For example, large databases processing millions of records and images daily can be expensive to operate in AWS. There have been some "noisy neighbor" performance lessons too; not all AWS instances are the same. Spinning up additional servers when performance takes a dip was not as easy as we first thought. Finally, it looks like the AWS downward pricing trend has reached bottom, as they have recently increased pricing on some of their virtual machine offerings.
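To make the hybrid pattern a little more concrete, here is a minimal sketch of how one of our teams might launch a test server into a VPC subnet whose route table points back over the Direct Connect link to databases in our cages. It uses Python with boto3; the AMI, subnet, and security group IDs are placeholders, and the instance size is assumed for illustration.

```python
# Minimal sketch: launch a dev/test server in a VPC subnet that routes
# back over Direct Connect to on-premises databases. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Northern Virginia region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder: hardened base image
    InstanceType="m4.large",                     # assumed size for a test server
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",         # placeholder: subnet whose route table
                                                 # sends on-prem ranges to the virtual
                                                 # private gateway behind Direct Connect
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder: database ports open only
                                                 # to our internal address ranges
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "environment", "Value": "test"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```

The point of the sketch is simply that, once the private connectivity and routing are in place, standing up a test server that can reach on-premises data is a few API calls rather than a ticket to the infrastructure group.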
Overall, the AWS hybrid experience has been great. Variable workloads can be augmented with AWS servers, and we know that if a disaster takes out our production sites, we can quickly spin up all the web servers we need. Being able to stand up large development and test environments on demand in AWS also saves us capital expense. The Direct Connect makes security and network administration almost as easy as if we housed the servers ourselves. We are big fans of the hybrid cloud model, particularly with AWS.

Joe Fuller