Building a cloud? Prepare for a rough ride.

I have done many difficult things in my life: built businesses, been a husband to a lovely wife and a father to two young boys. I have been involved in complex technological projects, and seen success and failure on all fronts. But nothing prepares you for the sheer technical challenge of building a public cloud platform.

The easy bit is deciding that you want to do it; the tough bit is getting it done. It is the kind of project that forces networking, server, application and business support teams to work together in ways you have never imagined. As I sit here writing this, I realize how ill-prepared I was for certain aspects of this journey. Before finding the partner with whom my platform is now being built, I courted several venture capitalists. I was convinced that a pile of money and motivation was all that was needed to succeed. Use the money, add the right people to the team and build what is needed. No, that is not the way it turned out. I was right on the investment front, but I totally underestimated the human capital required. To get to where we are today required:

a) A skilled and experienced networking team. Guys who build Service Provider and Telco grade networks. They understand routing, switching, peer connectivity and the intricacies of building highly scalable, resilient data networks.

b) Security experts. Not just firewalling, but all manner of policy compliance, governance and audit specialists. People who understand security not only as a set of technologies, but also the mindset of hackers and consumers.

c) Datacentre experts who understand datacentre operations and can implement and deploy technology in these environments according to best practices.

d) Infrastructure experts that understand the world of cabling, generators, battery backup systems, server power requirements and cooling.

e) A project management team to tie all the other teams together and drive our processes and tasks.

f) A full team of finance and procurement specialists who bring expertise in procurement, global logistics, taxation, legal matters and human capital management.

g) Branding, design and communications staff with globally recognised skill in brand development and stakeholder communication.

h) Open source technology generalists, who leverage their skills to be on the cutting edge of open source software deployment and management.

i) Proprietary software guys, who work with the images and processes to support all manner of proprietary software.

j) Software developers who build the custom automation code and software pieces required to bring our vision to life.

k) Finally, a competent and visionary management team to understand the vision, accept the changes and drive the technology and people to new heights.

Thousands of man-hours of work have now been logged against this project, and we are not live yet. I’ll do a follow-up story where I give some of my top tips for building cloud infrastructure, but let me share just one for now…make sure you have a solid understanding of your underlay network, its function and layout. Certain technologies are extremely dependent on what you do in your network underlay. The network does not have to be complicated, but it needs to be fast, with a powerful Software Defined Network overlay. You will not believe how complicated some of the network plugins can be. (Here’s a secret: we are not running Open vSwitch but another SDN technology.) Make your network plugin choice early, and make sure your underlay network is designed to work with it and its capabilities. As a rule, there is no migration path from one network plugin to another. Make the wrong choice early on, and you’ll have to buy hardware for a second platform, migrate your workloads and trash your initial platform, adding it back as capacity for your new cloud after a re-build. Painful.
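
For a concrete illustration of how early this choice gets locked in: in a stock OpenStack deployment, the Neutron ML2 configuration is where the overlay type and the mechanism driver (the "plugin choice") get pinned. This is a minimal sketch using the stock upstream driver names, not the SDN technology we actually chose:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini -- an illustrative sketch only,
# using stock upstream driver names (we went with a different SDN driver).
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
# The mechanism driver is the "plugin choice" referred to above; it must
# match what the underlay was designed for, and swapping it later means
# a rebuild, not a migration.
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000
```

Details like underlay MTU headroom for the VXLAN encapsulation flow directly from this file, which is why the underlay has to be designed around it from day one.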

In the meantime, head over to www.wingu.co.za and sign up for our newsletter. You’ll be added to our list of people who receive a R 250 voucher to try our cloud services when we launch. Yes, we are cool like that 🙂

Why using local cloud services makes sense

With the launch date for my new cloud computing platform looming large, I am spending more and more time involved in the intimate details of the project. I am also spending more and more time with potential customers and partners. One of the most frequently asked questions from potential partners and customers is why they should use local cloud computing services. The impression is certainly there that the big players have cornered the market and that there is no place left for smaller, local players to bring value. Nothing could be further from the truth. Let’s dissect this a bit from an African/South African context.

The local cloud market is thriving. When I say “local”, I mean companies operating in certain geographies. At the OpenStack Summit in Paris (November 2014) I had the opportunity to meet with many cloud service providers who operate within a specific country or even province. The value of being “local” in terms of language, business culture, currency etc. cannot be overstated. Many of these companies have built thriving businesses, offering products and services that serve the needs of their local customers, and serve them better than international players. No one doubts the domination and scale of the large well-known players, but they certainly are not everything to everyone. Why is that?

1.) Bill me in my local currency. Currency fluctuation is a real concern if you operate a business in an economy where your local currency fluctuates against the global strong currencies such as the Dollar and Euro. I am currently a Google and Amazon customer, and know what it is like to have your monthly invoice arrive 15% higher than what it was last month, simply because your currency slid by a big percentage against the currency that you are billed in. Add fee overheads for currency conversion on credit cards, and you can face invoices that fluctuate significantly, while you did not consume any more services. Local cloud service providers can build systems that have few or no foreign currency components, leading to fixed, predictable pricing.

2.) Payment using locally accepted payment methods. This may sound like a minor point, but many companies do not provide credit cards to all the employees who consume cloud services. These cards may also not have the correct limits in place to allow for big bills when cloud services get consumed in significant volumes. Local players allow you to use local payment gateways, with easily accepted and understood payment methods. For instance, we accept credit cards (of course), debit cards, electronic fund transfers (that clear immediately, regardless of the originating bank) and Bitcoin. All billed in local currency.

3.) Improved latency to the cloud services. Let’s not kid ourselves, Africa is not as well connected as the rest of the world. The big players also have no local datacentres here that provide their services. If we measure latency using PING (latency is half a PING), we can reach certain US east coast services in around 110 milliseconds. Consider that we can access our local datacentre from almost any ISP in around 15-20 milliseconds. That is around 400%+ faster. This becomes key when you start delivering latency-sensitive services.
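
If you want to sanity-check that claim, the arithmetic is simple. Here is a throwaway Python sketch using the round-trip figures above (17.5 ms is just the midpoint of the local 15-20 ms range):

```python
# One-way latency is roughly half the ping round-trip time (RTT).
us_east_rtt_ms = 110.0   # typical RTT to a US east coast service
local_rtt_ms = 17.5      # midpoint of the 15-20 ms local range

# Both figures halve when converting RTT to latency, so the ratio is unchanged.
ratio = us_east_rtt_ms / local_rtt_ms
print(f"Local access is ~{ratio:.1f}x faster "
      f"(~{(ratio - 1) * 100:.0f}% improvement)")
```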

4.) Local support. Yes, I know we all speak English anyway, but that is not the point. One of the upcoming features in our cloud service is a very advanced software defined networking (SDN) layer. For customers who want to use our public infrastructure to extend services from their MPLS networks, that will require some onsite consulting. Difficult to do if the support staff you want to interact with are a continent away. This is only an issue as we deliver new, cutting-edge solutions. The drive is always there to automate and simplify as much of what we do as possible.

5.) Local datacentre access. One of the biggest reasons to use a local cloud provider is that you have the opportunity to colocate equipment with your cloud provider. Many customers that I have spoken to want to outsource certain functions to a cloud provider, but connect certain services that they control to these cloud platforms. In certain cases that becomes a lot easier when you can colocate equipment, running cross-connects to the cloud provider to eliminate any bandwidth costs. A great benefit for more complex and bandwidth-intensive solutions.

I am confident that as we start providing commercial services, more and more customers will enjoy that local touch that we provide.

2015, here we go!

So, 2015 arrived and it is time to stare the new year squarely in the eye. I’ll freely admit to using a bit of Punk Rock to get me out of bed this morning; clearly I enjoyed my stay-cation a bit too much 🙂 2014 was a proper roller-coaster of a year, with loads of changes for me personally and in business. 2015 brings stability and clear focus on delivering my cloud computing project.

Delivering a cloud platform is certainly not a job for the fainthearted. Now that all but one of the supply contracts have been finalized, the focus shifts to the actual technology deployment. Managing the logistics is a bit of a challenge in itself, with my tin arriving in dribs and drabs. January will see us deploying the tin into the datacentre, and getting the basic platform ready for all the integration work that needs to be done.

I hope all of you have a great 2015, filled with love, fun and new exciting opportunities and challenges.

OpenStack “State of the nation”

Over the past few days I have had the pleasure of attending the OpenStack Summit in Paris. The 6000-person attendance figure alone tells a story of the massive momentum behind this open source software project. Over a 5-day period thousands of vendors, integrators and developers got together to shape the future of this amazing project.

So, what is OpenStack? It is a collection of open source tools and technologies, augmented by commercial tools, that allows customers to build private, public and hybrid cloud services.

I am currently involved in a project to build a cloud platform that will deliver public cloud services, and I selected OpenStack as the underlying technology to base my platform on. OpenStack is a relatively unknown quantity in South Africa, and one of the questions I always get asked when discussing my plans is “why not VMware or Hyper-V?”. Most people assume that the answer will have something to do with cost, or some crusade against big and evil tech empires. The answer is actually quite simple. OpenStack is the only platform today that allows customers to build the cloud they want, with no vendor lock-in. And while there are other open cloud platforms out there, OpenStack has the largest and most vibrant community, with the largest partner ecosystem. The challenge and opportunities lie in the fact that it is not a pre-packaged product (that is changing, with most open source vendors now offering easy-to-deploy systems for enterprise use) but a framework that allows you to make component selections to build the cloud you need. The Lego of the cloud world 🙂

This was not always the case. In the earlier releases (Grizzly, Folsom, Havana etc.) there were a lot of features missing, and the toolset was difficult to deploy. The latest release is Juno, and the community is working to release Kilo in a few months’ time. Today, the stack is easy to deploy, with distributions and vendors such as SUSE, Red Hat, Mirantis, Canonical, HP and IBM all having easy-to-use deployment tools. Vendors such as Mirantis and Canonical take this deployment further, with their Fuel and Juju tools providing several deployment options, making OpenStack as easy to deploy as traditional virtualization technologies. The partner ecosystem has dramatically expanded, with more and more companies providing focused add-ons for the platform, making it easier to deploy, operate and manage this environment.

The layer above the basic Infrastructure-as-a-Service cloud platform has also expanded. Platform-as-a-Service tools, container technology and others such as software defined networking and network function virtualization are all driving the new applications and services that allow businesses to be more agile with their technology services.

The use cases for the cloud platforms feature three recurring themes. More speed, more agility, less cost. We now live in an era where “credit card” decisions are made, where a manager will swipe a company credit card to buy and instantly access a service if internal IT moves too slowly. The way savvy companies counter this, to maintain control while delivering on the new business requirements for faster availability of infrastructure and services, is to deploy clouds internally. I saw several case studies being presented where companies shared their numbers of how fast services can now be deployed and adopted, and how their internal IT user satisfaction scores went up.

It is important to note that virtualization and cloud are not terms to be used interchangeably. Yes, OpenStack contains virtualization (select your hypervisor from KVM, Hyper-V, ESXi, Xen and others), but it provides technology for an “Amazon AWS like” web layer where users can authenticate and select options to be deployed as they need them. Traditional virtualization vendors such as VMware are also throwing their weight behind OpenStack, integrating their technology with it to provide a single control plane and great user experience.

What does this mean for South African companies? In short, you now have access to a set of technologies that enables you to make smart choices, delivering IT as a service, providing your users with a great, flexible platform, capable of quickly delivering infrastructure and apps.

If you’d like more info, to see how this can work for your business, leave a comment and I’ll reach out to you.

Education…my guide for modern parents

As a result of the work I have been doing on our cloud project, I am experiencing first-hand the lack of skills that we have in certain technology fields locally. One of the reasons for changing our current business and business model is that we want the ability to take advantage of our own intellectual property and leverage what we know in our business model. But doing that is hard, as it requires a broad range of skills, and we are coming up short finding everything that we need locally.

Education is obviously key in addressing this. It is however clear that the South African school system is not currently up to the task. Now we can blame a whole lot of things for the failure of our public school system, but do not think that the private schools fare any better. We’ll have to figure out how we can use the educational resources out there to better equip our kids. This matter is very close to my heart, as I am father to two young boys, currently aged 5 and 7. Bottom line is, we cannot leave the important task of education just up to our schools; as parents we need to be involved on a daily basis.

Being a dad, I have started looking around to see what options are available for teaching our kids some technology skills, and helping them develop key skills like math and science. One needs to tread carefully here, so that we do not expose our children to certain technologies prematurely. But there are loads of things we can do to pique their interest. I am not a professional educator, and this guide is based around information for parents living in South Africa, so your mileage may vary. This may sound like it is for boys only, but there are plenty of girly projects too.

For a start, I love doing basic, fun science experiments with my kids. This teaches them about chemistry, physics, math and project management. They learn valuable life skills while doing something fun. You do not have to rack your brain for ideas; I use Experilab as my first stop. These guys have an amazing range of simple experiments that you buy online or at their shop in Pretoria. Even my local pharmacy stocks their experiments. The kits usually contain the chemicals you’ll need; you just need to provide simple household items like bowls, spoons etc. We have made bouncing balls, grown crystals, made chemicals that glow in the dark, built simple electric motors and loads more. The online shop also sells all the lab equipment you may need, such as tubes, glass holders, magnifying lenses etc. Each kit can be done in 30-45 minutes, so you do not have long to wait for results.

[Images: Experilab logo, bouncing balls kit, fly science kit]

I love doing slightly more challenging projects too, especially items that take a bit longer to complete, to also teach the kids the virtue of patience. These usually involve a lot more building, and we involve mom too. Developing the kids’ visual art skills is important too, so once we have built something, mom helps with decorating it. For kits that allow a bit more free play, and outdoor time, look at the Dala Junior Tradesman kits. These kits use real bricks and mortar to help kids build houses etc. outdoors.

[Image: Dala Junior Tradesman first house and garage kit]

Adding to their skills means letting them get some hands-on time with real tools. One afternoon, the boys and I went shopping for real tools. Dad bought them a portable tool chest (get one at Plasticland) and then we added some essential screwdrivers, a hammer, a saw, duct tape, nails, glue, string, spray paint, sanding paper, screws, gloves, safety goggles etc. Next we added some wood and PVC pipe, plus some basic castor and normal wheels (get these from Chamberlain or Builders Warehouse). For projects to build using these tools and materials, I bought a series of books. Here is a list from Kalahari.com:

[Images: book covers: “50 things” plus three Geek Dad titles]

The books provide us with the ideas for our projects. We have built catapults, blow dart guns, coke fountains and loads of other fun toys. Once the building is done, mom helps with decoration. The process I use is quite simple. I let the kids page through the books to select a project, then I vet it for age-appropriateness, time and cost. Once we have agreed on a project, we make a list of things we have and things we need to buy. Bear in mind these books use US measurements, so be prepared to adapt dimensions and find alternative products (part of the fun I say!). A shopping expedition gets mounted with our list, and when we get back home, the building starts. These projects may require a day or two to finish, or an entire Saturday, so plan accordingly and set the correct expectation. Be prepared for loads of questions, and try your best to answer them, or Google for info.

The above processes help with gross and fine motor skills, developing their hand-eye co-ordination. A big trap to avoid is tablet devices such as iPads. This does not mean that we do not let them use tablets, but we try to limit the amount of “flat screen time” (tablets and TV) to no more than 45 minutes per day. My kids love Lego blocks, so on the tablets we use companion apps with additional build plans for the blocks we have.

Proficiency with technology will become key for our children. By this, I do not mean the simple ability to start a computer and use office productivity software (that is important too), but ways to manipulate and interact with technology. Here’s a simple analogy. We need to teach our children word processing as a skill, not Microsoft Word. We need to teach them how to use a spreadsheet, not just Microsoft Excel. They must learn a programming language, and understand some basic electronics. The point here is not to turn everyone into engineers, but to get everyone more comfortable with technology. My house is covered end-to-end by wireless, each person in the household owns an iPad, and there are several laptop computers, several servers and the home theatre. All the technology is integrated. The home theatre can be controlled by smartphones, and movies and music can be streamed to any computer, phone or tablet. Everything is connected to the Internet for news, movies etc. In future, more homes will look like this and be even more connected. The technology should not scare us or our children. Technology will become more prevalent in our workplaces, so we need a basic understanding of how these devices work, and how to interact with technology. For me personally, proficiency in the basic principles of programming is important, as well as a basic understanding of electronics, and the interaction between electronics and software.

Do not be afraid to shop online, and remember, it is most likely cheaper to buy directly from the overseas vendor. If you want to shop for an educational toy online, and the shop does not offer shipping to South Africa, or does not take South African credit cards as payment, then you can use services such as MyUS to handle ordering, payment and shipping for you, via their concierge services. Shipping is usually quite quick; my overseas packages arrive, delivered to my door (if that is your selected shipping option), within 5-10 working days.

[Image: MyUS logo]

My first purchase in the electronics space was a SparkFun Inventor’s Kit. This can be bought online locally from Netram or directly from SparkFun in the USA. It includes the amazing little Arduino Atmel-based microcontroller system, with loads of components such as wires, motors, lights etc. The kit includes everything you need to build 15 projects, teaching kids the basics of electronic circuits and microcontroller programming. The Arduino system is a roaring success story all on its own. Developed by professors at an Italian university as a teaching tool, these modular systems have sparked a whole industry of projects and add-on boards. These systems power anything from hydroponic growth systems to 3D printers. There are loads of fun Arduino-based kits around, resulting in fun, interactive toys that kids can build themselves, while learning. Simple Arduino projects include a basic light circuit, where a little LED light is turned on and off using software. You can then experiment (the kit shows you how) by seeing what effects a change in the software will have on the simple light circuit. Circuits take about 10 minutes to build, and the Arduino is powered and programmed from your Windows, Mac or Linux PC’s USB port.

[Images: Arduino Uno R3, robot arm kit, SparkFun Inventor’s Kit]

Next up, we have the revolution that is the Raspberry Pi mini computer. This differs from the Arduino above in that it is a complete mini computer system. Using a USB keyboard, mouse and an HDMI-capable monitor, the Raspberry Pi is a very low cost computer. So, how cheap is low cost? I bought a Kano computer, which is a Raspberry Pi with all the cables, software and a keyboard and mouse, for USD 99 when they launched via Kickstarter. You can now pre-order the kits at USD 129, or surf over to Netram and buy the Pi plus all the goodies locally. This computer attaches to a flat screen TV or computer monitor via an HDMI cable. It runs a Linux operating system, optimized for kids, with games and development tools installed. The revolution here is that you have a small, low cost computer that can do loads of useful stuff. The games on the Kano are created using the Scratch programming language developed at the Massachusetts Institute of Technology. This means that not only can the kids play the games, but they can change them using the Scratch tools to create new or different games. There is a wonderful video from TED Talks where Mitch Resnick explains the idea and applications of the Scratch language.

[Images: Raspberry Pi Model B, Kano kit, Scratch]

I saved the best for last! All of the above sounds great, but what about kids already in school, battling with Mathematics, Physics, Biology, Chemistry, Economics or Entrepreneurship? You need Khan Academy. This amazing, free site only requires a Google or Facebook account, and it unlocks a world of training resources for kids and parents. I’d suggest that parents surf over to the parents and tutors page to get familiar with the system and how it operates. The bottom line is this: let’s assume your youngster is struggling with Geometry, specifically the angles of triangles. You can find the appropriate category from the index, and surf down to a page with a series of worked practice questions. If your child cannot complete the exercises, there is a handy 5-15 minute video tutorial that they can watch. They can then try the practice questions again. The system assumes mastery of a subject if you can complete 10 questions in a row while scoring 100%. Best of all, mom and dad get an interface where children of different ages (like mine) can have their individual progress and activity level tracked.

I know this is quite a mouthful, and it seems very technology-focused. Be open-minded, take a look and see what works for you. All you need is a basic computer (get a CloudGate, they are super cool!) or some free time with your kids, to help them have fun and learn. I’d love to hear your feedback on this, and what your experiences with your kids are when you try this. Happy learning!

That dirty word, “Innovation”

Many industries have overused terms; in automotive, “driver’s car” is one that comes to mind. How on earth can a diesel-powered econobox be described as a “driver’s car”? Technically it is correct, the car does belong to a driver, so it is that driver’s car. But does it inspire you to get in it and simply drive for the joy and pleasure that it brings? I seriously doubt it. One of the real “driver’s car” models that got my heart racing was the Honda S2000. Now here is a lightweight, rear-drive, manual gearbox car with steering that can only be described as telepathic. Get in, point the nose anywhere and simply drive for the sheer fun and pleasure of the act. Rev the 2.0 litre VTEC engine to a dizzy 9000 rpm and hear it wail like a sports bike.

In the consumer products market “new and improved” is another one. Is it new, or is it improved? Fake hype is generated around something like shampoo that must have 50 competitors on the same shelf.

In the information technology world, “innovation” has become one of those overused terms. No, dear reseller, you do not “innovate” when you take the same product that loads of people make, and simply sell it in a new (and probably more expensive) way. True innovation is the act of breaking the mould, thinking without a box, not just outside one. Truly disruptive technologies are scarce locally. Like hen’s teeth, some might say. In South Africa we need to start moving beyond the “buy tech, add a markup and sell, repeat” model. We have to learn to distinguish between what is a new spin on an old idea, what is disruptive and what is truly new and exciting.

I am looking forward to see how individuals and businesses use my cloud platform to deliver true innovation with a real South African flavour.

If you want to save money, go all the way…

The journey in building a new cloud platform has been an interesting one, to say the least. When asking customers why they consider virtualization, private cloud or hybrid cloud solutions, cost saving is always part of the equation.

But it amazes me how the technology decisions we make are influenced by vendors, and how few customers can work their way through all the FUD (fear, uncertainty and doubt). Some of the best FUD stories I hear concern these statements:

  • We are a vendor X shop.
  • We only buy “best-of-breed” technology.
  • We only have vendor X skills.

Right…how does tying yourself into vendor X, thus leaving you without choice, save you money? And who defines “best-of-breed”? I have it on impeccable authority that one of South Africa’s largest service providers loses money on every single VM they sell via their cloud platform. How is this possible? Given their scale, they should have immense buying power, and their purchasing volume alone should put them in a much more competitive provisioning and costing space. But in thinking that, you’d be wrong.

Their first mistake was going the “we are a vendor X shop” route. Let’s not investigate the options; let’s simply take our shopping basket and load it full of goodies that vendor X peddles, especially since vendor X claims to be “best of breed”. Dare question the rationale, and that old faithful independent analyst report, ranking vendors in a way where no one loses but some are more equal than others, gets yanked out. This provides “proof” and is the basis for not even evaluating other technologies. Plus, said service provider has a long-standing relationship with vendor X, and they do not want to “burn” that relationship and their current discounts by buying from another player.

Then “we only have vendor X skills”. People, if your techies can only configure VLANs and routing on vendor X’s hardware, you have a serious problem on your hands. You hired the wrong people! Certain technologies become a standard over time, and networking is a great example. You can buy networking kit from any one of at least 10 vendors, and your brand X skills will translate with maybe 4 hours of playtime. All you have to learn is how the command line or GUI works, as the underlying routing, switching, VLAN and link aggregation protocols are all the same. Storage is the same story. A LUN is a LUN, whether implemented on vendor A or vendor B’s kit.

I could carry on for days, but I think my point is made. In cloud, cost and ease of use are king. That is why we investigated everything, including the brand X’s of networking, storage, operating systems and virtualization technology. In the end, you will not find a single vendor X in our platform; we went with choices that suit our business, and where our skillset can easily be translated. It has been tough, we have been wooed, and even ridiculed for our choices, especially by the vendor X’s losing out. In the end we stuck to our guns, made bold choices, and now we’ll see how it all plays out.

And I’ll be making money on every single VM that I sell.

If it floats, flies or is in the cloud, you are better off renting…

The above bit of sage financial advice was offered to me by a financial professional. Certain assets and items make no financial sense when you buy them; renting is the better option in many cases. Why should technology be any different?

I strongly believe that buying physical servers as a capex item is a business model that is dead for many enterprises. Why invest all that hard-earned money in a dead platform? Why not just rent what you need, elastically? Need more, rent more. Need less, rent less. Not only will your expenses match your requirements, but you get better proportional use from those rented assets. Some recent reports put the average utilization of servers running virtualization hypervisors in the enterprise datacentre at between 20% and 40%. This implies that even “enterprise” virtualization is not delivering the value promised.

How do we solve this utilization issue? It needs to be solved, as it implies that we are spending money on resources that we do not use. But getting benefit from this model means that we have to have modern application and infrastructure management technologies, so that we can “right size” our resources. Managing tech resources needs to move beyond the “is it on or is it off” mindset, coupled with technology silos. No offense, but I do have a giggle when enterprises that get tools like Microsoft’s SCOM for free in their enterprise license agreements think that these basic tools tell them anything about how the app is performing. No, today we need technology that will map our business rules and processes across infrastructure, showing us the impact on business processes if a port on a device, or a process on a server, misbehaves. The issue here is cost. Most of these platforms need to gather various forms of data, including SNMP, WMI and packet-level data. The best systems will even run a small agent on your .Net, SQL and Java systems, instrumenting these down to code level. But, in South African terms, a project like this could be anywhere from R 5 million to R 10 million, even for relatively small environments with around 20 app servers and around 100 servers in total.

Solving this issue has been my mission. It is one of the reasons why our cloud platform can be called “enterprise grade”. Let me explain. The systems used to monitor the packet-level data are dedicated hardware devices, capable of some serious data collection and analysis. However, when buying this technology, companies have to not only think about their data rates today, but also try and guess what the data rates will be 3-5 years down the line. Typically these assets get “sweated” for a long time, so invariably an enterprise buys a bigger box than what they need. Secondly, the tech to instrument your code gets sold in certain license batches, so you end up having to buy another 10 licenses even if you only want to roll out another two servers, taking your total to 12. Having a cloud platform with this tech built in makes it super easy for enterprises and software developers to have this technology “baked in” to their infrastructure. Now we get to a point where we can deliver the following info:

  • How fast is my application for the end user using it, with total response time in milliseconds instrumented from the end user device, right down through all the tiers of my application and infrastructure.
  • If my response is below par (my SLA requires a 400ms response time, but I am delivering a 900ms time), where is the delay? Network, server, app, code etc?
  • In multi-tiered applications, where we have a web front-end connected to an app server, which in turn talks to a database, we can see the delay and details for performance between servers. So, a slow app may be slow because the connection between the web servers and the app tier is slow, as a result of a bad configuration on a load balancer (see the sketch after this list).
  • A new update was pushed for a .Net or Java based app, and now certain modules of the app are slow. We can pinpoint these, and help developers debug and fix performance issues, as we can see exactly which piece of the app and code is causing an issue.
  • We can tie memory, CPU and storage system performance together, and see how changes in resource quantities (add more RAM, add more vCPU) are positively or negatively affecting app performance. You can also see if a bigger server is needed, or if two or three smaller servers running behind a load balancer will work better.
  • The network performance can be instrumented and modelled to the n-th degree. Is adding more capacity going to improve my performance, or will switching to a lower latency fibre optic link from my ISP improve it? Is accessing the service via the Internet OK, or do I need to think about a dedicated point-to-point link to the cloud, or can I simply extend my MPLS service?
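
To give a feel for what per-tier instrumentation looks like, here is a toy Python sketch of the idea. Real APM agents hook into the runtime and do this automatically; every name below is invented for illustration:

```python
import time
from functools import wraps

# Toy sketch of per-tier response-time instrumentation, in the spirit of
# the APM tooling described above. All names here are illustrative only.
TIMINGS = {}

def instrument(tier):
    """Record elapsed wall-clock milliseconds per application tier."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                TIMINGS.setdefault(tier, []).append(elapsed_ms)
        return wrapper
    return decorator

@instrument("database")
def query_orders():
    time.sleep(0.5)        # stand-in for a slow SQL query
    return ["order-1"]

@instrument("app")
def handle_request():      # the app tier calls down into the database tier
    return query_orders()

handle_request()
for tier, samples in sorted(TIMINGS.items()):
    print(f"{tier:>8}: {max(samples):7.1f} ms")
# If the SLA is 400 ms and the total is 900 ms, a breakdown like this
# shows which tier is eating the extra 500 ms.
```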

Understanding the impact of resources and their behaviour is key. With the right tools, you can rent just what you need. The right-sizing job for CIO/CTO level managers just got so much easier…

I have a vinyl fetish…

Not the handcuffs and boots type, the 33 1/3 rpm type.

I remember listening to records at home as a youngster, and spending my pocket money on records in the ’80s. Oh, I was so quick to jump ship to CD, and then of course to buy digital music online (my favourite sites are HDtracks.com and LinnRecords.com). My online purchases really started adding up when I bought a Maverick Audio TubeMagic D1 Plus edition standalone digital-to-analogue converter with a high-end valve-powered headphone amp. Listening to music in 24-bit, 96 kHz format just took digital to the next level for me. Finally being able to drive my various sets of Sennheiser, Grado and Beyerdynamic headphones was magical. Of course, hooking the Maverick up to my Yamaha RX-A2010 amplifier and Tannoy speakers just showed how nicely a vacuum tube plays with digital formats. But the magic of vinyl was awakened when I visited my father, and was presented with his almost brand new (but 20-year-old) NAD 533 (Rega P2 OEM version) turntable. I listened to it on his Tannoy/Onkyo setup and was amazed. Not only was I given the turntable, but also around 80 great-condition vinyl albums. These all made the trek back to Centurion with me.

The turntable was serviced, and a new motor and belts fitted. The old unknown cartridge was dumped, and a brand new Audio Technica AT-95e fitted. The Rega RB-250 arm had its tracking and anti-skate set properly, and the table was mounted on a super heavy, isolated stand. Finally, the table was leveled and is being kept level. I marveled at the sounds from my speakers! I happen to now own the same recordings on vinyl and digital, and the difference is amazing. I own a copy of an Andres Segovia classical guitar record, and have the same recording, bought via iTunes. With the iPad connected to the Maverick for decode duties, the digital 256 kbps AAC file sounds dull and lifeless. Play the same recording from vinyl, and Andres Segovia is in my living room, playing the guitar. I was hooked!

Of course, this was only the beginning. My table is a decent entry-level system, well maintained and fitted with components to bring out the best in it. But it is far from top end. So, a purchase was made, and I bought two Lenco L75 turntables (1968 and 1969 models) to have them restored in heavy plinths, à la Jean Nantais. I bought 10-layer birch plywood, and a brilliant Jelco SA-250ST tonearm to replace the busted standard Lenco item. During the process, I bought more vinyl, so a dedicated 450-LP rack was designed, built and fitted by my mate Charles Theron. For the second Lenco, I have my eye on a 12″ transcription specification SME tonearm…drool…

I realized that a home theatre amp will not do such a setup justice, so I have set out building a derivative of the now iconic 47 Labs Gaincard amplifier. Powered by LM3886 chips, it sounds wonderful. Mine is fronted by an AD815-based pre-amplifier, and finally there is the dedicated Moving Magnet Phono Stage that I built, to augment the Rega Phono Mini USB that I use to rip my vinyl to high-definition FLAC files to preserve it.

Next up is the vinyl cleaner I have to build, and following that, a set of speakers based on 12″ or 15″ Tannoy dual concentric drivers…It just goes on and on… The investment in tools (Dremel etc.), materials and components has been substantial, but so is the fun I have been having with my records.

To say that the bug has bitten me is a complete understatement. I am lucky that I can be as passionate about my hobbies as I am about my high-tech business.

Peace and love.

The bravery of being out of range…

Doing OpenStack is hard. Doing it right is even harder. Doing it in a way that mimics the major functionality of competing public Infrastructure-as-a-Service providers is so tough that I believe what we are launching will be a first in Africa, with some features a first in the Southern Hemisphere. Part of the challenge is understanding that OpenStack is not a technology, but a framework. A very complex Lego set where you slot things in and make them work in a way that suits your organisation’s business requirements. For the past 10 months I have done little but spend every moment possible understanding what I want to do, and how I want to do it. And I am not done…

So, is it correct for me to look down on enterprises making “easy” choices using easy-to-install software packages? Probably not. In truth, I do not look down on them, as much as I stare in wonder at how they manage to misuse so much of the vast resources they have at their disposal. Instead of doing the hard thing and building what is perfect for the business, they choose far simpler productized platforms, rolling out far more costly equipment and solutions, to solve problems in a “standardized” way. The reality is that they do not adapt technology to their businesses; their businesses have to adapt to their technology choices’ rules and limitations. Not ideal at all.

Now, I have to express a serious amount of ignorance on my side regarding the inner workings, decision-making processes and budget allocations of enterprise IT departments. The reason is simple. I have never spent a day being employed in an end-user internal IT department. In an IT career spanning 22 years, I have only been employed 3 times, all of it working for technology resellers. I did less than a year in a fairly big business, then less than a year at a global multinational and finally 5 1/2 years in a company that grew from around 20 people to around 400 people in the time I was there. The balance of the 22 years was spent being self-employed with varying degrees of success. I have had roaring successes and spectacular failures. The times I have been flummoxed the worst were when I failed (in my opinion) in environments where technology decisions are taken by people who really have no business running IT departments.

But I digress…I think the biggest reason for doing things the “easy” way is the fact that enterprise employees don’t spend their own money. Made a 150 million blooper? No problem, sweep it under the rug and try again. Blame the vendor and then the partner. Apply the first rule of corporate politics, CYA (cover your ass), and duck for cover.

Things are different when you are spending your own money; you tend to think harder about why you spent it, and who you will be giving it to. Getting a return on that hard-earned cash is paramount, and in a big way, enterprise guys can easily duck financial responsibility for failures. Selecting a framework is giving me the opportunity to make technology work for my business, not make my business work the way a vendor demands.