And a-changing …

So, I've been asked to take on a new role with effect from this month at my employer, Intellect. We need to bring more focus to the Electronics agenda following our merger with the ETN, the Electronics Technology Network, and our recent work in reframing the role and contribution of the existing members of the Intellect Electronics group. The work I was involved with in the run-up to the ESCO report launch laid the foundations for a more active and strategic view of the Electronics industry in the UK, and this will be an area of focus for the next 12-18 months or so. Take a look at the reports that were published on June 27th and be prepared to be surprised by the extent of the industry in the UK and its ambitions for growth, which have generated interest in Government. The agenda in Electronics is all about niche market opportunities and paving the way with new technologies and processes which show the way towards scale manufacturing. There is plenty of work around, but it's a scrabble to find it. We hope to make a difference in this area at Intellect.

I have also been asked to raise the contribution of Software in the relevant programmes at Intellect, which I am taking to mean creating a deeper understanding of the shift to IT as a Service, the growth of new types of application in existing markets and the new types of technology being employed in software development. An interesting and strategic challenge for us in the Trade Association. This is a good area for me to leverage work done in the ICT Knowledge Transfer Network, which rolls on, unabated, until next March at least! It's still half-time for me! There my responsibilities include a new Autumn Webinar Series launching in September, of which more soon, and a major new TSB competition – which remains under wraps for the moment. We are also still raking over the coals of the Software Engineering and Government Computing group activities, to see where there is still heat and an opportunity to contribute. Open Data, Cloud Standards and "Big Data" look to be areas of interest.

But concerns about skills and capacity are common to both Software Engineering and Electronics. At the recent UK Space Conference, I was invited to a panel session where I pointed out that there are not enough engineers to go round in any of these fields, let alone all three! What do we have to do to ramp up interest and enthusiasm in our younger population, beyond making sure that there are jobs for them when they complete their training? It's been refreshing to see Government interest and concern in this field, but they need our help to determine what steps to take. These problems will not be fixed quickly, though.

A funny season, this, July and August: it used to be very quiet, but now it is assumed that things are moving on, even if there's no-one to do the work and no-one free to be invited to participate. At least the sporting world has been delivering on its promise!

Still, hey-ho, just another 148 (shopping) days to Christmas!

The World keeps a’Changing!

So, my world is changing in interesting ways and I thought I would take to the airwaves to let you know what’s up.

Firstly, I have been appointed Director, Business to Business (B2B) for Intellect, the UK Technology Trade Association. This is a welcome move bringing me properly into the Intellect fold, after more than 7.5 years of being an employee on contract to the Government’s Innovation Programme for Knowledge Transfer Network projects. The good news is that I have brought the agenda from the KTNs into the mainstream and will now get the chance to operate at an Industry level on Cloud, Big Data, IoT and their application into vertical sectors. I’ll retain my interest in Government too, as the Intellect representative to the G-Cloud Programme Delivery Board.

What does this mean for my work in the ICT KTN? Well, half my time will remain targeted at the Cloud and Government IT programme there, which has morphed into three key areas: Open Data, Software Engineering and Cloud Standards. I have also maintained a key interest in promoting the G-Cloud supply side: we're doing work with leading suppliers aimed at raising the visibility of the G-Cloud programme and the capabilities on offer. My involvement in Open Data stems from the paper wot I wrote with Jonathan Raper of Placr in August. This has provided a foundation and agenda for KTN work in this area. The paper is available now on request from me, but should shortly be on the ICT KTN website. We're planning to work with Hadley Beeman at LinkedGov, and our colleagues in the Open Data Institute, via our Technology Strategy Board connections.

I'm also working on a Software Engineering agenda within the KTN, addressing the soon-to-be-published TSB ICT Strategy and its Software Engineering objectives. We've established a Software Engineering Working Group to look at the key areas of interest to the TSB. More on that later. This interest extends to the Multicore KTP programme, a TSB investment in knowledge transfer in the small; and the Energy Efficient Computing Special Interest Group and Competition, which was announced in October and for which we have now run three collaboration events. There's £1.25M available for some imaginative proposals in the area of software engineering and hardware/software integration. See here for more details.

What I am leaving behind is the KTN work on Multicore itself. We have had a stonking year with an excellent webinar series and a large event in Bristol in September. This investment served to identify the major choices for those incorporating multicore in products: either focus on System on Chip designs for high performance, e.g. mobile devices with multiple use requirements (comms, graphics, processing, etc.), or on homogeneous many-core chips for specific use requirements, e.g. x86 in servers, GPUs in graphics devices. In both cases the industry is developing common-sense strategies to optimise performance for larger applications (e.g. a single thread per CPU); much of this is being done in the operating system. There are risks ahead, e.g. memory contention, and challenges in testing and verification, but there are not many areas in which the KTN can practically help now that awareness has been raised and sources of genuine inspiration identified, e.g. XMOS, the University of Bristol's Engineering Programme and TVS, a network of experts in embedded systems testing. We may yet invest some more in Multicore in 2013, but it is early days to decide this; the KTN priorities will not be agreed until February/March. And, by the way, the ICT KTN will now run until March 2014 at the same resourcing levels as 2012/13. Either way, additional investment in this area will be decided by the ICT KTN Software Engineering Working Group in partnership with the TSB.
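To make the "single thread per CPU" point concrete, here is a minimal sketch (my own illustration, not the KTN's or any vendor's code) of the common-sense strategy of running one worker per core so a CPU-bound job does not oversubscribe the chip:

```python
# Minimal sketch: one worker process per core, so a CPU-bound workload
# does not oversubscribe the available CPUs (illustrative example only).
import os
from multiprocessing import Pool

def crunch(chunk):
    # Placeholder CPU-bound task: sum of squares over a chunk of numbers.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    cores = os.cpu_count() or 1          # one worker per available CPU core
    data = [list(range(i, i + 10_000)) for i in range(cores * 4)]
    with Pool(processes=cores) as pool:  # never more workers than cores
        results = pool.map(crunch, data)
    print(f"{cores} workers processed {len(results)} chunks")
```

The same principle is what the operating system schedulers mentioned above try to approximate for general workloads.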

Another interesting development which may well have longer-term consequences is the European Commission's recent announcement of a European Cloud Strategy. This is a three-pronged approach to bringing Cloud computing into the mainstream of EC activity. The prongs include chartering ETSI to build a roadmap towards recommended technical standards; ETSI today majors in telecoms and equipment interoperability (plug tests). They also include chartering ENISA, the European Network and Information Security Agency, to look at voluntary certification requirements for improving trust in the cloud services on offer. Together these proposals are quite a controversial offer; see the concerns of Liam Maxwell, the Deputy Government CIO, here. Some careful oversight will be required to avoid strangling the market through overzealous controls. I will be maintaining my involvement in the ISO SC38 Cloud Standards community for both the ICT KTN and the Cloud Industry Forum, which is a small industry group committed to improving the quality and transparency of cloud service provision. I intend to get involved in the ETSI review of standards too. Note too the publication last week of the UK Government's Open Standards Principles, a policy to be applied across Government procurement.

The second area of interest is in the nature of Cloud service contracts, specifically the terms and conditions and service level agreements on offer. This is a complex area, currently more akin to the Wild West than a business-to-customer relationship. The Cloud Industry Forum is working on some model contracts to help out, but the concerns range from securing contracts which at least balance the rights of the consumer, through to major worries about privacy which are being addressed in the forthcoming Data Protection Regulation. Either way, we have a long list of challenges in the establishment of fair contracts, agreed definitions (e.g. how do you define multi-tenanted?) and privacy concerns.

The third area of interest lies in the worthy idea of sharing services between public sector organisations across the European Union. This is called the European Cloud Partnership and I have to say, it is ambitious in an area where we can’t agree how to share information or services between departments in one nation, let alone streamline a procurement process to purchase services from beyond that nation. Nevertheless, it is a worthy objective. Perhaps the Commission itself could set an example in its own procurement processes?

Finally, I took the opportunity of an invitation to lecture Engineering Doctorate students at York in May to reflect on my 40 years in the ICT industry. A potted version was published in Techweek Europe in September. Enjoy here!

Failures in the new Digital Infrastructure

System failure in the wild at Westfield Shopping Centre (acknowledgement to Martin Clinton)

Just a quick post to get me back in the groove – has it really been six months? As we increase the size and complexity of these digital infrastructures to deliver the services upon which we depend, outages like that at O2 will have bigger and bigger effects on society. Simply handling the complaints well does not do the business. Witness the recent challenges at RBS: from this report it looks like poor control in systems development and implementation. Amazon Web Services has occasional outages owing to the usual factors, but with disproportionate effects; Netflix's rather honest appraisal of the failings of its own system design can be found via this report, which reveals the challenges of designing distributed systems to cope with all failure modes.

The point is that we need to

  • be able to put such failures in perspective. We will always suffer power outages, lightning strikes and flooding, as well as the effects of human error and equipment failure. When you compare the number of failures suffered by Amazon in a year with the aggregated number of systems that they are operating, the failure rate is probably far lower than that of the best data centres in operation.
  • help service providers understand the proven strategies to minimise the risk and impact of such failures – they cannot be completely avoided, only mitigated. For example, Amazon has a very sophisticated set of services to allow its customers to manage failures and deploy their systems across different service centres, but are these services understood and used properly by those customers? While Netflix is pretty slick at deploying its services, including its novel Chaos Monkey service aimed at disrupting live services to demonstrate resilience, it still has stuff to learn about failure modes and their effects, hence its need for more technical staff! A minimal sketch of this kind of failure drill follows this list.
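By way of illustration only (a toy fleet model, not Netflix's actual Chaos Monkey and not Amazon's APIs), the drill amounts to deliberately killing instances and then checking the service still has healthy capacity:

```python
# Hypothetical fleet model: randomly terminate one instance per zone and
# check whether the service still has enough healthy capacity afterwards.
import random

fleet = {
    "zone-a": ["web-1", "web-2", "web-3"],
    "zone-b": ["web-4", "web-5", "web-6"],
}
MIN_HEALTHY_PER_ZONE = 2  # assumed capacity floor for this example

def chaos_drill(fleet):
    for zone, instances in fleet.items():
        victim = random.choice(instances)     # simulate an unplanned failure
        instances.remove(victim)
        print(f"terminated {victim} in {zone}")

def still_serving(fleet):
    return all(len(instances) >= MIN_HEALTHY_PER_ZONE
               for instances in fleet.values())

chaos_drill(fleet)
print("service healthy after drill:", still_serving(fleet))
```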

There are other dimensions to this challenge, of course: the Large Scale Complex IT Systems research project in the UK is looking at the various factors involved in our increasingly layered, distributed systems designs. But there are no easy answers, just better understanding of the challenge. We still need those talented engineers that Netflix and the other Cloud leaders are looking for.

Who’s putting the “I” in CIO?

Just read an intriguing comment in the Civil Society IT Survey for 2012. I quote “… the IT department isn’t going to deliver the back end because it is becoming viable to outsource it, it isn’t going to deliver the front end because you’re going to bring your own hardware… it will focus on delivering the asset that is the information.” The quote is from an IT manager in a mid sized UK Charity.

What makes this an interesting observation? Firstly, it strikes me that our CIOs and IT Directors have been spending far too much of their time concentrating on keeping the lights on in the data centre and ensuring that we have a sense of security surrounding the users and equipment used to access information in those data centres. Traditionally this has led to an 80:20 split for IT leaders: 80% on keeping the IT systems operational and 20% on delivering new services and capabilities requested by the business. If you are in that category, don't be too depressed: at least 80% of IT managers are there too (no data, just the Pareto principle at work)!

However, there were a couple of truisms which struck me forcefully in the quote from someone who clearly "gets" the implications of consumerisation and the cloud. Firstly, that there is an earnest desire in our colleagues to use their own equipment for work and play, wherever they are, and that providing access to work information systems is a capability to be added, not the rationale for handing out more equipment to be carried around. There is the further complication of software applications which are "company standard" and yet not wanted, particularly by the younger members of staff. Why can't we use the software packages we prefer?

Secondly, the provision of services as standard from the cloud is shaking the foundations of the independent software vendor world. It is clear that this charity is not going to invest in building a "me too" system when an appropriate service is available on demand. This immediately brings into question what services must be run in-house and how much capacity is required to serve the business.

To give a real example: Jon Jenkins (@jonjenk), the Amazon CIO, reported to the Amazon Web Services summit last year that he had shifted the whole of Amazon.com to Amazon Web Services in November 2010 and, as a result, he now spent only 30% of his time worrying about IT delivery and 70% of his time on delivering new services. In addition, he reported that his CFO patted him on the back from time to time with the news that the monthly costs had gone down as AWS reduced its prices. Don't forget, Amazon.com is probably the biggest e-commerce platform on the Internet, with massive peaks of activity from hundreds of millions of customers ahead of Christmas. After all, this annual challenge and the dilemma of how to cope was the imperative that drove the creation of AWS in the first place.

However, as the quote at the head of this piece points out, managing the information systems and employee access is only two parts of the job. The third piece is Information.

Managing the Information piece has been largely abandoned in many places over the years. I don't mean that it has not been delivered as a functional requirement. Rather, I mean that we typically have multiple systems, running multiple business applications, with multiple copies of the same data that cannot comfortably be shared or updated in a coherent manner across the business. This leads us into substantial amounts of effort every time someone in the business has a bright idea which should be offered "to our customers", or indeed the customers themselves request that they not be offered anything by the business (a legal requirement in the UK and European Union). It has also led to an explosion of storage in data centres containing eye-watering amounts of redundant data!

Now, in my work with big business, especially in the public sector, this problem manifests itself as multiple systems, purchased by line-of-business managers, each of which has its own database and its own duplication of data, systems and software licensing costs. At the recent Crown Procurement Conference run by the Cabinet Office, Phil Pavitt, CIO for HMRC, stated that he was on a mission to reduce the 900 or so systems run in his (very large) department to a handful running on a few "computers". Since there are only 60M citizens in the UK and all of them are known in that department, one would hope that a focus on information will really repay the effort in terms of a coherent information architecture and derived services, reduced costs and increased efficiency downstream. This is a worthy goal for all of us Information Technologists. After all, the I for Information has always been in the label on the tin!

Obstacles to Cloud Adoption, a starter for 10?

Dear Colleague,

As part of its planning process, the Technology Strategy Board (TSB) is entering into a brief consultation to better understand the reasons that some organisations are deciding not to adopt cloud computing services to meet their information technology needs. To this end, we have prepared a short survey, to be administered by the Information and Communications Technology Knowledge Transfer Network (ICT KTN), aimed at highlighting the key issues, risks and consequences which are driving these business decisions.

In preparation for this, I have completed an analysis to provide context and some definitions. This is attached as a table below. However, what we are really interested in uncovering are the primary criteria for decisions being made and where opportunities may lie for innovation in addressing the concerns raised.

So we have developed a short survey which should not take longer than 20 minutes. We will be planning follow up workshops early in December and will advise on venue and logistics shortly. Please reserve Tuesday, December 6th and Wednesday, December 7th as possible dates should you be interested.

Responses to the survey should be submitted by close of play 2nd December 2011.

Problem Statement:

The adoption of Cloud-based services will accelerate productivity and growth in the UK through the increased intensity of IT services employed directly in the line of business. These services can be obtained dynamically, usually at a low price and with near-immediate availability. The services are targeted at line-of-business managers who are not interested in the provision of IT systems and software, but rather in the availability of services which improve their operational effectiveness and business processes. Examples include the use of Customer Relationship Management services from Salesforce; the availability of office tools from Google and Microsoft Office 365; and applications which automate access to company products and services delivered via smartphones and tablet computers. With the advent of platform services such as Force.com, there are new possibilities to develop and deliver custom cloud services for use within the organisation, as well as beyond it to the organisation's customers.

However, the increase in dependence upon third-party services beyond the organisation's control raises concerns related to governance, security, accessibility, dependability and operational management in a highly scalable environment. What are the concerns expressed by the end users for whom Cloud services are deemed important? How do these concerns affect the productivity of the organisation? What are their impacts on the speed of adoption by end-user organisations?

This investigation seeks to identify and qualify the end-user concerns which are impacting cloud adoption, providing a basis for testing the feasibility of actions which will counter those concerns and mitigate their impact on adoption. Ultimately, if there are areas in which investment can make a difference, we would like to identify them and quantify the expected changes which such investment should make.

Classes of Obstacle (each entry below gives a Description, Impact and Possible Resolution)

Governance
Description: Challenges posed by the widespread use of third-party products and services beyond the control and influence of the organisation; IT infrastructure is not under organisational control.
Impact: The organisation fails to meet external business compliance tests and may lose business accreditation, and thus business.
Possible Resolution: New models for risk management in devolved IT systems; third-party compliance with relevant standards; careful selection of which services may or may not be outsourced.

Data Management
Description: The Data Protection Act and its European equivalent place specific responsibilities upon organisations which maintain personal data; outsourcing this responsibility is a major risk area. There are some specific issues related to the geographical storage of data, which typically depend upon the type of data and its regulation.
Impact: Organisations found to be breaking the law face remedies administered by the Information Commissioner's Office, reputational risk and potential liability to affected parties.
Possible Resolution: Improved understanding of the role of line managers in ensuring data protection requirements are met. Clearer guidelines on geographical requirements for specific classes of data (e.g. Government, Health, etc.). Greater transparency of the operations of Cloud service providers (e.g. declarations made for membership of the Cloud Industry Forum).

Security
Description: Placing company services and data beyond the physical boundaries of the organisation changes the risk profile and its management. Decisions have to be made as to the trustworthiness of suppliers and their capabilities. Specific measures may need to be taken to secure data resident beyond the organisation's control, both at rest and in transit, and extra measures need to be taken to protect against loss of service, service provider and data.
Impact: Failure of a third-party service provider is beyond the control of the end user and hence a major operational risk. Loss of services at any level will impact customer experience, organisational integrity and efficiency; in the worst case an organisation may fail as a result.
Possible Resolution: Measures must be taken to assure the end user of the integrity and operational robustness of the service provider. Data employed must be secured against loss. Alternative service providers may be identified and perhaps even used in parallel. Clear fall-back strategies must be adopted and tested for critical business processes.

Financial Management
Description: The instant-response, "pay as you go" Cloud service model is an attractive option in an industry used to waiting months for new capacity. However, the switch from a capital model of expenditure to an expensed model is not without challenge. How should financial systems adapt to this change of spending pattern? What new approaches and controls are required?
Impact: There are two clear and present dangers in this category. Firstly, spend on ICT is absorbed into the line of business below the radar, using credit cards and expenses, so visibility of IT spend (and governance) is lost. Secondly, the easy availability of Cloud services may lead to an undisciplined increase in IT spend, again without visibility and control.
Possible Resolution: New frameworks are required to ensure that these classes of IT spend are identified and tracked. There are policy impacts on Governance, Data Protection and Energy Use/Carbon Emissions to be accounted for too, and clear policies are required for compliance with the organisation's management system. Finally, a more consistent set of contract practices should be used by service providers.

Network Access
Description: The ability to employ third-party services within the organisation is critically dependent upon internet access and sufficient bandwidth to maintain service. While this is not a unique requirement for any business using IT services to meet customer needs, the outsourcing of a wider range of services may increase this dependency and potential vulnerability.
Impact: An inability to access an organisation's services will typically result in loss of business, or loss of reputation and increased customer dissatisfaction. If an organisation is dependent upon such IT services, the consequence may be a partial or total loss of productivity, and ultimately the failure of the organisation.
Possible Resolution: Strategies for assuring access bandwidth and capacity are well understood in the industry today. Placing key business services in key parts of the communications infrastructure has been part of the consumer service model for some time. Considering these choices, and redundant alternatives, may well become vital for a Cloud-based organisation.

Service Availability
Description: A huge advantage of Cloud computing is the ability to provide scalable access to services using shared infrastructures, often on demand and without reservation. This allows unexpected peaks of demand to be met without substantial cost.
Impact: This is a crucial added benefit of Cloud-based services, but failure to adapt to peaks of consumer demand will simply result in a loss of business and/or reputation. Poorly designed applications which cannot scale will also incur this penalty.
Possible Resolution: Strategies for designing robust deployments which can scale to meet patterns of demand (up and down) are key here. Selection of multiple service providers to increase robustness may also be relevant. See Application Scaling below for more information.

Application Scaling
Description: Where organisations choose to deploy their own services to the cloud, it is crucial that they design them in such a way that they can scale up and down in a controlled fashion. The design of highly scalable applications is a new art form in the software field and, combined with the requirements for robustness and resilience in using large-scale commodity infrastructures, this is a new frontier for design and testing.
Impact: Applications which fail to scale to meet consumer demand will lose business and/or reputation. Applications which fail to manage the resources they deploy in a controlled fashion will lose money for the owning organisation. This is fast becoming a critical point of concern for operating cloud service providers, with failures being highlighted in the media every week.
Possible Resolution: There are emerging strategies for scalability, but these are new and the tools and techniques are not yet robust enough for broad adoption. Combined with the need to plan a resilient and dynamic implementation on infrastructure and platform service providers, this is a major cause of uncertainty for prospective cloud service providers. A minimal sketch of the kind of scale-up/scale-down rule involved follows after this table.

Trust
Description: At the heart of the use of the Internet lies the proposition that the prospective buyer knows who the service provider is and can build a trustworthy relationship with them. However, the tools and methods for assuring identity tend to be one-way (from user to service provider) and not yet authoritative. In addition, the ability to understand who the supplier is, where they are based and how they manage their business is not yet mature.
Impact: At its worst, doing business with a service provider which is not trustworthy is a major risk to a business. The ability to discover key information may be compromised in several ways, and personal, customer and business information can be abused rapidly on the internet. However, not doing business using the Cloud and Internet may result in a loss of custom and eventually of the business. A rock and a hard place!
Possible Resolution: The careful selection and maintenance of cloud service providers is a crucial requirement. Controlling the information supplied to service providers is also important in restricting the level at which individuals and organisations may be compromised. New frameworks for Identity Assurance are proposed that build upon established models in the marketplace; it is to be hoped that these will make a difference.

Standards
Description: A facet of a commoditising market is the emergence of standards which govern the delivery of services. These standards enable competition and give confidence to the user that they have a choice in supply, which will drive down price and drive up efficiency, without the risk of lock-in and the price gouging which may result.
Impact: In the worst case, where no standards exist, the user is completely dependent upon third-party providers to deliver their services. This opens the possibility of high prices, increasing costs and an inability to switch to alternative suppliers. Changes may also have to be paid for as custom requirements, furthering the locked-in situation.
Possible Resolution: There are 25 standards bodies currently engaged in Cloud Computing, yet the key areas of user interest are few: interoperability, portability and open APIs, to name a few. In fact, many of these needs can already be met by a cluster of industry suppliers (e.g. AWS, VMWare, Microsoft, Eucalyptus and others), and there is also an OpenStack initiative which will address some of these needs too. More visibility is required.

IT Skills
Description: The emergence of Cloud Computing and IT as a Service marks a maturing of the IT industry and a shift towards commoditisation. With service providers claiming large economies of scale, the case for building a service in-house is being rapidly eroded. Where does that leave the IT professionals in the organisation? Their skills have typically been focused on making IT work; now the challenge is procuring IT services in a systematic and well-judged manner.
Impact: At worst, the lure of Cloud services can lead to an injudicious reduction of IT staffing and skills. The folly of this is that the adoption of Cloud services is a critical time to ensure alignment with business and service architecture models, with the most important aspects managed to meet the business requirements. At best, the IT department can be marginalised and bypassed as line-of-business staff buy their own. Neither state is desirable.
Possible Resolution: What is needed is a dispassionate analysis of the key processes, services and skills required by the business, followed by redeployment, retraining and realignment of resources to manage the transition and then the future business state. This is not a regular add-on to department needs; it is a major change project. If successful, the IT team switches from 80:20 keeping the lights on to 20:80 innovating on behalf of the business!
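To make the Application Scaling entry above concrete, here is a minimal sketch of a threshold-based scale-up/scale-down rule; the utilisation figures, thresholds and instance counts are all assumptions for illustration, not any particular provider's API.

```python
# Minimal sketch of a threshold-based scaling rule (illustrative assumptions only).
def desired_instances(current, avg_utilisation, low=0.30, high=0.70,
                      minimum=2, maximum=20):
    """Return the instance count a simple controller would ask for next."""
    if avg_utilisation > high:          # demand peak: scale out
        return min(current * 2, maximum)
    if avg_utilisation < low:           # quiet period: scale in, keeping a floor
        return max(current // 2, minimum)
    return current                      # within band: hold steady

# Example: a fleet of 4 instances at 85% average utilisation doubles to 8,
# and a fleet of 8 at 20% utilisation halves back to 4.
print(desired_instances(current=4, avg_utilisation=0.85))  # -> 8
print(desired_instances(current=8, avg_utilisation=0.20))  # -> 4
```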

The Standards Man Cometh …

The title makes reference to the classic Flanders and Swann song, "The Gasman Cometh", in which a series of tradesmen ensure through systematic incompetence that the cycle of domestic repair work is never-ending and "it all makes work for the working man to do!" This thought struck me as I sat in the International Cloud Symposium at CA's Ditton Manor last week. It was a joint event hosted by CA and OASIS, aimed at bringing together the world's standards authorities and government officials to consider what standards are required to accelerate cloud adoption. Representatives of 25, yes "twenty-five", standards bodies with an interest in Cloud attended, and we heard about the work of the World Economic Forum and ENISA on their analyses of the benefits and obstacles to cloud adoption, followed by a set of sessions each themed to address key areas. These included Governance, Security, Identity, Privacy, Legal, etc. The conference was well attended, towards 100 people, including the National Institute of Standards and Technology, the European Commission and many leading vendors. There were several excellent sessions engaging Government representatives to present their views and situations with quite honest and objective appraisals. It was equally bold that the organisers allowed tweets to #intcloudsym to be displayed during interactive sessions too – always a potential minefield, particularly in large assemblies.

However, the session which really caught my attention was Wednesday afternoon's session on Public Sector Clouds: Constraints & Requirements, facilitated by Bob Marcus, a champion of Open Standards (see Cloud-Standards.Org) working with NIST for the Federal Government. The key take-away from representatives from the US, UK, China, Singapore, India and the European Commission is that they are roaring away into the distance with little attention to the rear-view mirror. When prompted (by me) on what they would ask of the 25 standards organisations present within the next two years, their answers were improvements in portability (mentioned twice), interoperation, geographical storage and some form of trust scale for measuring suppliers in terms of security. The one exception was a request for harmonisation of standards across Eastern and Western markets.

The bottom line for me? The Cloud marketplace is close to firing on all cylinders. It's not just the private sector: Governments from around the world are forging a path forward to deliver their services and, while, yes, there are some concerns, you can bet your life that no-one is going to wait two years for a solution. Yes, we would all like interoperability and, in the words of an unnamed US CIO: "I will adopt cloud when I can be assured that I can switch to a new supplier in a week". The UK Government would add: "and at no cost to the taxpayer!" While these are perhaps ambitious goals, they set the tone looking forwards. What price a standard within two years which will solve that problem? In the meantime people will make procurement decisions in the same way as usual and happily purchase services from someone who will apparently meet their immediate needs, while perhaps waiting for a better solution to drive by.

I was left contemplating the nature of the standards industry as a whole. Could it be that standards are really best delivered at the technical level as building blocks, and perhaps should they be derived from the best-in-class services on offer?
I heard one influential opinion, from the twittersphere, suggest that standards bodies would be better off looking for best-in-class services, obtaining an open source software reference implementation and documenting that as a standard. In a fast-moving marketplace, which computing usually is, standards mostly take the back seat if they are not required up front for operation. A good example of this process of adoption is the AMQP story, now being adopted in OASIS as a standard for enterprise messaging based on a successful track record in operation. This is in fact a UK success story, with Chris Swan, now at UBS, originally highlighting the need for a solution in Financial Services systems and Alexis Richardson, through RabbitMQ, now a subsidiary of VMWare, taking up the baton.

Perhaps the future for Cloud Standards depends more upon the reference implementations of EC2, S3 and other services being offered up. After all, the leading cloud service providers make full use of open source implementations of key components in their systems, e.g. Xen, Linux, etc. We've seen some good work in the open virtual machine market, there is already a large amount of interoperability at that level between the key infrastructure players, and now OpenStack looks promising – albeit an industrial organisation based around a few vendors. Another promising approach is being followed by the Open Data Centre Alliance: this time a large number of end users are holding the reins and aggregating their own needs to influence the supply community. It would be good to return the favour and short-circuit the international standards industry with its geological timescales for change! After all, Cloud waits for no man …

Multicore to Manycore, A good idea or two?

I enjoyed an email conversation with an old inspiration of mine, Tom DeMarco, earlier in the week. Facilitated by a mutual colleague, Tom and I discussed the potential for success with multicore in a world dominated by serial computer programmes and single-thread assumptions. During the brief exchange Tom reminded me of the old days of objects and object-oriented programming, e.g. Smalltalk and things of that ilk. The essential premise was that objects interacted through a predictable set of commands and communicated through simple messages on demand. While initially envisaged as a means of interaction with a human user, the ability to preserve state in an object and interact dynamically provides a new means of computing. This could therefore become a strong foundation for an adaptive and inherently parallel world of computing. The thought struck me powerfully as a more effective means of parallelising our world, and one ideally adaptable to new devices with 100s if not 1000s of processors available.

Then I stopped to think more about the world of services and remembered the model and architecture being implemented at Amazon Web Services today. It is not object-oriented in the sense described above (services must be invoked for a purpose), but it is very much service-oriented, providing a mechanism that allows services to interoperate to deliver information to the user in a synchronous way. Perhaps this is the architecture we should be adopting for future manycore systems? The Cloud architecture. Surely the problems are similar in nature, if not in scale. Who out there is able to think this through further?
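As a thought experiment only (not Smalltalk, and not Amazon's architecture), here is a minimal sketch of the message-passing premise: small workers that preserve their own state and interact solely by exchanging messages, which is the property that makes the model naturally parallel.

```python
# Minimal sketch of message-passing "objects": each worker owns its state and
# is driven only by messages on a queue, so many can run in parallel safely.
from multiprocessing import Process, Queue

def counter(inbox: Queue, outbox: Queue):
    total = 0                       # private state, never shared directly
    while True:
        msg = inbox.get()
        if msg == "stop":
            outbox.put(("final", total))
            return
        kind, value = msg
        if kind == "add":
            total += value
            outbox.put(("total", total))

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    worker = Process(target=counter, args=(inbox, outbox))
    worker.start()
    for n in (1, 2, 3):
        inbox.put(("add", n))       # interact only by sending messages
        print(outbox.get())         # ('total', 1), ('total', 3), ('total', 6)
    inbox.put("stop")
    print(outbox.get())             # ('final', 6)
    worker.join()
```

Scale the picture up from two queues to thousands of such workers and you have, in miniature, the adaptive, inherently parallel world described above.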

Meanwhile I am retiring to re-read the Smalltalk-80 "Blue Book", a foundation document for our early interests in object-orientation at the HP Office Productivity Division, located at Wokingham, during the early 80s. It was there that we discovered Smalltalk and the work of the Xerox Palo Alto Research Centre, which influenced us to develop a prototype of an Advanced Office System to replace the HP Mail system then entering service on the HP 3000 and other platforms. It is strange to think that I cycled past the Xerox PARC offices every day on my commute to 1501 Page Mill Road, where I started my HP career in Palo Alto. The building was small and always looked empty. I discovered in the years that followed that Xerox PARC had a major new centre to the west of Foothill Expressway, in the foothills. This place was the cradle of so many modern computing innovations in the 70s and early 80s. Windows, icons, mice and pointer displays; local area networking; laser printers; and the Dynabook (perhaps now instantiated as the MacBook/iPad) all had their roots in PARC. Most of this has now seen the light of day through the exciting growth and innovation at Apple – an early adopter of these ideas. Radical indeed for a 1970s home PC hobby business.

The UK Government ICT Strategy – Whither G-Cloud?

The Coalition Government published its ICT Strategy on March 30th 2011. This document brought up to date the plans for Central Government to address the Efficiency & Reform Agenda put in place following the change in regime in May 2010. The Government inherited an ambitious legacy plan, published in March 2010, which centred around the Government Cloud, or G-Cloud. This plan was in three phases: data centre consolidation, shared services across government departments, and an Application Store for civil servants to gain access to the services available to them in their role. Note that there was no explicit plan for a government cloud, or infrastructure as a service. Rather, there was work done on an architecture which could be converged upon as new services were introduced, thus allowing sharing of infrastructure across government in the future.

The updated ICT Strategy capitalises on this work but is focused more directly on savings which can be made in the short term. These savings are concentrated in the area of data centre consolidation. Central Government's existing estate amounts to a modest number of servers, spread across a large number of locations and typically under-utilised by modern standards. Therefore the first step in implementation of the ICT Strategy is to consolidate this estate and increase efficiency. In doing this, changes will be required in the standard risk assessment policies operated by Government Departments. The policy framework is owned by CESG in Cheltenham and describes the rules against which risks must be assessed and with which systems must comply. The role of the Senior Information Risk Owner in assessing risk will change as data is stored in larger, shared facilities. The standard rules will need to adapt to best industry practice, which may eventually incorporate public cloud provision. One specific concern relates to the location of storage: Government data is often required by law to be stored in the UK, yet most major cloud service providers are multinational and offer replication for information security, and replicas can often be stored in other legislative domains.
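To illustrate the storage-location concern (a hypothetical policy check of my own, not any department's or provider's actual tooling), the test a risk owner needs automated is essentially this:

```python
# Hypothetical data-residency check: verify every region a provider replicates
# to sits inside the jurisdictions this class of data is allowed to occupy.
ALLOWED_JURISDICTIONS = {"UK"}   # assumed policy for a class of Government data

provider_replication = {         # illustrative provider metadata, not real figures
    "provider-a": ["UK"],
    "provider-b": ["UK", "Ireland"],
    "provider-c": ["UK", "US"],
}

def residency_compliant(regions, allowed=ALLOWED_JURISDICTIONS):
    return all(region in allowed for region in regions)

for name, regions in provider_replication.items():
    status = "OK" if residency_compliant(regions) else "replicates outside policy"
    print(f"{name}: {status}")
```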

The Application Store is next up: a portal which offers dynamic provision of services to authenticated users. These services are expected to be provided at the current lowest cost across Government, with new services and applications made available to the civil servant through a dynamic marketplace accessed via the Apps Store portal. Delivery of the Apps Store will depend upon convergence in identity and authentication services across government, effective service catalogue management (including service integration), and virtualisation to allow dynamic provisioning. However, the ability to dynamically scale services, achieve convergence on successful services and manage service delivery costs effectively offers much value to Government. The potential is that the Apps Store mechanism will expand across all branches of government, Central and Local, and potentially to the citizen too, as the provision of services expands either "in-house" or in the marketplace for privately offered services operating on open data sources. This would provide an excellent mechanism for small and medium-sized businesses to offer their services into government, thus accelerating the pace of change and innovation to meet the goals of the Efficiency and Reform agenda.

Finally, attention is needed to the procurement process in order to facilitate the inclusion of SME businesses in the provision of services to government. This ICT Strategy will certainly open doors for smaller businesses to introduce their services, but the current heavyweight procurement processes will prevent them doing so without change. More work to be done there!

For more information see http://www.cabinetoffice.gov.uk/resource-library/uk-government-ict-strategy-resources

Multicore – Beyond comprehension?

I attended an interesting session at the BCS last night. My colleague Peter Dzwig, Chair of the BCS Distributed and Scalable Computing Specialist Group and a partner in Concertant, LLP, gave an update on the state of the development of multi-core computing. The bottom line, for those who would read ahead, is that our colleagues in the chip design industry are packing as many transistors as possible onto dies and plan to release "chips" with 10s and 100s of processors available for programming in the next couple of years. In fact this is not new news. The innovation cycle in the chip design business is between 10 and 15 years long, with R&D demonstrating the feasibility of small-dimension features (now down to 45 nanometres in production and 22 nm in process) ahead of the packaging and process innovation that leads to the new designs appearing in your server.

The difficulty is that the software side of the equation has not kept pace with this level and pace of innovation. It is indeed true that the number of transistors being packed onto dies is keeping pace with Moore's law, and yet the software we use to liberate this power is lagging behind the curve. In fact, our contemporary approach is to treat a core as a virtualisable entity and load it up with a number of stacks in the hope that this will achieve a measure of scalability. The challenge of this approach is that the shared memory on a multicore chip is limited and our hunger for memory, now into the gigabytes, rapidly outstrips it; thus the "scaled out" applications do not run at speed, and may not run at all! Alternative strategies call for operating systems to allocate tasks heterogeneously to the cores available to give us a measure of parallelism. The difficulty there is that, outside servers, the operating systems in the market today do not do this very well (or at least not visibly), and a quad-core processor installed in your workstation is unlikely to provide a linear speed-up unless you have some purpose-built applications which can take advantage of the power available.
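A rough illustration of why the quad-core workstation disappoints: Amdahl's law caps the speed-up by the serial fraction of the program. The sketch below assumes, purely for illustration, that 80% of a workload can run in parallel.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the program and n is the number of cores.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 80% of the work can run in parallel (an illustrative figure).
for cores in (1, 2, 4, 8, 100):
    print(cores, "cores ->", round(amdahl_speedup(0.8, cores), 2), "x")
# 4 cores give ~2.5x rather than 4x; even 100 cores cannot exceed 5x at p = 0.8.
```

Even a perfect scheduler cannot beat that ceiling; only restructuring the software to raise the parallel fraction can.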

Now, this is not a new rendering of the problem. In fact the august IEEE Computer Society, of which I am also a member, announced in its Outlook 2011 edition of Computer that there was an increasingly urgent need for new approaches and tools for programming these scalable computing devices. In fact, you might also make the same assertion about programming for the Cloud! Much the same challenge exists when we seek to execute programmes in parallel to improve throughput and/or performance. Where is the understanding that can help us begin to grasp this nettle and move forwards?

I would point in two directions for at least some insight. Firstly, I attended the Amazon Web Services Summit 2011 in London last week. We were addressed by the mighty Werner Vogels, @werner, CTO of Amazon.com, amongst many others. Werner drew our attention to the evolving nature of the services available on the AWS cloud platform. These services are typically simple components with which one can build a resilient and scalable implementation of the application and services to be delivered. The guiding principle of their design is that they are simple. They have eschewed the niche opportunities that exist in myriads in our industry to ensure that the services developed meet the objectives of the developer and those responsible for operation. An industrial design model for the future, perhaps. Proof of the pudding lies in the fact that Amazon.com switched off its last dedicated server for its online services last November. The services provided are now entirely served by AWS, and the peaks of utilisation have been comfortably managed thus far by the infrastructure. They estimate that this has saved them about 79% of their peak infrastructure costs – had they built their own! Do recall that this amounts to hundreds of millions of customers generating billions of transactions, much of it happening at the same time in December!

Another source of insight comes from the diehard parallel programming scholars, from where Peter Dzwig originally hails. This is a forgotten community of innovators and sceptics who have carried the vision and light from earlier movements (e.g. the Transputer, Functional Programming, etc.) into the modern world of C++, Java, Fortran and Excel programming. These elders of the field understand the fundamental nature of threads and their executability, of data flow and of rules-based programming. Either way, it is clear that our existing passion for serial lines of logic cannot be sustained in this increasingly parallel world. We need to get back to basics and think about transformations, and about entry and exit criteria, to allow our computing infrastructures to be fully utilised.
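In that back-to-basics spirit, here is a minimal sketch (my illustration only) of recasting a serial loop as a pure transformation with clear entry and exit criteria, so the same logic can be applied across a pool of workers without change:

```python
# A serial loop recast as a pure transformation: because transform() holds no
# shared state, the same code runs unchanged across a pool of worker processes.
from multiprocessing import Pool

def transform(record):
    # Entry criterion: a raw integer record; exit criterion: its processed value.
    return record * record + 1

records = list(range(1_000))

if __name__ == "__main__":
    serial = [transform(r) for r in records]          # the familiar serial form
    with Pool() as pool:
        parallel = pool.map(transform, records)       # the same logic, in parallel
    assert serial == parallel
    print("results identical:", len(parallel), "records")
```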

Along these lines the ICT KTN is running a couple of events in the next few months which seek to examine this. The first is in Edinburgh on Tuesday, June 28th. See below!

Scalable Computing, a new Dawn? This event, to be held at Heriot-Watt University on Tuesday June 28th starting at 12:15pm, will include a free lunch, a glass of wine, and much talk of things parallel, from Professors Greg Michaelson and Phil Trinder of the University and colleagues from Freescale, CriticalBlue and Cloudsoft. The objective is to review the advancing world of multi-core and distributed (Cloud) computing from the perspective of tools and techniques for parallel computing. It's free, and further information and registration details can be found here.

Cloud, Government, Standards and Sustainability

Welcome to my first post to a new blog for a new project. On Friday, April 1st, 2011, after luncheon (and thus after April Fools risks) we launch a new Knowledge Transfer Network (KTN). The new KTN is an upgrade of the previous Digital Communications and Digital Systems KTNs funded by the UK Government through its innovation agency, the Technology Strategy Board. The _connect websites for the KTNs can be found here.

My role in the new KTN is to lead in the area of Enterprise Computing. Topics included in this list to start with are: IT as a Service, Government Computing (Platform/Service) and Scalability. Scalability is the process of taking systematic advantage of scalable resources in Cloud or multi-core infrastructures. Along the way I will also keep an eye on developments in the Green IT space, via colleagues in the BCS Data Centre Specialist Group and their excellent work as part of a global network of innovators, including the Green Grid and an international standards body for defining metrics.

The challenge here is two-fold: firstly, making more efficient use of the power deployed in delivering computing services and, secondly, improved use of computing services to minimise carbon expenditure. Both of these look forward to the 20:20:20 carbon emission targets set by the UK Government. I think that these targets are a genuinely useful filter to apply to advances in the industry as a whole. If you combine the target of doubling compute power each year with a 20% reduction in the power consumed and emissions generated by 2020, that should do the trick – as a start.

However, the main purpose of this first post is to reflect on the last 18 months of work in the Cloud and Government areas and offer some pointers to the opportunities and challenges that lie ahead. Most of my work has been in the context of the Digital Systems KTN now coming to a close. We have looked widely at the concepts of cloud computing, potential for implementation and some genuine user case studies. All of which can be found on the KTN website. As of April 1st, I would contend that most attendees at events to which I have contributed recently are ready to explore adoption, or are already on their way.

Examples include:

  • organisations using the cloud to deliver their software as a service;
  • organisations using cloud services to supplement or replace existing in-house business processes;
  • organisations looking to introduce cloud services to underpin other users. This includes Infrastructure service providers and the emerging platform service providers.

The balance towards adoption is demonstrably tipping and we will move forwards by focusing on practical activities to allow potential adopters to meet and learn from other users and their suppliers.

Nowhere is the potential richer than in Government or in the market for small and medium-sized enterprises (SMEs). A year ago the last UK Government concluded work on the G-Cloud, an innovative strategy for changing the way that government services may be delivered, with Cloud computing at its core. The documents from that project have been made available here and there is much of interest for students of government computing around the world.

The challenge in both cases is to find that judicious balance of bought-in services versus unique contributions which makes the adoption of cloud anything but a trivial exercise from the IT side. There are risks in Security (though perhaps not the ones the usual assumptions suggest), Supplier Dependability, Network Access and Scalability. There are also complications around service level agreements and billing. All of which we will highlight in our events and activities to help ensure better advice and understanding for users and suppliers alike.

Related to this is my interest in open standards, which in Cloud computing terms are usually employed to describe standard application programming interfaces (APIs) which may be used to allow multiple sources of supply and thus avoid "lock-in". It's early days here, but the work underway at Cloud-Standards.Org, the Open Grid Forum (OGF) and OpenStack is all aimed at moving the industry forwards in this area. A European-funded initiative, Siena, is carrying the torch for the European Commission, which is waking up to the challenge and opportunity.

Scalability is a work in progress, growing in importance and in understanding at the same time. In the next few months we will be working to bring insight to this area with colleagues from the industry leaders and from our academic friends. My colleagues at the BCS Distributed and Scalable Computing Specialist Group are keenly interested in this topic, while growing their new community too.

Underpinning all of this is the Knowledge Transfer agenda: the idea of bringing together those with an opportunity for innovation and those with the ability to help deliver business success. We will continue to use the tools that we know work well around the nation: face-to-face events, speaking slots at industry conferences, webinars for wider access and reference, and case studies demonstrating success "in the wild". It is the members of the KTN community who are key to this. Please bring us your thoughts, interests and challenges, and get involved. Membership of the ICT KTN is free and you can register for all the KTNs on the TSB _connect platform, an innovators' network where you will meet like-minded professionals from all walks of work life.