The Alchemy Store: reflections on AWS product announcements

Published on March 27, 2018

At the end of last year, Amazon Web Services announced a dizzying range of new products, which were presented to the London investor community during a recent blizzard. The background symbolism of the city transfixed by a once-in-a-generation snowfall was nicely equivocal: chilly disruption, but also stunning transformation, creating new possibilities.

Entrepreneurs and investors accordingly need to keep an eye on the AWS cloud, not just to avoid being Amazoned, but also, more positively, to be quick to see possibilities for entirely new categories built from its primitives. Abstracting over the sheer quantity of new products, AWS evidently has significant ambitions to supply technology creators - its customers - with an ever wider array of services.

These can be condensed into a few overriding trends: development inside AWS, serverless and containerization, machine learning, and voice interfaces. How these trends connect with one another indicates the company's majestic emergent intelligence. This dawning intelligence is more general than narrow: the compositional connectivity between these new products, combined with the old, helps to explain both their genesis and the value they provide as a set to builders, who get everything they want, as well as what they didn't know they wanted, all in one place.

AWS is able to create retention not by lock-in but by an ever-increasing, scale-economized aggregation of utility. This utility lies not just in the services themselves, but also in the network between businesses on the same cloud infrastructure. Faced with this blast of powerful new products across every major computing category, startups can still create massive value. They will do this most easily by building at a higher level of abstraction than the AWS primitives, but also possibly by fundamental innovation at a lower level, as long as they treat AWS as part of the weather and plan accordingly.

Development inside AWS

Historically, AWS tended to be somewhere you ran software written elsewhere. Even if you actually developed on a rented instance, the most important user interface to that process, the integrated development environment (IDE), would be provided by a third party. Now AWS offers its own IDE, Cloud9, built on technology from the Dutch startup of the same name acquired in 2016; it runs in the browser, supports more than 40 programming languages, and charges only for the resources used to run and store the code. This IDE caps what is now a full stack of development services, from the compute and storage services that will run the code in production, to source control and devops tools (CodeCommit, CodeBuild, CodeDeploy), with a terminal inside Cloud9 for accessing those resources. The trend is towards AWS completely owning the development process: the more blocks fall into place, the more marginal the differences between a specialized tool outside AWS and the AWS version become, when set against the financial and procedural efficiencies of staying put.
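
To make this concrete, the same SDK that deploys your code can drive the rest of the toolchain. A minimal sketch with boto3, where every repository and project name is a hypothetical placeholder:

```python
import boto3

# Hypothetical names throughout; a sketch of driving the AWS
# development toolchain from code rather than from the console.
codecommit = boto3.client("codecommit")
codebuild = boto3.client("codebuild")

# Source control: create a hosted Git repository.
repo = codecommit.create_repository(repositoryName="my-service")
print(repo["repositoryMetadata"]["cloneUrlHttp"])

# CI: kick off a build in a pre-configured CodeBuild project.
build = codebuild.start_build(projectName="my-service-build")
print(build["build"]["id"])
```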

"Inside AWS, we're excited to lower the costs and barriers to machine learning and AI so organizations of all sizes can take advantage of these advanced techniques." Thus spoke Bezos, in his shareholder letter last year; one of the most interesting ways in which this was borne out was at the platform layer, with the machine learning development tool SageMaker, which provides end-to-end ML workflow without infrastructure. Created in part by legendary researcher Alex Smola to democratise ML on the premise that "[y]ou can write a paper about something, but if you don't build it, nobody will use your beautiful algorithm", it provides hosted Jupyter notebooks, preloaded with standard libraries, one-click distributed training with automatic cluster tear-down, built-in infrastructure-optimized algorithms and, delightfully, automatic hyperparameter optimization. Not an IDE, it allows developer-alchemists to tinker with elements of a machine learning process at GUI level, while remaining fully integrated into the elastic, secure and scalable, one-click deploy AWS environment.

Should data science tooling companies just give up, then? Not necessarily. AWS still holds tight to the philosophy of the "primitive" computational building block, which Bezos famously found in Steve Grand's Creation, a book about the design of an evolutionary system to generate artificial intelligence. What this means is that there are, in theory, certain levels of software complexity to which AWS will not ascend. In the data science workflow, for instance, the crucial data preparation phase, which a 2016 survey found to take up to 80% of a practitioner's time (60% of the total going to data wrangling, i.e. cleaning, reshaping and otherwise organizing the data), lies outside the purview of AWS tooling. If these stages are not found in SageMaker's 'end-to-end' process because they violate the primitive principle, AWS is more than happy to let companies be built on its infrastructure and marketplace to solve this more "civilized" problem. Dataiku (a Paris-based startup that Mosaic met early in its life) and Trifacta are just two examples of rapidly growing startups taking this approach in this particular space.

Serverless

We heard the word 'serverless' a lot. Of course there are servers; you just can't see them, just as there is nothing nebulous about a data centre. Practically, from the developer's perspective, the formerly semi-abstracted cloud server is now abstracted away entirely into blissful ignorance, leaving a deliciously pure focus on building the actual functionality. AWS' first offering here was Lambda, introduced in 2014, whose popularity indicates the serverless model's many potential benefits to productivity, scalability and the bottom line (not just the pay-as-you-go billing, but also the absence of any operating system to patch and maintain). In addition to Cloud9, which you could think of as a serverless IDE for building serverless applications, Amazon introduced the Serverless Application Repository on top of Lambda, an open-source pattern phrasebook for serverless apps which developers contribute to as well as extract from; there are also private repos for intra-organizational function reuse. Although the ostensible purpose of this repository is to encourage the development and consumption of serverless apps, given the endemic reinventing of wheels in software engineering, one might see it as a first experiment in solving that problem more generally.
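
To see how little is left to manage, here is a minimal sketch of a Lambda-style function and its invocation via boto3; the function name is hypothetical:

```python
import json
import boto3

# The entire "server": a handler function, deployed to Lambda. AWS
# provisions, scales and bills per invocation; there is no instance.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"hello": name})}

# Invoking the deployed function (name hypothetical) from anywhere:
lam = boto3.client("lambda")
resp = lam.invoke(FunctionName="hello-fn",
                  Payload=json.dumps({"name": "alchemist"}))
print(json.loads(resp["Payload"].read()))
```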

We also met AWS' first serverless database, Aurora Serverless, which applies the elastic provisioning model long used in pure compute. Such a database could be very interesting for use cases where application usage varies widely.
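
A sketch of what provisioning such a database looks like through boto3's serverless engine mode (all identifiers and credentials below are hypothetical): capacity floats between bounds and can pause to zero when idle, so a spiky workload pays nothing for quiet periods.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers; a sketch, not a production config.
rds.create_db_cluster(
    DBClusterIdentifier="spiky-app-db",
    Engine="aurora",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="replace-me",
    ScalingConfiguration={
        "MinCapacity": 2,
        "MaxCapacity": 16,
        "AutoPause": True,             # scale to zero when idle
        "SecondsUntilAutoPause": 300,
    },
)
```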

Containerization

A similar model for provisioning containers without managing servers or clusters arrived as Fargate. In tandem, the announcement of the Elastic Container Service for Kubernetes (EKS) extends AWS' container orchestration capabilities to cover the hugely popular open-source Kubernetes, notoriously tricky to configure and manage. Strategically, this is telling: Amazon had initially tried to compete head-on with the Google-created, infrastructure-agnostic framework via its locked-in ECS service. With this move, in combination with the Fargate tooling, AWS has instead made it much more likely that developers, who can migrate apps currently running on-prem or in another cloud without changing any code, will choose to deploy their Kubernetes clusters within its ecosystem. Welcome to the Hotel Amazonia: you can check out any time you like, but you can never leave.
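
A sketch of the Fargate model via boto3, with hypothetical task and network identifiers: a container runs with no instance or cluster capacity to manage.

```python
import boto3

ecs = boto3.client("ecs")

# Task definition and network identifiers are hypothetical.
ecs.run_task(
    cluster="default",
    taskDefinition="my-service:1",
    launchType="FARGATE",          # no EC2 instances to provision
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```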

Machine learning

Entrepreneurs and investors may grow weary of the phrase 'machine learning', but for AWS it really means something: massive investment at all three core AWS abstraction layers (infrastructure, platform and application), as well as at the IoT edge. Amazon's overall AI flywheel benefits from making its tools available to customers, who pay it back in cash and in data.

At the infrastructure layer, we now have EC2 P3 instances, featuring up to 8 NVIDIA Tesla V100 GPUs, up to 1 petaflop of mixed-precision compute and 300GB/s GPU-to-GPU communication; optimized for distributed deep learning, I was simply told "you can't train faster than this". For serving trained models, there are also new compute-optimized CPU instances, in particular the C5, offering 25-50% price/performance improvements over the C4.
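
Renting this supercomputer is a single API call; a sketch with boto3, where the AMI is a hypothetical stand-in for one of the Deep Learning AMIs:

```python
import boto3

ec2 = boto3.client("ec2")

# AMI ID is hypothetical; in practice, one of the Deep Learning AMIs.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="p3.16xlarge",   # 8x Tesla V100 with NVLink
    MinCount=1,
    MaxCount=1,
)
```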

The main platform development is SageMaker, discussed above.

At the application layer, AWS is making leading-edge ML functionality available to people entirely innocent of ML training. For instance, the existing Rekognition service can now process video as well as images. Given an image or video via an API call, the service identifies objects, people, text, scenes, activities and content types, and comes with built-in celebrity recognition.
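
A sketch of both modes through boto3, with hypothetical bucket and object names: image analysis is one synchronous call, video analysis an asynchronous job.

```python
import boto3

rek = boto3.client("rekognition")

# Image analysis: a single synchronous call.
labels = rek.detect_labels(
    Image={"S3Object": {"Bucket": "my-media", "Name": "photo.jpg"}},
    MinConfidence=80,
)
print([label["Name"] for label in labels["Labels"]])

# Video analysis: start a job, then fetch results once it completes
# (in practice, poll or subscribe to a completion notification).
job = rek.start_label_detection(
    Video={"S3Object": {"Bucket": "my-media", "Name": "clip.mp4"}}
)
results = rek.get_label_detection(JobId=job["JobId"])
```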

At the edge, Greengrass does ML inference on-device, running models trained elsewhere. For example, a sensor-laden mining truck autonomously runs a Greengrass gateway, which interacts with each of the sensors and makes predictions about the mine environment by running the sensor data through its pre-trained model; at a certain point in the mine the gateway connects to wifi and decants its predictions to the cloud, where a decision can be made.
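
A sketch of what such a gateway function might look like with the Greengrass Core SDK; the model and topic here are hypothetical stand-ins for the mining example.

```python
import greengrasssdk

class _StubModel:
    """Placeholder for a model trained in the cloud (e.g. with SageMaker)
    and deployed to the device; the real thing would be loaded from the
    local filesystem in whatever framework it was trained in."""
    def predict(self, reading):
        return 0.0

local_model = _StubModel()
client = greengrasssdk.client("iot-data")

# Runs on the gateway itself, triggered per sensor event.
def handler(event, context):
    prediction = local_model.predict(event["sensor_reading"])
    # Publish locally; Greengrass syncs to the cloud when connected.
    client.publish(topic="mine/predictions", payload=str(prediction))
```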

Voice

Amazon's capability to process voice at scale and in real time is founded upon this multidimensional progress in applied machine learning. The same technology that is driving Amazon's aggressive push to realize voice as the universal interface for its products via Echo/Alexa is available to AWS customers, and the data from millions of Echo customers rapidly improves these services.

At the hardware level, Alexa for Business is a play to run the 'smart workplace': booking rooms, arranging calls and managing calendars by voice, but also programmable with private skills for interacting directly with stock control, ERP or sales tools. The marketplace for third-party Alexa skill development, whether for consumers or businesses, is nascent but could conceivably develop into something like Apple's App Store.
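
A private skill's backend is typically just a Lambda handler speaking the Alexa Skills Kit's JSON; a sketch, where the intent name and the stock answer are hypothetical stand-ins for a real ERP integration:

```python
# Hypothetical intent and response; a minimal private-skill backend.
def handler(event, context):
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "CheckStockIntent"):
        speech = "There are 42 units left in the warehouse."
    else:
        speech = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```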

Lex is a conversational interface (bot) service for voice and text that doesn't require Alexa hardware. Transcribe is telephony-optimized speech-to-text which timestamps every word, enabling text search of audio; it is the inverse of the Polly speech synthesis service released at the previous re:Invent. Coupled with these voice services are new neural machine translation services. Compositions of these primitives suggest themselves unbidden. Ben Thompson pointed out that even if these new language services are not quite as good as Google's benchmark products, "they are certainly good enough to reduce the potential allure of switching". This aggregation of vast numbers of "good enough" primitives, which will rapidly improve as more users feed them more data, is essential to Amazon's potential dominance in many areas.
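
Such a composition is a few lines; a sketch chaining Translate into Polly with boto3 (Transcribe would sit upstream, turning call audio into the input text, but is omitted here for brevity):

```python
import boto3

translate = boto3.client("translate")
polly = boto3.client("polly")

# Translate English text, then speak the result in a French voice.
out = translate.translate_text(Text="Where is the money?",
                               SourceLanguageCode="en",
                               TargetLanguageCode="fr")

speech = polly.synthesize_speech(Text=out["TranslatedText"],
                                 OutputFormat="mp3",
                                 VoiceId="Celine")
with open("reply.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```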

Conclusion

"Developers are alchemists and our job is to do everything we can to get them to do their alchemy", was a Bezos mantra around the launch of AWS fifteen years ago; replacing base metal with bare metal in the philosopher's stone formula, the analogy might be truer now than ever. In machine learning, much more than in traditional software engineering, we are dealing with a pre-theoretical science, where the best model is still the result of tricks, hacks and trial-and-error in an vast high-dimensional search space, this time not of chemicals, alembics and temperatures but of architectures, epochs and hyperparameters. (Indeed, a much-discussed talk at last year's NIPS conference urged the community "Let's take machine learning from alchemy to electricity".) Configuring and maintaining Kubernetes clusters, too, is a occult art. But for all that AWS offers the alchemists, the question remains as to why they should choose AWS over Google, whose TPUs are custom ASICs designed specially for ML, or another competitor, for another specialized use case. Leaving aside the technical comparisons one might make, AWS has the advantage for business-builders by virtue of network effects, as the satellite imagery company DigitalGlobe's CTO says: "It's like Willie Sutton saying he robs banks because that's where the money is...We use AWS for machine learning because that's where our customers are." No single AWS product needs necessarily to be the best in class on launch for the emergent stickiness of the ecosystem to dominate. But what AWS has done is not just for the deep technical practitioner, it has also democratized many leading edge technologies, making AI itself a primitive building block as a service for the non-expert customer.

And if AWS does hegemonize, what will it mean for startups? Amazon has historically been less acquisitive than the other big tech companies, and its secretive Lab126 has been compared to Xerox PARC; yet behind many of the new products I've discussed is an acquisition (Cloud9, Evi, Annapurna). But as I alluded to above, AWS doesn't need to do everything, and its genetic principles are to avoid high-level software. In the rush for the philosopher's stone of AI, it is selling the glassware.