Archive for the Cloud Computing Category

Amazon’s First Annual Ph.D. Symposium: Program

Posted in Cloud Computing on November 21, 2013 by swaminathans

As I had called out earlier, Amazon is conducting its first annual Ph.D. symposium, where we invite Ph.D. students from around the world to come and present their research in the form of talks and posters on three major topics: Cloud Computing, Operations Research, and Data Mining.

We had a great list of submissions from Ph.D. candidates based at top schools in the USA, Canada, and Europe. This made our selection process hard and competitive. Our acceptance rate was in the single-digit percentages, which is extremely low for a research symposium carried out for the first time.

This enabled us to assemble a great program that ran over a single day with three parallel tracks: Cloud Computing/AWS, Machine Learning/Computer Vision, and Operations Research. Credit goes to Jay Little from Amazon (and his team) for putting together such a great program for the symposium.

I wanted to highlight a few talks that were selected in these tracks.

Cloud Computing 

Justin Debrabant, Brown University.

Beyond Main Memory: Anti-Caching in Main Memory Database Systems.

The traditional wisdom for building disk-based relational database management systems (DBMS) is to organize data in heavily-encoded blocks stored on disk, with a main memory block cache. In order to improve performance given high disk latency, these systems use a multi-threaded architecture with dynamic record-level locking that allows multiple transactions to access the database at the same time. Previous research has shown that this results in substantial overhead for on-line transaction processing (OLTP) applications.

The next generation of DBMSs seeks to overcome these limitations with an architecture based on main memory resident data. To overcome the restriction that all data fit in main memory, we propose a new technique, called anti-caching, where cold data is moved to disk in a transactionally-safe manner as the database grows in size. Because data initially resides in memory, an anti-caching architecture reverses the traditional storage hierarchy of disk-based systems. Main memory is now the primary storage device.

We implemented a prototype of our anti-caching proposal in a high-performance, main memory OLTP DBMS and performed a series of experiments across a range of database sizes, workload skews, and read/write mixes. We compared its performance with an open-source, disk-based DBMS optionally fronted by a distributed main memory cache. Our results show that for higher skewed workloads the anti-caching architecture has a performance advantage over either of the other architectures tested of up to 9× for a data size 8× larger than memory.

In addition, this talk will encompass ongoing work on adapting OLTP systems to use non-volatile RAM (NVRAM). With latency profiles in line with current DRAM speeds and persistent writes, NVRAM technologies have the potential to significantly impact current OLTP architectures. We will discuss several possible architectures for NVRAM-based systems, including an extension to the anti-caching architecture originally designed with disk as a backend. Finally, we will present early benchmark results attained using an NVM emulator constructed by our collaborators at Intel.
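The core anti-caching mechanism can be sketched in a few lines: hot tuples stay in memory in LRU order, cold tuples are evicted to a disk-resident store when the memory budget is exceeded, and accessing an evicted tuple transparently pulls it back. This is a toy single-tuple sketch of the idea only, not the prototype's implementation (which evicts blocks of tuples transactionally):

```python
from collections import OrderedDict

class AntiCacheTable:
    """Toy sketch of anti-caching: memory is primary storage,
    and cold tuples overflow to a disk-resident anti-cache."""

    def __init__(self, memory_budget):
        self.memory_budget = memory_budget
        self.hot = OrderedDict()   # in-memory tuples, LRU order (oldest first)
        self.cold = {}             # stand-in for the on-disk anti-cache

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)            # mark as most recently used
        while len(self.hot) > self.memory_budget:
            cold_key, cold_val = self.hot.popitem(last=False)
            self.cold[cold_key] = cold_val   # evict coldest tuple "to disk"

    def get(self, key):
        if key in self.cold:                 # un-evict on access
            self.put(key, self.cold.pop(key))
        self.hot.move_to_end(key)            # refresh recency
        return self.hot[key]
```

Note how this inverts a traditional buffer pool: disk is the overflow for memory, not the other way around.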

Rahul Potharaju, Purdue University

Network failures are a significant contributor to system downtime and service unavailability. To track troubleshooting at various levels (e.g., network, server, software), operators typically deploy trouble ticket systems which log all the steps from opening a ticket (e.g., a customer complaint) until its resolution. Unfortunately, while tickets carry valuable information for network management, analyzing them to do problem inference is extremely difficult: fixed fields are often inaccurate or incomplete, and the free-form text is mostly written in natural language. To analyze this free-form text, we have built NetSieve [NSDI'13], a system that aims to automatically analyze ticket text written in natural language to infer the problem symptoms, troubleshooting activities and resolution actions.

In this talk, I will present the design philosophy behind NetSieve and the lessons we learnt by applying our system on several large datasets. The goal of my talk is to motivate the research community to focus on these unexplored yet important datasets as they contain actionable data to understand and improve system design. For instance, the lessons learnt by applying these techniques to vulnerability databases can be applied towards improving the security of deployed systems, understanding the common pitfalls of writing secure programs etc. Similarly, lessons learnt from customer review repositories (e.g., Amazon) can be applied towards understanding problems faced by a majority of customers and improving future product designs.
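As a rough illustration of the kind of structured inference NetSieve performs over free-form ticket text (the patterns below are invented for this sketch; the real system builds a domain-specific knowledge base with NLP techniques rather than hand-written regexes):

```python
import re

# Toy patterns mapping free-form ticket phrases to structured fields.
# Illustrative only -- not NetSieve's actual knowledge base.
PATTERNS = {
    "symptom":    r"(packet loss|link down|high latency|flapping)",
    "activity":   r"(rebooted|reseated|ran diagnostics|checked logs)",
    "resolution": r"(replaced (?:the )?(?:line card|cable|switch)|rolled back config)",
}

def infer(ticket_text):
    """Return {field: matched phrase} for each pattern found in the ticket."""
    text = ticket_text.lower()
    found = {}
    for field, pattern in PATTERNS.items():
        m = re.search(pattern, text)
        if m:
            found[field] = m.group(0)
    return found
```

Even this naive version hints at why the data is valuable: a pile of unstructured tickets becomes a queryable record of symptoms, activities, and fixes.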

Shrutarshi Basu, Cornell University

Merlin: A Unified Framework for Network Management

Merlin is a new high-level network management framework. With Merlin, administrators express network policy using programs in a declarative language based on logical predicates and regular expressions. The Merlin compiler automatically partitions these programs into components that can be placed on a variety of devices including switches, middleboxes, and end hosts. It uses a constraint solver based on parameterizable heuristics to determine allocations of network-wide resources such as paths and bandwidth. To facilitate management of federated networks, Merlin provides mechanisms for delegating management of sub-policies to tenants, along with tools for verifying that delegated sub-policies do not violate global constraints. Overall, Merlin simplifies the task of network administration by providing high-level abstractions for specifying network policy and scalable infrastructure for enforcing them.
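To make the policy language concrete, here is a toy rendering of the idea (the predicate, device labels, and regex are invented for illustration; Merlin's actual language and compiler are far richer): a logical predicate selects traffic, and a regular expression over device labels constrains the paths that traffic may take.

```python
import re

# Toy Merlin-style policy: HTTP traffic must enter at an ingress
# switch (i), traverse at least one middlebox (m) possibly via core
# switches (s), and leave at an egress switch (e). Labels are made up.
policy = {
    "predicate": lambda pkt: pkt["dst_port"] == 80,
    "path_regex": re.compile(r"^is*ms*e$"),
}

def compliant(pkt, path):
    """Check a packet's path against the policy, if the policy applies."""
    if not policy["predicate"](pkt):
        return True                # policy does not select this traffic
    labels = "".join(path)         # e.g. ['i','s','m','e'] -> "isme"
    return bool(policy["path_regex"].match(labels))
```

A real compiler would turn such a policy into forwarding rules and bandwidth allocations rather than checking paths after the fact; the sketch only shows what the predicate-plus-regex specification expresses.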

Machine Learning/Computer Vision/Natural Language Processing

Bin Zhao, CMU

With the widespread availability of low-cost devices capable of photo shooting and high-volume video recording, we are facing an explosion of both image and video data. The sheer volume of such visual data poses both challenges and opportunities in machine learning and computer vision research.

In image classification, most previous research has focused on small to medium-scale data sets containing objects from dozens of categories. However, we can easily access images spanning thousands of categories. Unfortunately, despite the well-known advantages and recent advancements of multi-class classification techniques in machine learning, complexity concerns have driven most research on such super large-scale data sets back to simple methods such as nearest neighbor search and one-vs-one or one-vs-rest approaches. However, facing an image classification problem with such a huge task space, it is no surprise that these classical algorithms, often favored for their simplicity, will be brought to their knees not only because of the training time and storage cost they incur, but also because of the conceptual awkwardness of such algorithms in massive multi-class paradigms. Therefore, it is our goal to directly address the bigness of image data: not only the large number of training images and high-dimensional image features, but also the large task space. Specifically, we present algorithms capable of efficiently and effectively training classifiers that can differentiate tens of thousands of image classes.

Similar to images, one of the major difficulties in video analysis is also the huge amount of data, in the sense that videos can be hours long or even endless. However, it is often true that only a small portion of a video contains important information. Consequently, algorithms that can automatically detect unusual events within streaming or archival video would significantly improve the efficiency of video analysis and save valuable human attention for only the most salient content. Moreover, given lengthy recorded videos, such as those captured by digital cameras on mobile phones or by surveillance cameras, most users do not have the time or energy to edit the video so that only the most salient and interesting parts are kept. To this end, we also develop an algorithm for automatic video summarization.

Therefore, the goal of our research can be summarized as follows: we aim to design machine learning algorithms to automatically analyze and understand large-scale image and video data. Specifically, we design algorithms to address the bigness in image categorization, not only in the form of a large number of data points and/or high-dimensional features, but also the large task space. We aim to scale our algorithms to image collections with tens of thousands of classes. We also propose an algorithm to address the bigness of video streams, which can be hours long or even endless, and automatically distill such videos to identify interesting events and summarize their content.

Ying Liu, MIT

Gaussian Graphical Models (GGMs) or Gauss Markov random fields are widely used in many applications, and the trade-off between the modeling capacity and the efficiency of learning and inference has been an important research problem. In this paper, we study the family of GGMs with small feedback vertex sets (FVSs), where an FVS is a set of nodes whose removal breaks all the cycles. Exact inference such as computing the marginal distributions and the partition function has complexity $O(k^{2}n)$ using message-passing algorithms, where k is the size of the FVS, and n is the total number of nodes. We propose efficient structure learning algorithms for two cases: 1) All nodes are observed, which is useful in modeling social or flight networks where the FVS nodes often correspond to a small number of high-degree nodes, or hubs, while the rest of the network is modeled by a tree. Regardless of the maximum degree, without knowing the full graph structure, we can exactly compute the maximum likelihood estimate in $O(kn^2+n^2\log n)$ if the FVS is known or in polynomial time if the FVS is unknown but has bounded size. 2) The FVS nodes are latent variables, where structure learning is equivalent to decomposing an inverse covariance matrix (exactly or approximately) into the sum of a tree-structured matrix and a low-rank matrix. By incorporating efficient inference into the learning steps, we can obtain a learning algorithm using alternating low-rank correction with complexity $O(kn^{2}+n^{2}\log n)$ per iteration. We also perform experiments using both synthetic data as well as real data of flight delays to demonstrate the modeling capacity with FVSs of various sizes.
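To unpack the key definition: an FVS is a set of nodes whose removal breaks every cycle, leaving a forest on the remaining nodes. A minimal, purely illustrative check in Python (not the authors' inference or learning algorithms):

```python
def has_cycle(adj):
    """Return True if the undirected graph (adjacency dict) has a cycle."""
    seen = set()

    def dfs(node, parent):
        seen.add(node)
        for nbr in adj[node]:
            if nbr not in seen:
                if dfs(nbr, node):
                    return True
            elif nbr != parent:
                return True        # back edge to a visited node -> cycle
        return False

    return any(node not in seen and dfs(node, None) for node in adj)

def remove_fvs(adj, fvs):
    """Delete the candidate FVS nodes and all their incident edges."""
    return {u: [v for v in nbrs if v not in fvs]
            for u, nbrs in adj.items() if u not in fvs}
```

The point of the paper is that when such a set is small (size k), inference on the remaining tree is cheap, giving the $O(k^{2}n)$ message-passing cost quoted above.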

Hossein Azari Soufiani, Harvard

Rank data can be modeled using Random Utility Theory by assigning a real-valued score to each alternative from a parameterized distribution, and then ranking the alternatives according to scores. Our studies have enabled the estimation of parameters for random utility models (RUMs) with exponential families as noise. We also extend RUM models by defining an RUM mixture model with multiple preference types. We identify conditions that enable fast inference through MC-EM, providing identifiability of the models and approximate log-concavity of the likelihood function.

Operations Research

Aly Megahed, Georgia Tech

Topics in Supply Chain Planning in Deterministic and Stochastic Environments.

The problem of supply chain planning has become vital to the business practices of most manufacturing and service organizations in today's global marketplace. In this talk, we present three unified supply chain planning problems.

For the first problem, motivated by the planning challenges faced by one of the world's largest manufacturers of wind turbines, we present a comprehensive tactical supply chain planning model for wind turbine manufacturing. The model is multi-period, multi-echelon, and multi-commodity. It explicitly incorporates delay-dependent backorder penalties with any general cost structure for the first time. We present numerical results that show the significant impact of this capability. We also show the effect of backorder penalties on different supply chain costs.

We extend the first problem to include uncertainty, and develop a two-stage stochastic programming model for this second problem. The considered supply uncertainty combines random yield and stochastic lead times, and is thus the most general form of such uncertainty to date. Theoretical and computational results regarding the impact of supplier uncertainty/unreliability on procurement decisions are derived.

In our third problem, we extend the first problem by including procurement with time-aggregated quantity discounts, where the discounts are given on aggregated order quantities while the orders are placed on a periodic basis (e.g., total annual orders that were placed on a monthly basis). We develop a Mixed Integer Programming (MIP) model for the problem. The model is computationally very challenging to solve: leading commercial solvers such as CPLEX take hours to generate good feasible solutions for many tested instances. We develop a pre-solve algorithm that reduces the number of binary variables and a MIP-based local search heuristic that quickly generates good feasible solutions. Preliminary results show that our solution methodology is promising.
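The pricing structure at the heart of this third problem is easy to illustrate: the unit price depends on the aggregated annual quantity even though orders are placed monthly. A toy sketch (the tier breakpoints and prices are invented; the actual MIP jointly optimizes the order quantities, which this snippet does not attempt):

```python
# Illustrative discount tiers: (minimum annual quantity, unit price).
# Made up for this example -- not from the talk.
TIERS = [
    (0,    10.0),
    (1000,  9.0),
    (5000,  8.0),
]

def annual_cost(monthly_orders):
    """Cost when the discount tier is chosen by the aggregated annual total."""
    total = sum(monthly_orders)
    # deepest tier whose breakpoint the aggregate quantity reaches
    _, price = max((q, p) for q, p in TIERS if total >= q)
    return total * price
```

The coupling this creates is what makes the MIP hard: the price of January's order is not known until the whole year's quantities are decided.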

Rattachut Tangsucheeva, Penn State

Why Firms Go Bankrupt? A Dynamics Perspective

Managing modern supply chains involves dealing with complex dynamics of materials, information, and cash flows around the globe, and is a key determinant of business success today. One of the well-recognized challenges in supply chain management is the inventory bullwhip effect, in which demand forecast errors are amplified as they propagate upstream from a retailer, in part because of lags and errors in information flows. Adverse effects of the bullwhip effect include excessive inventory and wasteful swings in manufacturing production. In this research we postulate that there can be a corresponding bullwhip in the cash flow across the supply chain, and that it can be one of many reasons why firms run out of cash and become bankrupt. Furthermore, we explore ways to predict its impact by using a cash flow forecasting model, and identify a potential strategy to engineer a solution for controlling working capital by using portfolio optimization theory.
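The classical inventory bullwhip that motivates this work is easy to reproduce in a toy simulation. In the sketch below, each stage orders what it observes plus an overreaction term proportional to the demand change (this model and the factor k are my own illustrative assumptions, not the authors' cash-flow model); order variance then grows at every step upstream:

```python
import random

def simulate_orders(demand, k=0.5):
    """One supply-chain stage: order the observed demand plus an
    overreaction proportional to the demand change (toy model)."""
    orders = [demand[0]]
    for t in range(1, len(demand)):
        orders.append(max(0.0, demand[t] + k * (demand[t] - demand[t - 1])))
    return orders

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(7)
retail_demand = [100 + random.gauss(0, 5) for _ in range(200)]
wholesale = simulate_orders(retail_demand)   # retailer -> wholesaler
factory = simulate_orders(wholesale)         # wholesaler -> factory
```

Since invoices and payments track these orders, it is plausible that the same amplification shows up in cash flows, which is the conjecture the talk explores.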

Sarah Ahmadian, Waterloo

Mobile Facility Location

I will talk about the Mobile Facility Location problem. This problem is motivated by a model where a manufacturer wants to set up distribution centers so that it can send products to these centers (in one shipment) and the products needed at each retail center can be shipped from the closest distribution center.

IEEE Network Magazine: Cloud Computing Special Issue

Posted in Cloud Computing on January 9, 2011 by swaminathans

I’m co-editing a special issue on Cloud computing for IEEE network magazine. For folks interested, take a look at the CFP below. Submission deadline is Jan 15, 2011.

Call For Papers

Final submissions due 15 January 2011

Cloud Computing is a recent trend in information technology and scientific computing that moves computing and data away from desktop and portable PCs into large Data Centers. Cloud computing is based on a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud Computing opens new perspectives in internetworking technologies, raising new issues in the architecture, design and implementation of existing networks and data centers. The relevant research has just recently gained momentum and the space of potential ideas and solutions is still far from being widely explored.

This special issue of the IEEE Network Magazine will feature articles that discuss networking aspects of cloud computing. Specifically, it aims at delivering the state-of-the-art research on current cloud computing networking topics, and at promoting the networking discipline by bringing to the attention of the community novel problems that must be investigated. Areas of interest include, but are not limited to:

  • Data center architecture
    • Server interconnection
    • Routers (switching technology) for data center deployment
    • Centralization of network administration/routing
  • Energy-efficient cloud networking
    • Low energy routing
    • Green data centers
  • Measurement-based network management
    • Traffic engineering
    • Network anomaly detection
    • Usage-based pricing
  • Security issues in clouds
    • Secure routing
    • Security threats and countermeasures
    • Virtual network security
  • Virtual Networking
    • Virtualized network resource management
    • Virtual cloud storage
  • Futuristic topics
    • Interclouds
    • Cloud computing support for mobile users

Guest Editors

Swami Sivasubramanian
Amazon, USA

Dimitrios Katsaros
University of Thessaly, Greece

George Pallis
University of Cyprus, Cyprus

Athena Vakali
Aristotle University of Thessaloniki, Greece

Datacenter Networks and move towards scalable network architectures

Posted in Cloud Computing on November 2, 2010 by swaminathans

Last week, I was super excited to attend James's talk, "Datacenter networks are in my way," in our internal talk series called Principals of Amazon. As always, James's talk was illuminating. I highly encourage everyone to read James's post and the slides.

A few takeaways from James’s talk worth calling out:

– Contrary to popular belief, power is not the leading driver for datacenter operational cost. It is actually the server cost (which is about 57%).

– The above leads to the conclusion that techniques like shutting down servers when they are not being used, while interesting, do not offer a big return on investment. Instead, you are better off doing the exact opposite: utilize your existing server investment to the fullest.

– Traditional DC networks are usually oversubscribed and live in a very vertical world where all network components come from a single vendor and are built more like mainframes, with a "scale up" (get bigger boxes) model instead of a "scale out" model. This is bad for sustainability and reliability.

– To enable higher server utilization, you need your datacenter networks to support full connectivity between hosts and not be oversubscribed.

The above takeaways tell us that we need to build DC networks that can be easily scaled (moving from oversubscribed to undersubscribed). To scale DC networks, we need a scale-out DC network architecture, and systems like OpenFlow enable that. It is interesting to see that what we learnt in distributed systems and datastores applies to datacenter networks as well: scale out (horizontal scaling) is, in the long run, better than scale up (vertical scaling).
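To make "oversubscribed" concrete: a top-of-rack switch is oversubscribed when its hosts can collectively offer more traffic than its uplinks can carry. A quick back-of-the-envelope helper (the port counts and speeds below are illustrative, not numbers from James's talk):

```python
def oversubscription(hosts, host_link_gbps, uplinks, uplink_gbps):
    """Ratio of worst-case traffic the hosts can offer to the bandwidth
    the uplinks provide; 1.0 means a non-oversubscribed (1:1) design."""
    return (hosts * host_link_gbps) / (uplinks * uplink_gbps)
```

Full host-to-host connectivity for high utilization requires keeping this ratio at 1:1 as the network scales, which is exactly what scale-out fabrics aim for.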

Taking on a new role in AWS Database Services

Posted in Cloud Computing, NoSQL on October 12, 2010 by swaminathans

As many of you who know me and my work are aware, I've always been passionate about building large scale distributed systems. I'm glad to have had the opportunity to work in teams that built great systems like Amazon Dynamo, Amazon CloudFront, and Amazon RDS. Moreover, I had the opportunity to learn from some exceptionally smart people like Werner Vogels, Al Vermeulen and James Hamilton.

I am thrilled to see the momentum Dynamo has created in the NoSQL world, and I was excited to talk about it at NoSQL meetups. Similarly, I have been amazed at the rapid adoption of RDS and its Multi-AZ features.

So far, in AWS, I've been working as a Principal Engineer (aka Architect) in charge of the architecture and implementation of these systems. Recently, I took on a new role leading the non-relational database services team in AWS. I'll be leading the SimpleDB team and a couple of other internal teams, and looking at ways to build scalable data access primitives. I'm super excited to join this incredibly smart team and deliver some great systems.

On this note, I'm actively hiring for my teams, see here. If you're interested in joining a team that is building what will be the blueprint for scalable datastores, then please send me a note. I'm looking for highly talented engineers.

Just Launched: Amazon RDS Multi-AZ

Posted in Cloud Computing, Distributed Systems on May 18, 2010 by swaminathans

Today, my colleagues and I launched a major feature in Amazon RDS called Multi-AZ DB Instances. A Multi-AZ DB Instance performs synchronous replication of data across replicas located in different availability zones.

What benefits does Multi-AZ DB Instance provide?

– Higher Durability: Your data is synchronously replicated across different availability zones. This guarantees higher levels of durability even in the wake of disk, instance, or availability zone failures.

– Higher Availability: With Multi-AZ instances, we perform master-slave replication, and when we detect that the master is unavailable, we automatically fail over to the slave replica. This guarantees higher levels of availability. Unlike MySQL's asynchronous replication, the data is synchronously replicated, so when the master fails over to the secondary replica you will experience no data loss.
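The essential difference between the two replication modes can be sketched in a few lines. In this toy model (not the actual RDS implementation), a write is acknowledged only after both replicas have it, which is why a failover loses nothing:

```python
class MultiAZSketch:
    """Toy synchronous master/standby replication: acknowledge a write
    only after BOTH replicas hold it, so failover loses no data."""

    def __init__(self):
        self.primary = {}
        self.standby = {}
        self.primary_alive = True

    def write(self, key, value):
        self.primary[key] = value
        self.standby[key] = value   # synchronous: replicate BEFORE the ack
        return "ack"

    def read(self, key):
        replica = self.primary if self.primary_alive else self.standby
        return replica[key]         # real RDS promotes the standby automatically
```

With asynchronous replication, the ack would come before the standby write, so a crash in between would silently drop acknowledged data.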

You can learn more about all the resilient goodness of RDS Multi-AZ deployments here.

Building Scalable Social Gaming Platform using Clouds

Posted in Cloud Computing, Distributed Systems on February 18, 2010 by swaminathans

This week, I ran into many social gaming related posts. The first thing that surprised me was that social games are not played predominantly by teens: the average social gamer is around 40. This illustrates why these social games attract tens of millions of players every day.

When I read the high scalability article about How Zynga scaled Farmville to Harvest 75 Million Players a month, I was intrigued by their scaling challenges and requirements:

* Read-write ratio: Interactive games are write heavy, unlike traditional web applications. This seems intuitive when you consider that every move is recorded in a datastore.
* Users are disturbed by high latencies and variability in latencies: Note that I called out high latencies and latency variations as two different things. As we noted in the Dynamo paper, Amazon also cares about variability in latencies (percentiles) and builds its services to keep that variability under control.
* Dealing with latency and failure characteristics of external dependencies: These applications need to deal with external platforms like Facebook which may or may not be available all the time.

I like the way Zynga approached this problem:

* For solving heavy writes, it looks like they have partitioned heavily. Seems reasonable; however, I'm curious to see how they handled "hot spots" (wherein the most active users are the ones constantly generating more data) and whether simple hash-based partitioning is good enough to spread the hot spots.
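To see why hash partitioning alone may not be enough: hashing spreads *keys* evenly, but it cannot split the load generated by a single very active key. A toy demonstration (the partition count and workload are made up, and this is my illustration, not Zynga's scheme):

```python
import zlib
from collections import Counter

N_PARTITIONS = 8

def partition(user_id):
    # deterministic hash (unlike Python's built-in str hash) so the
    # example is reproducible across runs
    return zlib.crc32(user_id.encode()) % N_PARTITIONS

# 1000 casual users write once each; one "power user" writes 5000 times.
writes = [f"user{i}" for i in range(1000)] + ["poweruser"] * 5000
load = Counter(partition(u) for u in writes)

hottest = max(load.values())
average = sum(load.values()) / N_PARTITIONS
```

However good the hash, the power user's partition carries all 5000 of their writes, so per-key load skew survives key-level balancing.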

* For handling latency variations, they went for isolating each component and built graceful degradation at each layer. This is a common practice in building large scale systems. What I would be curious to see is how "gracefully" their datastore degrades, and also which datastore they are using for that.

* Finally, to deal with failures of external dependencies and still meet latency SLAs, it looks like they cache the responses of external dependencies.

These are very good lessons for building scalable systems.

As an interesting followup, I saw that RightScale (which is apparently helping Zynga run on top of AWS) is using its expertise to offer a social gaming platform for other aspiring Zyngas out there! It seems like an exciting internet industry at a really nascent stage.

Amazon RDS

Posted in Cloud Computing, Distributed Systems on October 27, 2009 by swaminathans

Today, my team launched Amazon RDS, a new AWS service that offers a managed relational database in the AWS cloud. The service does the heavy lifting traditionally done by DBAs, such as provisioning, monitoring database health, managing automated backups, point-in-time recovery, creating/restoring snapshots, adding storage space on the fly, and changing compute power for your database, all through simple Web service calls. Get started in a few minutes!

A Word about Persistence: One size does not fit all.

The scalability, reliability and performance of a platform are heavily dependent on how it manages its data. At Amazon, we believe that when it comes to persistence, one size does not fit all: pick the right data management platform based on your application's needs. For people who require a highly available and highly durable key-value store, we have Amazon S3. For people requiring raw disk in the cloud, we have Amazon EBS. For people who want to store and query structured data in the cloud and don't want to actively manage its scalability, we have Amazon SimpleDB.

One of the biggest features our customers have been asking for is a managed relational database service in the cloud. The need for a relational database can arise for various reasons: programming familiarity, existing applications already built for relational databases, or the need for complex transactions and joins, which are relational databases' forte. If you fall into this category, you will be happy with Amazon RDS.

For more details on Amazon RDS, take a look at:

Job Openings in Cloud Computing

Posted in Cloud Computing on October 13, 2009 by swaminathans

I'm looking for some really smart people who would like to work on building large scale distributed systems to join my team. We have various positions open, ranging from beginners to senior technical leaders.

My boss, Werner Vogels, summarized the qualifications he expects ideal candidates to meet in a blog post he wrote 5 years ago. My standards are no different :).

I’m looking for candidates who can meet these requirements. If you’re a beginner (college grad or so), then ideally you should be willing to get to that skill level soon. So, if you’re interested in building such large scale systems, send me your CV to my email:

Expand Your Datacenter to Amazon Cloud

Posted in Cloud Computing on August 26, 2009 by swaminathans

So far in AWS, we have provided new services to "expand our cloud offering". Today, with the introduction of Amazon VPC, we allow our customers to expand their datacenter to the cloud by providing a secure and seamless bridge to the AWS cloud.

As Werner mentions in his post, one of the significant challenges for enterprises moving to the cloud is how to integrate applications running in the cloud into their existing management frameworks.

To put it simply, CIOs traditionally had to plan "move to cloud" initiatives as major projects because they could not reuse their existing management software for their EC2 instances. With the introduction of Amazon VPC, that barrier becomes very low.

Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities such as security services, firewalls, and intrusion detection systems to include their AWS resources.

We are really proud to launch Amazon VPC and believe this is a true milestone in the field of Cloud Computing!

From push to pull…

Posted in Cloud Computing on July 14, 2009 by swaminathans

Last week, in an internal talk series, Werner pointed to a paper titled “From push to pull: Emerging models for mobilizing resources” by Hagel and Brown.

That night, I read the paper, and what an intriguing paper it was! For folks who haven't read it, its fundamental crux is as follows: the traditional resource model is changing from a push model (where resource needs are planned in advance and pushed to consumers) to a pull model (where consumers pull resources on demand).

For instance, in the media industry, people are moving away from traditional media sources like television, where content is pushed to the audience. Instead, people are happy to pick and choose what content they want to view and pull it from sources like YouTube. In the paper, the authors point to other examples, such as the university education model: the University of Phoenix is introducing a curriculum model that allows students to decide (i.e., "pull") what subjects they want to study instead of following the traditional "push" model. This has made the university so popular that it is the largest private university in the USA.

You might be wondering why this is related to "large scale systems." I just realized the obvious: the IT industry is going through the same revolution, moving towards the pull model, and cloud computing is one of its key enablers.

In the past, the traditional model for running an IT shop was to plan hardware and software needs ahead of time and make the appropriate buying and planning decisions. Typically, companies had to plan for their resource demands at least a year in advance and "push" the resources.

With cloud computing, IT shops have the option to "pull" resources only when they need them and not worry about a year-ahead planning cycle. Many may already know the story of the NYTimes digital IT shop, which pulled AWS EC2 resources to run its image conversion job (see TimesMachine). Others might not know that Animoto handled its peak demands using the dynamic scaling capabilities of Amazon EC2.

It is interesting how cloud computing has enabled the pull resource model in the IT world.

Anyway, if you get a chance, please read Hagel and Brown’s paper!