Keynotes

LADIS 2011 keynote and invited speakers include:

Eric Brewer (UC Berkeley / Google)
Dahlia Malkhi (Microsoft Research, Silicon Valley)
Hamid Pirahesh (IBM Almaden Research Center)
Amin Vahdat (Google / UC San Diego)

Bios and abstracts

Eric Brewer

Challenging Consistency

Abstract:
As we move more and more into the cloud, designers face fundamental challenges in building highly available services that present consistent state in the presence of faults. Although the CAP theorem provides some limits, there remains huge flexibility in the design of tradeoffs among the core goals of availability, durability, and consistency. We walk through some of these options and aim to refine both the goals and the parameters of the space; in particular, I argue that self-consistent stale data is an important tool. The imperfection of the real world is our friend: since "stuff happens", it is more important to correct problems retroactively than to avoid them completely, which is not possible. This means thinking about recovery more broadly, including auditing and escalation to human support in a useful way.
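The notion of self-consistent stale data is not defined further in the abstract; as a rough, purely illustrative sketch (all names below are hypothetical, not from the talk), the toy store lets readers see only the last published snapshot, so under failures or lag they get an older but internally consistent view rather than a partial one.

    class SnapshotStore:
        """Toy illustration of serving self-consistent stale data (hypothetical design).

        Writes accumulate in a pending area; readers only ever see the last
        published snapshot, which may lag recent writes but is never a
        partial or mixed view.
        """

        def __init__(self):
            self._published = {}   # last consistent snapshot, visible to readers
            self._pending = {}     # newer writes, not yet visible
            self._version = 0

        def write(self, key, value):
            self._pending[key] = value

        def publish(self):
            """Atomically expose all pending writes as the next consistent snapshot."""
            merged = dict(self._published)
            merged.update(self._pending)
            self._published, self._pending = merged, {}
            self._version += 1

        def read(self, key):
            """Possibly stale relative to recent writes, but always self-consistent."""
            return self._published.get(key), self._version

    # Readers keep seeing version 1 until publish() makes the newer write visible.
    store = SnapshotStore()
    store.write("a", 1); store.write("b", 2); store.publish()
    store.write("a", 3)
    print(store.read("a"))   # (1, 1): stale but consistent
    store.publish()
    print(store.read("a"))   # (3, 2)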

Bio:
Professor of Computer Science, UC Berkeley; VP, Infrastructure, Google

Dr. Brewer focuses on all aspects of Internet-based systems, including technology, strategy, and government. As a researcher, he has led projects on scalable servers, search engines, network infrastructure, sensor networks, and security. His current focus is (high) technology for developing regions and cloud computing. He is currently on leave from Berkeley working on the next generation of infrastructure at Google. In 1996, he co-founded Inktomi Corporation with a Berkeley grad student based on their research prototype, and helped lead it onto the NASDAQ 100 before it was bought by Yahoo! in March 2003. In 2000, he founded the Federal Search Foundation, a 501(c)(3) organization focused on improving consumer access to government information. Working with President Clinton, Dr. Brewer helped to create USA.gov, the official portal of the Federal government, which launched in September 2000.
He was recently elected to the National Academy of Engineering for leading the development of scalable servers (early cloud computing) and received the 2009 ACM-Infosys Foundation Award.

Dahlia Malkhi

CORFU: Transactional Storage at the Speed of Flash. Mahesh Balakrishnan, Dahlia Malkhi, Vijayan Prabhakaran, Ted Wobber, Microsoft Research, Silicon Valley.

Abstract:
CORFU exposes a cluster of flash devices to applications as a single, shared log. Applications can append data to this log or read from the middle. Internally, this shared log is implemented as a distributed log spread over the flash cluster. There are two reasons why this design makes sense:

1) From a bottom-up perspective, flash requires log-structured writes to ensure even and minimal wear-out as well as high throughput. By implementing a distributed log, we eliminate the need for low-level logging on each flash device. This means we can operate over very dumb flash chips directly attached to the network, resulting in massive savings in power and infrastructure cost (basically, we don’t need storage servers any more).

2) From a top-down perspective, a really fast flash-based shared log is great for applications that need strong consistency, such as databases, transactional key-value stores and metadata services. We can run a database at speeds that saturate the raw flash. For some types of strongly consistent operations (like atomic updates), we are able to run at a few hundred thousand operations per second.

We have a full implementation of CORFU over a cluster of nodes with SSDs. Work is underway to replace the standard compute nodes with custom-built network-attached flash units. Diverse applications are currently being built on top of CORFU, from a fully replicated in-memory RDBMS service to a reliable, high-throughput virtual block device.
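The abstract describes the client-facing interface only in outline (append at the tail, read from the middle); the sketch below is not the actual CORFU API but a hypothetical, in-memory stand-in showing how an application such as a key-value store could obtain atomic updates by funneling them through the log's total order.

    import json

    class SharedLog:
        """Hypothetical, in-memory stand-in for a shared log; a real system would
        stripe positions across networked flash units rather than a Python list."""

        def __init__(self):
            self._entries = []

        def append(self, payload):
            """Append at the tail; returns the position assigned to the entry."""
            self._entries.append(payload)
            return len(self._entries) - 1

        def read(self, position):
            """Read any previously written position ('from the middle')."""
            return self._entries[position]

        def tail(self):
            """Positions [0, tail) have been written and can be read."""
            return len(self._entries)

    class KeyValueStore:
        """Strongly consistent map obtained by replaying the log's total order."""

        def __init__(self, log):
            self._log = log
            self._state = {}
            self._applied = 0

        def put(self, key, value):
            # An atomic update is simply an appended log entry.
            self._log.append(json.dumps({"key": key, "value": value}))

        def get(self, key):
            # Catch up on entries appended since the last read, then answer.
            while self._applied < self._log.tail():
                update = json.loads(self._log.read(self._applied))
                self._state[update["key"]] = update["value"]
                self._applied += 1
            return self._state.get(key)

    log = SharedLog()
    kv = KeyValueStore(log)
    kv.put("x", 42)
    print(kv.get("x"))   # 42, reconstructed entirely from the log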

Bio:
Dahlia Malkhi is presently a principal researcher at Microsoft Research, Silicon Valley. Prior to joining Microsoft Research, Dr. Malkhi was a tenured associate professor at the Hebrew University of Jerusalem (1999-2007) and a senior researcher at AT&T Bell Labs (1995-1999). Dr. Malkhi has worked on algorithmic aspects of distributed computing and reliability since the early nineties. She is a recipient of two IBM Faculty Awards and a winner of the German-Israeli Foundation (G.I.F.) Young Scientist Award. In the past, she chaired ACM PODC 2006 and DISC 2002 and co-chaired Locality 05 and Locality 07, and she has served as an associate editor of the Distributed Computing Journal since 2002.

Hamid Pirahesh

Big Data Platforms Shaping the Next Generation of Big Analytics in Enterprises

Abstract:
The emergence of extreme-scale map-reduce architectures is reshaping analytics far beyond classic warehousing in enterprises. We cover the complete stack, including server clusters, storage, middleware, and analytics applications. Information technology is going through a fundamental change, influenced primarily by (1) flexible provisioning and scalability, (2) the rise of analytics over semi-structured and unstructured data in the context of semantically rich data objects in mainstream data processing, (3) the shake-up of e-commerce services due to smart mobile devices and location-based services, and (4) the recent wave of open communities, such as Hadoop and R. We will discuss the emergence of new data models, parallel query languages (e.g., JAQL, Pig, Hive), programming languages and frameworks such as Scala and Spark (mainly at the research stage), and parallel mining and machine learning in these highly scalable parallel systems. We will show how these new architectures complement the powerful capabilities of traditional data warehouses. At the hardware level, we will look at the integration of parallel processing in multi-node, multi-core systems with hardware accelerators such as FPGAs and GPUs. This talk will focus on research and the role it plays in significantly reshaping the huge analytics industry. Commercial adoption of this emerging platform requires significant work in several important areas, including (1) tools and languages for users, particularly the introduction of novel visualization technologies, (2) industrial-strength high availability, (3) security, (4) economical high scale, and (5) integration of a wide variety of data sources. Users are particularly interested in using this technology for business intelligence services and integration services on big data.
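The abstract surveys the platform rather than any single API; as a minimal, self-contained illustration of the map-reduce style of analytics it refers to, the toy driver below runs a word count with explicit map, shuffle, and reduce phases (a real deployment would express this in Hadoop, Pig, Hive, JAQL, or Spark rather than plain Python).

    from collections import defaultdict

    def map_phase(record):
        """Mapper: emit (word, 1) for every word in one input record."""
        for word in record.lower().split():
            yield word, 1

    def shuffle(mapped_pairs):
        """Group intermediate pairs by key, as the framework would between phases."""
        groups = defaultdict(list)
        for key, value in mapped_pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        """Reducer: combine all counts for one word."""
        return key, sum(values)

    def word_count(records):
        mapped = [pair for record in records for pair in map_phase(record)]
        return dict(reduce_phase(k, vs) for k, vs in shuffle(mapped).items())

    if __name__ == "__main__":
        docs = ["big data platforms", "big analytics on big data"]
        print(word_count(docs))
        # {'big': 3, 'data': 2, 'platforms': 1, 'analytics': 1, 'on': 1}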

Bio:
Hamid Pirahesh, Ph.D., is an IBM Fellow, an ACM Fellow, and a senior manager responsible for the exploratory database department at IBM Almaden Research Center in San Jose, California. He also has direct responsibilities in various aspects of IBM information management products. He has served as an associate editor of ACM Computing Surveys and on the program committees of major computer conferences. Hamid is a member of the IBM Academy. His current focus is analytics at massive scale on highly scalable servers. Hamid was a principal member of the original team that designed the query processing architecture of the IBM DB2 UDB relational DBMS and delivered the product to the marketplace. He has made major contributions to query language industry standards. His research areas include cloud computing, OLAP and aggregate data management, query optimization, data warehousing, and management of semi-structured and unstructured data, including NoSQL databases. He also serves as a consultant to various IBM divisions, including the software division and IBM Global Services.

Amin Vahdat

Bringing Scale-Out Growth of Capacity to Data Center Networks

Abstract:
Scale-out architectures supporting flexible, incremental growth in capacity are common for computing and storage. However, the network remains the last bastion of the traditional scale-up approach, where increasing performance requires increasing levels of specialization at tremendous cost and complexity. Today, the network is often the weak link in data center application performance and reliability. In this talk, we summarize our work in bringing scale-out growth of capacity to data center networks. With a focus on the UCSD Triton architecture, we explore issues in managing the network as a single plug-and-play virtualizable fabric scalable to hundreds of thousands of ports and petabits per second of aggregate bandwidth.
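The abstract quotes "hundreds of thousands of ports" without spelling out the topology; assuming a three-tier folded-Clos (fat-tree) fabric of the kind the UCSD group's scale-out designs build on, the back-of-the-envelope sketch below shows how host-port count grows with the radix k of the commodity switches used (k^3/4 host ports from identical k-port switches).

    def fat_tree_capacity(k):
        """Capacity of a 3-tier fat-tree built from identical k-port switches.

        Standard fat-tree arithmetic (k must be even):
          - k pods, each with k/2 edge and k/2 aggregation switches
          - (k/2)^2 core switches
          - each edge switch serves k/2 hosts -> k^3/4 host ports in total
        """
        if k % 2:
            raise ValueError("switch radix k must be even")
        hosts = k ** 3 // 4
        switches = 5 * k ** 2 // 4   # k*k pod switches + (k/2)^2 core switches
        return {"radix": k, "host_ports": hosts, "switches": switches}

    # With commodity 96-port switches, a single fabric already reaches
    # a couple of hundred thousand non-blocking host ports.
    for k in (48, 64, 96):
        print(fat_tree_capacity(k))
    # {'radix': 48, 'host_ports': 27648, 'switches': 2880}
    # {'radix': 64, 'host_ports': 65536, 'switches': 5120}
    # {'radix': 96, 'host_ports': 221184, 'switches': 11520}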

Bio:
Amin Vahdat is currently a Principal Engineer at Google working on data center architecture. He is on leave from a faculty position in the Department of Computer Science and Engineering at the University of California San Diego, where he holds the Science Applications International Corporation Chair. Vahdat's research focuses broadly on computer systems, including distributed systems, networks, virtualization, and operating systems. He received his PhD in Computer Science from UC Berkeley and is a past recipient of the NSF CAREER Award, the Alfred P. Sloan Fellowship, and the Duke University David and Janet Vaughn Teaching Award.

Important Dates

Submissions Due: June 17, 2011
Notification of Acceptance: July 15, 2011
Camera Ready: July 29, 2011
LADIS 2011: September 2-3, 2011