
Why Go to Graduate School and How to Get into the Program of your Dreams

This is the time of year when applications to graduate schools are due and I see a lot of both misinformation and lack of information among applicants.  I thought it might be valuable to put together some advice on the application process from “the other side,” someone who spends a lot of time looking at the applications and helping to decide who is admitted.  My experience is with applications to a highly research-oriented  MS and PhD program in Computer Science and Engineering at UC San Diego.  However, in speaking with my colleagues over the years, I believe that the thoughts below generalize to a variety of top CS research programs and, to some extent, to science and engineering graduate programs as a whole.

For myself, I did not know most of the below when I was applying to graduate programs.  All I knew was that I wanted to be a professor and that I needed a PhD.  Sometimes, that is enough.

I will edit this document as I get additional questions and feedback, so feel free to post your thoughts and comments.

Q: Why should I go to graduate school?

There are a number of good reasons to go to graduate school, though of course it is not for everyone.

  • You love Computer Science and are passionate about learning more about it.  Four years was just not enough to cover everything you wanted to learn.  More advanced classwork on topics you saw as an undergraduate will often make the material “click”.  The opportunity to perform research will give you a new perspective on how to approach problem solving and a new skill set that will be broadly applicable to many different work settings.
  • You really want to be a college professor. You look around at your own professors and think they have the greatest job imaginable. There are of course exceptions that prove the rule, but in general you need a PhD to become a professor.
  • You want to perform research.  You like working on open-ended problems with the opportunity to both advance the state of scientific understanding and the chance to perhaps influence the way people and companies do things in the future. There are a number of great industrial research labs that hire PhDs to perform exactly this kind of work.
  • You have an entrepreneurial bent and want to start a company. This reason is probably a bit controversial since you may be better off just going to work and learning about important problems facing industry. However, a higher-risk though perhaps higher-reward approach is to go to graduate school to learn about cutting-edge ideas with an eye toward applying them to the marketplace. This applicant is fairly rare as it requires both a strong entrepreneurial spirit and the ability to perform leading research (which often times does not have any immediate commercial application).
  • You want to get a better job, with more interesting responsibilities and a higher salary, than what you might be able to get with a Bachelor’s degree.  Depending on the job market and your own qualifications, this could be a great reason to go to graduate school, but very likely an MS rather than a PhD program. A typical 2-year MS program at a good school is likely to put you in a position for better jobs with higher starting salaries.  However, a PhD is likely the wrong way to go because by the time you account for all the years required to complete your PhD, you would have been better off starting in industry, gaining experience, gaining promotions, and perhaps moving on to your second or third job. For better or for worse, people rarely stay at companies for very long these days.  In Silicon Valley, the median amount of time at a company seems to be 18 months. You may be able to get significantly further ahead by just working and gaining experience and contacts rather than going for a PhD.

Q: What does the admissions committee look for in a successful applicant?

The ideal graduate student will have the following characteristics:

  • Research experience. Nothing prepares a student for graduate work like actually focusing on the most important aspect of the graduate school learning process, performing original research. However, such experience is relatively rare for undergraduates, paradoxically especially so at major research universities. The most important aspect of research experience is typically not the actual work you do but the opportunity for you to get to know a professor relatively well. And this leads to the next point.
  • Letters of recommendation. Having strong letters of recommendation is critical, and in many cases it is something you can control more readily than research experience.  It also helps to have letters from writers whom members of the admissions committee know.  So if you are interested in doing research in operating systems, try to take that course early, do well in it, and get to know the professor.  Chances are decent that some member of the admissions committee at one of the schools you are applying to will know the operating systems professor at your university.
  • Important personal characteristics. There are a number of qualities that are more important predictors of success in graduate school (and beyond) than generic intelligence. These qualities include creativity, focus, leadership, independence, diligence, passion, and integrity. Unfortunately, it is possible to attend a great school, earn terrific grades, and even publish some papers without having these critical qualities.  This is why appropriately detailed letters of recommendation are so important.  If they can attest to some of these difficult-to-quantify characteristics, then the applicant will definitely have a leg up.
  • Rigorous undergraduate program. Attending a strong undergraduate program ensures that you have some baseline mastery of important computer science topics and techniques. Essentially, the admissions committee is looking for applicants that are as “research ready” as possible. If you do not have to spend time to learn the basics, then you can get started with successful research more quickly.
  • Strong GPA/GRE scores.  The definition of a good GPA is calibrated by the quality of the school and by historical norms for “grade inflation” at a particular institution.  Since we see many applications from a subset of schools every year, the admissions committee often has an internal database of norms to compare against. GRE scores are a bit more difficult to evaluate, especially since it is possible to essentially memorize one’s way to strong GRE scores.
  • Work experience.  Contrary to popular opinion, a few years of industrial experience can be a huge plus for an applicant. Practical experience in leading industrial positions can expose students to important problems and often leads to students who have stronger implementation skills coming into the program.  In addition, an applicant who spends time in industry and makes the conscious decision to come back to graduate school (giving up regular hours, a higher salary, etc.), typically shows a high level of dedication to graduate study.  They know it is what they want, rather than “it seemed like the next thing to do.”
  • Personal statement. You can consider this to be a writing sample that also gives some insight into your personality and maturity.  This is your chance to describe some of the work that you have done and why you found it interesting and important.  If you already have an idea of what research you would like to pursue and why, this would be a great place to discuss it.  If you have spent the time to get to know the research of one or more professors in the department you are applying to, it would definitely help to include a personalized paragraph in the personal statement.  Many applicants use the personal statement as an opportunity to wax eloquent on the beauty of basic research and how they were set on the path to fundamentally change scientific understanding at an early age.  Some faculty (e.g., me) have a soft spot for such idealism.  But most are turned off by it, so on balance it is best to avoid such discussion unless you have something really distinctive or substantial to say (the wonder in your eye when you first laid eyes on a computer does not count).

On the PhD side, applicant screening is difficult because the characteristics of a good PhD student are different from the characteristics of a great undergraduate student.  Doing well in undergraduate courses requires being able to apply a relatively small set of concepts in a particular course to a relatively focused problem domain.  Individual problems may take hours to solve and, in rare cases, may require more focused work for days or weeks. Performing well in research requires applying ideas from a large set of domains to a problem that is likely poorly defined and almost certainly has no fixed answer.  Still, the admissions committee does consider a student’s grades as reflective of raw intellect and baseline knowledge of important computer science skills.

GRE scores are similarly an indication of at least some baseline mathematical and writing ability.  Overall, GRE scores tend to provide the least differentiation among applicants. I cannot think of a single instance where a student was selected over another student based on GRE scores.  Still, it is something that the admissions committee does at least look at.  Since the GRE tests only the most basic mathematics and since Computer Science typically requires strong mathematical and analytical abilities, most admissions committee members look for near-perfect GRE math scores.  Some admissions committee members largely dismiss the GRE math score as only an indication of an applicant’s ability to perform simple mathematics quickly.  I know at least a few admissions committee members who put significant weight on the GRE Verbal score.  Communicating research ideas, both through oral presentations and written research papers, is critically important, and since this skill is relatively under-developed in many graduate students, it is something that we look for.

Q: What can I do to prepare for graduate school applications?

The key is to be organized and to plan ahead (two skills not necessarily required for success in undergraduate programs but that will prove to be critical for success in graduate school!).  Many programs now offer online admissions applications (we certainly do at UCSD here).  Still, you have to arrange for all of your letter writers to send their letters to the various programs you are applying to.  Many schools offer letter services for their undergraduates where they can ask their writers to place a letter in a file for them.  The applicant can then simply request that copies of the letter be sent to individual programs. You have to ensure that your GRE scores are similarly delivered.

As indicated above, having strong letters of support is one of the most important parts of an application.  And this is simply not something that you can start preparing for in November before December applications are due in the same year.  Ideally, this is a process that spans multiple years by cultivating a relationship with faculty members in your department.  Summer internships at companies are also a good opportunity for securing letters. Becoming involved in a research internship at a remote institution for the summer is another terrific opportunity.  A number of programs such as NSF’s REU (Research Experiences for Undergraduates) recruit for such positions at universities across the country.  This is something to apply for in your sophomore or junior year (or earlier!).

Of course, another option is to work on research with faculty in your own institution.  If you have done well in a professor’s class, they are very likely to be happy to work with you.  Doing research during the academic year is challenging because of all of the short-term demands on your time (a preview for your first few years of graduate school!).  So again you have to be organized.  A great way to get momentum for research is to start over the summer. Some professors offer paid internships for undergraduates over the summer.  Other times, however, such funding is not available.  My advice would be that if you have an opportunity to perform research with a great faculty member/set of students over the summer and you are very interested in learning about research/graduate school, then volunteering for an unpaid internship is a great investment in your future.

Q: Should I apply for MS or PhD programs?

There are multiple tradeoffs here.  I will summarize at a high level.

PhD:

+ Largely a prerequisite if you want to teach at the college/university level or focus on basic research in industry (there are exceptions that prove the rule).
+ Typically, admission comes with a guarantee of funding.

MS:

+ Significantly easier to be admitted into an MS program.  From the department’s perspective, the risk is lower because typically there is no offer of financial support and the commitment is for two years rather than five to six years.  Someone with a strong undergraduate record is also fairly likely to perform well in an MS program, even if they may not turn out to be an excellent researcher.
+ Relatively short time commitment (18-24 months) with significantly improved job prospects relative to a Bachelor’s degree.
+ If your record is relatively borderline for admission into top PhD programs, you can use the MS as a proving ground to significantly improve your chances for PhD admission later.

While the MS option typically does not guarantee funding, some to many MS students (certainly at UCSD) still obtain funding through TAships, RAships, or summer internships (currently a 3 month summer internship in the US often pays in the $18-20k total range).  Still, you should only go into an unfunded MS position if you have the means to fund it (through loans or otherwise) in the worst case.  Looking at it another way, many graduate students in law and medicine in the US go into debt (certainly more than the cost of an 18-24 month MS program) as an investment against their future earning power.  It may be worth considering the tradeoffs here as well if you are very excited about pursuing graduate work in computer science.

Q: Does it help to send email to a professor asking for an evaluation?

In general, sending a generic form letter to hundreds of professors is unlikely to help at all.  If you do send such a letter, make sure that you proofread it and that you get the professor’s name and area of research correct.  A poorly written note or one that cites a different professor’s papers can leave a bad impression.  However, if you have something intelligent to say about a professor’s research, beyond a simple “I found your paper on X to be very interesting and in line with my own interests,” then it could be worthwhile.  And, of course, if you have an exceptionally strong record where you might be a clear admit, then it could be worthwhile to get yourself on a particular faculty member’s radar.

But note that the bar for “clear admit” is quite high.  At UC San Diego, we get many strong applications where the line between accept and reject is very fine and impossible to predict ahead of time.  Clear admit says essentially: independent of available funding, current research focus, the strength of the rest of the pool, etc., this student will be admitted in any given year.  At most top 25 departments, this means at least 3 of 4 of the following: top 1% recommendation letters from well-known letter writers, top undergraduate institution, very high GPA/GRE scores, and research experience preferably with published papers in top venues. Out of 1000 applicants, we might only have 40-50 that fall into this clear admit category in any given year.

Q: I have been admitted to a number of programs.  What should I look for in a school?

The biggest mistake I see students make, especially foreign applicants, is to order their admits based on US News and World Report rankings and select the school with the highest ranking.  Your goal is to maximize your long-term success, and that means maximizing your prospects once you complete your degree.  I will focus here on the PhD side, but similar considerations apply for the MS degree.

In maximizing your experience in graduate school, in general you want to maximize the quality of the research that you perform, and the single most important factor here is your research adviser and the other graduate students you work with on a day-to-day basis.  So, in considering a school, the first thing to look at is the set of faculty members that you might be interested in working with.  If you are not sure what you might like to do, you should make sure that the various areas that you are interested in are well represented in a particular department.  If you are interested in working in a particular area, is there more than one faculty member working in that space?  You might love the work of a particular professor, but it might be the case that the professor is not taking on students or is on leave in a given year. More subtly, your personalities may not mesh well, or the advising style of a particular professor may not work well for you.  Some high-level distinctions include students who like significant freedom versus professors who have a relatively narrow set of topics that they want their students to work on.  The reverse can also be problematic: some professors are very hands-off while a particular student may need relatively close interaction (at least initially).

The best way to determine whether you would enjoy working with a faculty member is to attend the school’s visit day.  This will give you not only the opportunity to meet the professor but also the chance to speak with the professor’s other students to get a good feel for what it would be like to work with that faculty member.  Of course, the difficulty of attending visit days is one of the challenges foreign students face in accurately evaluating all of the alternatives on a list.  In this case, students should still be proactive in setting up telephone conversations with both faculty and students at the institution.  At the very least, you should verify that some of the faculty you are interested in working with have the capacity (in terms of both time and money) to take on additional students.

Circling back to the topic of rankings, if a higher ranked institution does not have any professors working in areas you are interested in or if your style of working does not mesh well with the available faculty, then it is less likely that you will be able to perform high quality research.  And, of course, this will in turn impact your chances of getting your dream job upon graduation.

Clearly, rankings do play some role in your subsequent success, and it would be naive to think they do not matter at all.  If you are able to do work of equivalent quality at two institutions and one is substantially more prestigious than the other, then choosing the higher-ranked one makes sense.  But the quality of your work trumps all other considerations in my opinion.  Certainly, when we evaluate faculty applicants for our own department, the quality and impact of the research performed by an applicant is by far the number one criterion.  Probably the second most important criterion is the leadership skills and vision of the applicant.  School ranking is never explicitly considered.

Since you will not be spending 100% of your time doing research, and since your personal happiness goes a long way in determining your overall work productivity, other considerations are also important.  Essentially, are there factors about the location of a school that would impact the things you like to do in your free time (e.g., spending time with friends or family, going to the theatre, snowboarding, museums, outdoor sports, etc.)?

Q: One school is offering me a better financial aid package than another.  Can I use this to negotiate?

You can try, but in most cases, schools offer the best financial packages they can to an applicant.  If the difference is between no funding at one school and full support at another, then it is worth inquiring about available funding.  However, if the difference is a few thousand dollars in the form of a special fellowship at one school relative to another, I would consider the difference to be in the noise relative to all the other things that go into determining your long-term success.  Once again, if everything else is equal, then choosing the school with a slightly better financial package makes sense.  But in virtually all cases, other considerations will be more important than the total amount of support.

Another question to consider is the length of guaranteed support in an offer letter.  Some schools promise support to PhD applicants for five+ years, while others may only promise support for one, two, or three years.  You should not place too much stock in the various differences here.  The fact is that, currently, virtually all PhD students in top tier departments receive one form of support or another as long as they are making good progress toward their dissertation. Available support of course varies from school to school and from research area to research area, but it is the clear exception where a PhD student making good progress has no funding options.

And guaranteeing funding has legal implications at some schools that make it difficult to provide such guarantees.  For example, if a professor wishes to recruit 2 new graduate students in a given year and the historical accept rate for admissions offers is 40%, then the professor may wish to admit 5 students total.  However, a particular university might require the professor to demonstrate funding for all 5 students for all 5 years, or 25 years of total graduate student support.  This requirement comes despite the fact that the faculty member only expects 2 of the students to accept and hence really only needs 10 years of total support.  (If there is a “success disaster” where 3 or 4 students accept, presumably that same professor would not recruit in subsequent years to absorb the bubble.)  So overall, depending on campus requirements, it may not even be possible for a faculty member to guarantee support since there may be legal contractual obligations associated with the guarantee.

In general, the best way to determine what the real funding situation is like at a school or a particular group is to ask other students.  If senior students have all had full RAships and full summer support for the past five years, then you can typically use the past as a good predictor for the future, independent of the specifics of the offer letter.

Presentation Summary “High Performance at Massive Scale: Lessons Learned at Facebook”

Recently, we were fortunate to host Jeff Rothschild, the Vice President of Technology at Facebook, for a visit for the CNS lecture series.  Jeff’s talk, “High Performance at Massive Scale: Lessons Learned at Facebook,” was highly detailed, providing real insights into the Facebook architecture. Jeff spoke to a packed house of faculty, staff, and students interested in the technology and research challenges associated with running an Internet service at scale.  The talk is archived here as part of the CNS lecture series.  I encourage you to check it out; below are my notes on the presentation.
Site Statistics:
  • Facebook is the #2 property on the Internet as measured by the time users spend on the site.
  • Over 200 billion monthly page views.
  • >3.9 trillion feed actions processed per day.
  • Over 15,000 websites use Facebook content
  • In 2004, the curve plotting user population as a function of time showed exponential growth to 2M users.  Five years later, they have stayed on the same exponential curve, now with >300M users.
  • Facebook is a global site, with 70% of users outside of the US.
  • Today, there are 1.3B people in the world who have quality Internet connectivity, so there is at least another factor of 4 growth that Facebook is going after. Jeff presented statistics for the number of users each engineer supports at a variety of high-profile Internet companies: 1.1M for Facebook, 190,000 for Google, 94,000 for Amazon, and 75,000 for Microsoft.
Photo sharing on Facebook:
  • Facebook stores 20 billion photos in 4 resolutions
  • 2-3 billion new photos uploaded every month
  • Originally provisioned photo storage for 6 months, but blew through available storage in 1.5 weeks.
  • Facebook serves 600k photos/second; serving them is more difficult than storing them.
Scaling photos, first the easy way:
  • Upload tier: handles uploads, scales the images, stores them on the NFS tier
  • Serving tier: Images are served from NFS via HTTP
  • NFS Storage tier built from commercial products
  • Filesystems aren’t really good at supporting large numbers of files
Scaling photos, 2nd generation:
  • Cachr: cache the high volume smaller images to offload the main storage systems.
  • Only 300M images in 3 resolutions
  • Distribute these through a CDN to reduce network latency.
  • Cache them in memory.
Scaling photos, 3rd Generation System: Haystack
  • How many I/Os do you need to serve an image?  Originally, 10 I/Os at Facebook because of the complex directory structure.
  • Optimizations got it down to 2-4 I/Os per file served
  • Facebook built a better version called Haystack by merging multiple files into a single large file. In the common case, serving a photo now requires 1 I/O operation.  Haystack is available as open source.
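As a rough sketch of the Haystack idea (my own toy illustration, not Facebook’s actual implementation; the file name and byte layout are made up): append every photo to one large file and keep an in-memory index mapping photo ID to offset and length, so that serving a photo needs a single seek and read.

```python
import os

class MiniHaystack:
    """Toy illustration of the Haystack idea: store many photos inside one
    large append-only file, with an in-memory index so that a read is a
    single seek plus a single read (one I/O in the common case)."""

    def __init__(self, path):
        self.index = {}                 # photo_id -> (offset, length)
        self.f = open(path, "a+b")      # append-only data file

    def put(self, photo_id, data):
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        self.f.write(data)
        self.f.flush()
        self.index[photo_id] = (offset, len(data))

    def get(self, photo_id):
        offset, length = self.index[photo_id]
        self.f.seek(offset)             # one seek...
        return self.f.read(length)      # ...and one read

store = MiniHaystack("photos.dat")
store.put("p1", b"...jpeg bytes...")
assert store.get("p1") == b"...jpeg bytes..."
```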
Facebook architecture consists of:
  • Load balancers at the front end distribute requests to Web Servers, which retrieve the actual content from a large memcached layer because of the latency requirements for individual requests.
  • Presentation Layer employs PHP
  • Simple to learn: small set of expressions and statements
  • Simple to write: loose typing and universal “array”
  • Simple to read
But this comes at a cost:
  • High CPU and memory consumption.
  • C++ Interoperability Challenging.
  • PHP does not encourage good programming in the large (at 3M lines of code it is a significant organizational challenge).
  • Initialization cost of each page scales with size of code base
Thus Facebook engineers undertook implementing optimizations to PHP:
  • Lazy loading
  • Cache priming
  • More efficient locking semantics for variable cache
  • Memcache client extension
  • Asynchronous event-handling
Back-end services that require the extra performance are implemented in C++. Services Philosophy:
  • Create a service iff required.
  • Real overhead for deployment, maintenance, separate code base.
  • Another failure point.
  • Create a common framework and toolset that will allow for easier creation of services: Thrift (open source).
A number of things break at scale, one example: syslog
  • Became impossible to push large amounts of data through the logging infrastructure.
  • Implemented Scribe for logging.
  • Today, Scribe processes 25TB of messages/day.
Site Architecture
Overall, Facebook currently runs approximately 30k servers, with the bulk of them acting as web servers.
The Facebook Web Server, running PHP, is responsible for retrieving all of the data required to compose the web page.  The data itself is stored authoritatively in a large cluster of MySQL servers.  However, to hit performance targets, most of the data is also stored in memory across an array of memcached servers. For traditional websites, each user interacts with his or her own data.  And for most web sites, only 1-2% of registered users concurrently access the site at any given time.  Thus, the site only needs to cache 1-2% of all data in RAM.  However, data at Facebook is deeply interconnected; each user is interested in the state of hundreds of other users.  Hence, even with only 1-2% of the user population at any given time, virtually all data must still be available in RAM.
Memcache
Data partitioning was easy when Facebook was a college web site: simply partition data at the level of individual colleges.  After considering a variety of data clustering algorithms, the engineers found that there was very little win for the additional complexity of clustering.  So at Facebook, user data is randomly partitioned across individual databases and machines across the cluster.  Hence, each user access requires retrieving data corresponding to user state spread across hundreds of machines.  Intra-cluster network performance is hence critical to site performance. Facebook employs memcache to store the vast majority of user data in memory, spread across thousands of machines in the cluster.  In essence, the nodes maintain a distributed hash table to determine the machine responsible for a particular user’s data.  Hot data from MySQL is stored in the cache.  The cache supports get/set/incr/decr and multiget/multiset operations.
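To make the partitioning concrete, here is a minimal sketch (my own illustration with hypothetical hostnames and a simple modulo hash, not Facebook’s client code) of how a client can hash each key to the memcache server that owns it and batch the per-server lookups into multigets:

```python
import hashlib
from collections import defaultdict

SERVERS = [f"mc{i:03d}.example.com" for i in range(1000)]   # hypothetical server names

def server_for(key):
    """Hash a key to the memcache server responsible for it (toy modulo scheme)."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SERVERS[h % len(SERVERS)]

def plan_multigets(keys):
    """Group keys by owning server so that each server receives one multiget."""
    batches = defaultdict(list)
    for key in keys:
        batches[server_for(key)].append(key)
    return batches

# Composing one page can touch state for hundreds of other users:
friend_keys = [f"user:{uid}:profile" for uid in range(500)]
batches = plan_multigets(friend_keys)   # issue one multiget(batch) RPC per server
```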
Initially, the architecture needed to support 15-20k requests/sec/machine, but that number has scaled to approximately 250k requests/sec/machine today.  Servers have gotten faster, which helps to some extent, but Facebook engineers also had to perform some fundamental re-engineering of memcached to improve its performance.  System performance improved from 50k requests/sec/machine to 150k to 200k to 250k by adding multithreading, polling device drivers, stats locking, and batched packet handling, respectively. In aggregate, Memcache at Facebook processes 120M requests/sec.
Incast
One networking challenge with memcached was so-called Network Incast. A front-end web server would collect responses from hundreds of memcache machines in parallel to compose an individual HTTP response, and all of those responses would come back within the same window of approximately 40 microseconds.  Hence, while overall network utilization at Facebook was low even at short time scales, there were significant, correlated packet losses at very fine timescales.  These microbursts overflowed the limited packet buffering in commodity switches (see my earlier post for more discussion on this issue).
To deal with the significant slowdown that resulted from synchronized losses within relatively small TCP windows, Facebook built a custom congestion-aware UDP-based transport that managed congestion across multiple requests rather than within a single connection. This optimization allowed Facebook to avoid, for example, the 200 ms retransmission timeouts associated with the loss of an entire window’s worth of data in TCP.
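The details of Facebook’s transport were not part of the talk, but the core idea of managing congestion across requests rather than per connection can be sketched as follows (a toy asyncio illustration with a made-up window size, not Facebook’s protocol): cap the number of fetches in flight with a single shared window so that hundreds of replies cannot arrive in the same microsecond burst.

```python
import asyncio

class SharedWindow:
    """Toy illustration: one semaphore caps outstanding fetches across *all*
    memcache servers, i.e., congestion is managed across requests rather
    than within any single connection."""

    def __init__(self, max_in_flight=32):        # made-up window size
        self.sem = asyncio.Semaphore(max_in_flight)

    async def fetch(self, server, keys):
        async with self.sem:                     # wait for a slot in the shared window
            await asyncio.sleep(0.001)           # stand-in for the real request/response
            return {k: b"..." for k in keys}

async def compose_page(batches):
    window = SharedWindow()
    results = await asyncio.gather(
        *(window.fetch(server, keys) for server, keys in batches.items())
    )
    merged = {}
    for result in results:
        merged.update(result)
    return merged

batches = {"mc001.example.com": ["user:1:profile"], "mc002.example.com": ["user:2:profile"]}
page_data = asyncio.run(compose_page(batches))
```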
Authoritative Storage
Authoritative Facebook data is stored in a pool of MySQL servers. The overall experience with MySQL has been very positive at Facebook, with thousands of MySQL servers in multiple datacenters.  It is simple, fast, and reliable.  Facebook currently has 8,000 server-years of runtime experience without data loss or corruption.
Facebook has learned a number of lessons about data management:
  • Shared architecture should be avoided; there are no joins in the code.
  • Storing dynamically changing data in a central database should be avoided.
  • Similarly, heavily-referenced static data should not be stored in a central database.
There are a number of challenges with MySQL as well, including:
  • Logical migration of data is very difficult.
  • Creating a large number of logical databases and load-balancing them over a varying number of physical nodes (see the sketch after this list).
  • Easier to scale CPU on web tier than on the DB tier.
  • Data-driven schemas make for happy programmers but difficult operations.
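As a sketch of the “many logical databases over fewer physical nodes” idea mentioned above (my own illustration; the shard counts and hostnames are made up): hash each user to a fixed logical shard, and keep a separate, editable mapping from logical shards to physical MySQL hosts so that shards can be moved without re-hashing users.

```python
N_LOGICAL = 4096     # fixed number of logical databases (made-up value)

# Movable assignment of logical shards to physical MySQL hosts (hypothetical names).
shard_to_host = {shard: f"mysql{shard % 200:03d}.example.com" for shard in range(N_LOGICAL)}

def shard_for_user(user_id):
    """A user's logical shard never changes..."""
    return user_id % N_LOGICAL

def host_for_user(user_id):
    """...but the shard-to-host map can be edited to rebalance load."""
    return shard_to_host[shard_for_user(user_id)]

# Rebalancing: move logical shard 17 to a less loaded machine without touching user IDs.
shard_to_host[17] = "mysql201.example.com"
print(host_for_user(17))      # any user on shard 17 is now served by the new host
```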

Lots of examples of Facebook’s contribution back to open source here.

Given its global user population, Facebook eventually had to move to replicating its content across multiple data centers.  Facebook now runs two large data centers, one on the West coast of the US and one on the East coast.  However, this introduces the age-old problem of data consistency. Facebook adopts a primary/slave replication scheme where the West coast MySQL replicas are the authoritative stores for data.  All updates are applied to these master replicas and asynchronously replicated to the slaves on the East coast.  However, without synchronous updates, consecutive requests to the same data item from the same user can return inconsistent or stale results.
The approach taken at Facebook is to set a cookie on user update requests that will redirect all subsequent requests from that user to the West coast master for some configurable time period to ensure that read operations do not return inconsistent results.  More details on this approach are available on the Facebook blog.
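A minimal sketch of that sticky-master idea (my own illustration; the cookie name and the 20-second window are made up, and the real mechanism is the one described on the Facebook blog): after a write, mark the user with an expiring cookie, and route reads from marked users to the West coast master so they see their own writes.

```python
import time

STICKY_SECONDS = 20.0    # hypothetical bound on West-to-East replication lag

def handle_write(user_cookies):
    """After applying an update on the West coast master, mark the user 'sticky'."""
    # ... apply the update to the master databases here ...
    user_cookies["use_master_until"] = time.time() + STICKY_SECONDS

def choose_read_target(user_cookies):
    """Recent writers read from the master; everyone else can use the local replica."""
    if time.time() < float(user_cookies.get("use_master_until", 0)):
        return "west-coast-master"
    return "local-replica"

cookies = {}
handle_write(cookies)
assert choose_read_target(cookies) == "west-coast-master"
```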
Areas for future research at Facebook:
  • Load balancing
  • Middle tier: balance between programmer productivity and machine efficiency
  • Graph-based caching and storage systems
  • Search relevance via the social graph
  • Object discovery and ranking
  • Storage systems
  • Personalization
Jeff also relayed an interesting philosophy from Mark Zuckerberg: “Work fast and don’t be afraid to break things.”  Overall, the idea is to avoid working cautiously the entire year and delivering rock-solid code, but not much of it.  A corollary: if you take the entire site down, it’s not the end of your career.

Harsha Madhyastha and Colleagues Win Best Paper Award

Harsha Madhyastha‘s paper “Moving Beyond End-to-End Path Information to Optimize CDN Performance” won the best paper award at IMC 2009.  The paper presents measurements from Google’s production CDN to show that redirecting clients to the nearest CDN node will not necessarily result in the lowest latency.  Harsha and his colleagues built a tool called WhyHigh, in production use at Google, that uses a series of active measurements to diagnose the cause of inflated latency for the relatively large number of clients that experience poor latency to individual CDN nodes.  Definitely a worthwhile read.

Congratulations Harsha!

The Ever Changing Face of the Internet

Craig Labovitz made a very interesting presentation at the recent NANOG meeting on the most recent measurements from Arbor’s ATLAS Internet observatory.  ATLAS takes real-time Internet traffic measurements from 110+ ISPs, covering more than 14 Tbps of Internet traffic.  One of the things that makes working in and around Internet research so interesting (and gratifying) is that the set of problems is constantly changing because the way that we use the Internet and the requirements of the applications that we run on the Internet are constantly evolving.  The rate of evolution has thus far been so rapid that we constantly seem to be hitting new tipping points in the set of “burning” problems that we need to address.

Craig, currently Chief Scientist at Arbor Networks, has long been at the forefront of identifying important architectural challenges in the Internet.  His modus operandi has been to conduct measurement studies at a scale far beyond what might have been considered feasible at any particular point in time.  His paper on Delayed Internet Routing Convergence from SIGCOMM 2000 is a classic, among the first to demonstrate the problems with wide-area Internet routing using a 2-year study of the effects of simulated failure and repair events injected from a “dummy” ISP across the many peering relationships that MERIT enjoyed with Tier-1 ISPs.  The paper showed that Internet routing, previously thought to be robust to failure, would often take minutes to converge after a failure event as a result of shortcomings of BGP and the way that ISPs typically configured their border routers.  This paper spawned a whole cottage industry on research into improved inter-domain routing protocols.

This presentation had three high level findings on Internet traffic:

  • Consolidation of Content Contributors: 50% of Internet traffic now originates from just 150 Autonomous Systems (down from thousands just two years ago).  More and more content is being aggregated through big players and content distribution networks.  As a group, CDNs account for approximately 10% of Internet traffic.
  • Consolidation of Applications: The browser is increasingly running applications.  HTTP and Flash are the predominant protocols for application delivery.  One of the most interesting findings from the presentation is that P2P traffic as a category is declining fairly rapidly.  As a result of efforts by ISPs and others to rate-limit P2P traffic, in a strict “classifiable” sense (by port number), P2P traffic accounts for less than 1% of Internet traffic in 2009.  However, the actual number is likely closer to 18% when accounting for various obfuscation techniques.  Still, this is down significantly from estimates just a few years ago that 40-50% of Internet traffic consisted of P2P downloads.  Today, with a number of sites providing both paid and advertiser-supported audio and video content, the fraction of users turning to P2P for their content is declining rapidly.  Instead, streaming of audio and video over Flash/HTTP is one of the fastest growing application segments on the Internet.
  • Evolution of Internet Core: Increasingly, content is being delivered directly from providers to consumers without going through traditional ISPs.  Anecdotally, content providers such as Google, Microsoft, Yahoo!, etc. are peering directly with thousands of Autonomous Systems so that web content from these companies to consumers skips any intermediary tier-X ISPs in going from source to destination.
    When ranking AS’s by the total amount of data either originated or transited, Google ranked third and Comcast 6th in 2009, meaning that for the first time, a non-ISP ranked in the top 10.  Google accounts for 6% of Internet traffic, driven largely by YouTube videos.

Measurements are valuable in providing insight into what is happening in the network but also suggest interesting future directions.  I outline a few of the potential implications below:

  • Internet routing: with content providers taking on an ever larger presence in the Internet topology, one important question is the resiliency of the Internet routing infrastructure.  In the past, domains that wished to remain resilient to individual link and router failures would “multi-home” by connecting to two or more ISPs.  Content providers such as Google would similarly receive transit from multiple ISPs, typically at multiple points in the network.  However, with an increasing fraction of Internet content and “critical” services provided by an ever-smaller number of Internet sites and with these content providers directly peering with end customers rather than going through ISPs, there is the potential for reduced fault tolerance for the network as a whole.  While it is now possible for clients to receive better quality of service with direct connections to content providers, a single failure or perhaps a small number of correlated failures can potentially have much more impact on the resiliency of network services.
  • CDN architecture: The above trend can be even more worrisome if the cloud computing vision becomes reality and content providers begin to run on a small number of infrastructure providers.  Companies such as Google and Amazon are already operating their own content distribution networks to some extent and clearly they and others will be significant players in future cloud hosting services.  It will be interesting to consider the architectural challenges of a combined CDN and cloud hosting infrastructure.
  • Video is king: with an increasing fraction of Internet traffic devoted to video, there is significant opportunity in improved video and audio codecs, caching, and perhaps the adaptation of peer-to-peer protocols for fixed infrastructure settings.


Elements of A Terrific Visualization

Now for a bit of a diversion from the usual topics I have been writing about lately.  I have always been a big fan of a good visualization, and I recently ran across an excellent one here, depicting some of the inputs and outputs that go into left-leaning versus right-leaning political thinkers.  Of course, it is not perfect and of course it makes some simplifications.  But there are two things I like about it:

  • It identifies some very interesting bases for different ways of thinking, for example valuing freedom over equality or vice versa.
  • Either “side” looking at the visualization would likely think “I always knew the other side was fundamentally flawed and that I was right all along.”  So it fairly (overall) represents different viewpoints without passing judgement.

This depiction made me think of Minard’s classic 1869 visualization of Napoleon’s land campaign through Russia during 1812-1813.

The Economist Discovers Cloud Computing

Once The Economist starts writing about a technology topic, you know that it has hit the mainstream.  The print edition has a nice overview article on Cloud Computing this week, reproduced online here.  I’ve written just a bit on this topic in an earlier post, but to summarize the driving forces behind Cloud Computing as seen by The Economist:

  • Economies of scale: The large service providers can deliver computation and storage more cheaply by amortizing the cost over a large customer base. The expertise is already available in house to manage hardware installations, software upgrades, backup, fault tolerance, etc.
  • Convenience: users will be able to access their data and services from any device, anywhere.
  • Instant access to tremendous computation: new startups with the latest technology breakthroughs won’t have to invest in machine rooms filled with servers or hire the people to run them.  Instead, they can pay for the necessary computation and storage by the hour on, for instance, Amazon Web Services.

Of course, Cloud Computing comes with the usual list of dangers and pitfalls:

  • Lock in: one cloud computing provider may become dominant, crowding out all competitors perhaps through unfair business practices.  Even if there is a vibrant ecosystem, moving data from one cloud provider to another may not be easy.
  • Loss of privacy: large companies may maintain significant information about their users, for example, the entire search history of every user.
  • Lack of safety: there are numerous examples of cloud service providers losing customer data entirely.  Just recently, Danger, a subsidiary of Microsoft, lost the contacts, photos, etc. of a large number of users.

Perhaps my vision is obfuscated by all the hype, but I believe that the delivery of computing and storage as a utility for a significant class of applications is an inevitability.  The list of above challenges is of course incomplete.  For instance, see some very nice work from my colleagues on the privacy of computation in cloud environments.  But I see these challenges as opportunities for industry and researchers in academia to address some of the pressing problems facing larger-scale adoption of Cloud Computing.

The Blurring of Layer 2 and Layer 3

Back when I took my graduate course on computer networks (from the tremendous Domenico Ferrari at UC Berkeley), the material was still taught strictly based on the seven-layer OSI protocol stack.  Essentially, our textbook had one chapter for each of the seven layers.  The running joke about the OSI model is that no one understands exactly what layer 5 (the session layer) and layer 6 (the presentation layer) are all about.  In networking, we spend lots of time talking about layers 1, 2, 3, 4, and 7, but almost none about layers 5 and 6.  Recently, people have even started talking about layer 0, e.g., the material scientists who create some of the physical substrates that support high levels of bandwidth on optical networks, and layer 8, the higher-level meaning that might be extracted from collections of applications and data, e.g., the Semantic Web.

What I have found interesting as of late, however, is that the line between two of the more well-defined layers, layer 2 (the network layer) and layer 3 (the internetwork layer), has become increasingly blurred.  In fact, I would argue that much of the functionality that was traditionally relegated to either layer 2 or layer 3 is now duplicated across both.  In the past, layer 2 was about getting data to/from hosts on the same physical network.  Layer 3 was about getting data among hosts on different physical networks.  Presumably, delivering data for hosts on, for instance, the same LAN segment should allow for simplifying assumptions relative to delivering data between networks.

However, technology forces have pushed us to a point where everything is about “inter-networking”.  A single physical LAN in isolation is just not interesting.  One would think that this would mean that layer 2 protocols would become increasingly marginalized and less important.  All the action should be at layer 3, because inter-networking is where all the action is.

However, just the opposite is in fact happening.  Just about all traditional layer 3/inter-networking functionality is migrating to layer 2 protocols.  So if one were to squint just a little bit, functionality at layer 2 and layer 3 is virtually indistinguishable and often duplicated.  Just as interesting perhaps is that layer 2 may in fact be the place where inter-networking takes place by default, at least within the campus, the enterprise, and the data center.  It would be too radical (for now) for me to make claims about it extending to the Internet as a whole, though a number of projects, including the 100×100 effort, have considered this very position.

Here, I will consider some of the reasons why inter-networking is migrating to layer 2.  There are at least two major forces at work here.

  • The first issue goes back to the original design of the Internet and its protocol suite.  The designers of the Internet made a crucial, and at the time entirely justified, design decision/optimization.  They used a host’s IP address to encode both its globally unique address and its hierarchical position in the global network.  That is, a host’s 32-bit IP address would be both the guaranteed unique handle for all potential senders and the basis for scalable routing/forwarding in Internet routers.  I recently heard a talk from Vint Cerf where he said that this was the one decision that he most wishes he could revisit.  This design point was perfectly reasonable, and in fact a very nice optimization, as long as Internet hosts never, or at least very rarely, changed locations in the network.  As soon as hosts could move from physical network to physical network with some frequency, conflating host location with host identity introduced a number of challenges.  And of course today, we have exactly this situation with WiFi, smart phones, and virtual machine migration.  The problem stems from the fact that scalable Internet routing relies on hierarchically encoding IP addresses.  All hosts on the same LAN share the same prefix in their IP address; all hosts in the same organization share the same (typically shorter) prefix; etc.  (A small sketch after this list illustrates prefix-based forwarding.)

    When a host moves from one layer 2 domain (previously one physical network) to another layer 2 domain, it must change its IP address (or use fairly clumsy forwarding schemes originally developed to support IP mobility with home agents, etc.).  Changing a host’s IP address breaks all outstanding TCP connections to that host and of course invalidates all network state that remote hosts were maintaining regarding a supposedly globally unique name.  Of course, it is worth noting that when the Internet protocols were being designed in the 70’s, an optimization targeting the case where host mobility was considered to be rare was entirely justified and even very clever!

  • The second major force at work in pushing inter-networking functionality into layer 2 is the relative difficulty of managing large layer-3 networks.  Essentially, because of the hierarchy imposed on the IP address name space, layer 3 devices in enterprise settings have to be configured with the unique subnet number corresponding to the prefix the switches are uniquely responsible for.  Similarly, end hosts must be configured through DHCP to receive an IP address corresponding to the first hop switch they connect to.
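
Here is the small sketch promised above, using Python’s ipaddress module (the prefixes and next hops are made up): a router holds one forwarding entry per prefix and picks the longest matching prefix, which is exactly why a host that moves to a different subnet must take on a new address under the new prefix.

```python
import ipaddress

# Toy forwarding table: one entry per prefix covers every host underneath it.
routes = {
    ipaddress.ip_network("10.1.0.0/16"): "toward campus A",
    ipaddress.ip_network("10.1.5.0/24"): "toward building 5",   # more specific prefix
    ipaddress.ip_network("0.0.0.0/0"):   "default uplink",
}

def forward(dst):
    """Longest-prefix match: choose the most specific route containing dst."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routes if addr in net), key=lambda net: net.prefixlen)
    return routes[best]

print(forward("10.1.5.9"))    # "toward building 5"
print(forward("10.1.7.3"))    # "toward campus A"
# If host 10.1.5.9 physically moves to building 7, it must take a 10.1.7.x
# address to remain reachable: the location/identity conflation discussed above.
```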

It is for these reasons that network designers and administrators became interested in managing multiple physical networks as a single layer 2 domain, even going back to some of the original work on layer 2 bridging and spanning tree protocols. In an extended LAN, any host could be assigned any IP address and could maintain its IP address as it moved from switch to switch.  For instance, consider a campus WiFi network.  Technically, each WiFi base station forms its own distinct physical network.  If each base station were managed as a separate LAN, then hosts moving from one base station to another would need to be assigned a new IP address corresponding to the new subnet.  Similarly, with the advent of virtualization in the enterprise and data center, it is no longer necessary for a host to physically migrate from one network to another.  For load balancing, planned upgrades, and thermal management, it is desirable to migrate virtual machines from one physical host to another.  Once again, migrating a virtual machine should not necessitate resetting the machine’s globally unique name.

Of course, putting inter-networking functionality into layer 2 comes with significant challenges, especially when considering “textbook” Ethernet, perhaps the most popular layer 2 network protocol:

  • Forwarding across LANs at layer 2 involves a single spanning tree that may result in sub-optimal routes and, worse, admits only a single path between each source and destination.
  • A number of support protocols, such as ARP, require broadcasting to the entire layer 2 domain, potentially limiting overall scalability.
  • Aggregation of forwarding entries becomes difficult or impossible because of flat MAC addresses, increasing the amount of state in forwarding tables.  An earlier post discusses the memory limitations in modern switch hardware that make this issue a significant challenge.
  • Forwarding loops can go on forever, since layer 2 protocols do not have a TTL or Hop Count field in the header to enable looping packets to eventually be discarded.  This is especially problematic for broadcast packets.  (A small sketch after this list illustrates the difference.)
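
Here is the sketch referenced in the last bullet (a toy simulation of my own, not any real protocol implementation): with a TTL the packet dies after a bounded number of hops; without one, only an artificial cap stops the loop.

```python
def forward_until_dropped(ttl, max_hops=10_000):
    """Simulate a frame bouncing between two looped switches.  A layer 3 packet
    carries a TTL and is dropped when it reaches 0; a classic Ethernet frame
    has no such field, so only the artificial cap here ends the loop."""
    hops = 0
    while hops < max_hops:
        hops += 1
        if ttl is not None:
            ttl -= 1
            if ttl == 0:
                return f"dropped after {hops} hops"
    return f"still looping after {max_hops} hops"

print(forward_until_dropped(ttl=64))     # layer 3 behavior: dropped after 64 hops
print(forward_until_dropped(ttl=None))   # layer 2 behavior: loops until the cap
```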

In a subsequent post, I will discuss some of the techniques being explored to address these challenges.


