Three Abstracts Submitted for CEC 2006

The following abstracts were submitted for Sun’s internal Customer Engineering Conference 2006. Of course there is no guarantee that this material will be accepted by the CEC panel, but I’d be happy to present the same (or similar) material at other events. If you’re interested, please drop me a line.

Microbenchmarking – Friend or Foe?

Many purchasing, configuration and development choices are made on the basis of benchmark data. Industry organisations such as SPEC and TPC exist to inject a measure of realism and fairness into the exercise. However, such benchmarks are not for the faint-hearted (e.g. they require considerable hardware, software and people resources). Additionally, the customer may feel that an industry-standard benchmark is not sufficiently close to their own perceived requirements. Yet building a bespoke benchmark for a real-world application workload is an order of magnitude harder than going with something “off the peg”. It is at this point that an alarming number of customers make the irrational leap to some form of microbenchmarking — whether it is good old “dd” to test an I/O subsystem, or perhaps LMbench’s notion of “context switch latency”. A system is rarely just the sum of its parts, but the issue often ignored is that a microbenchmark — by its very definition — considers only one tiny component at a time, and covers only a small subset of functionality in total. Furthermore, some microbenchmarks prove to be very poor predictors of actual system performance under real-world workloads.
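To make the trap concrete, here is a minimal sketch of what such a microbenchmark typically boils down to: timing one tiny operation (here a 1 KB write() to /dev/null, echoing the “dd” example above) in a tight loop and reporting a single latency figure. The iteration count and buffer size are arbitrary choices of mine, not taken from any particular tool.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define ITERATIONS 100000
#define BUFSZ 1024

int
main(void)
{
    char buf[BUFSZ] = { 0 };
    struct timeval start, end;
    double usec;
    int fd, i;

    /* the one component under test: small sequential writes */
    if ((fd = open("/dev/null", O_WRONLY)) < 0) {
        perror("open");
        return (1);
    }

    (void) gettimeofday(&start, NULL);
    for (i = 0; i < ITERATIONS; i++)
        (void) write(fd, buf, BUFSZ);
    (void) gettimeofday(&end, NULL);

    usec = (end.tv_sec - start.tv_sec) * 1e6 +
        (end.tv_usec - start.tv_usec);
    (void) printf("write(/dev/null, %d bytes): %.3f usec per call\n",
        BUFSZ, usec / ITERATIONS);

    (void) close(fd);
    return (0);
}

A number produced this way characterises one code path in isolation; it says almost nothing about how a real workload, with its mix of I/O sizes, concurrency and caching effects, will actually behave.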

Is there any place for microbenchmarking? Certainly, we need to be aware that customers may be conducting ill-advised tests behind closed doors. But should we ever dare to engage in such dubious activities ourselves? In short: yes! In the right hands microbenchmarks can highlight components likely to respond well to tuning, and assist in the tuning process itself. This session will focus on libMicro: an in-house, extensible, portable suite of microbenchmarks first used to drive performance improvements in Solaris 10. The libMicro project was driven by the conviction that “If Linux is faster, it’s a Solaris bug”. However, some of the initial data made the case so strongly that at first we adopted the Monsters, Inc. slogan “We scare because we care”! libMicro is now available to you and your customers under the CDDL via the OpenSolaris programme. Key components of libMicro will be demonstrated during this session. The demo will cover data collection, reporting, and the addition of new cases to the suite.
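For a flavour of what “adding a new case” involves, the sketch below shows the rough shape of a libMicro case that times getpid(). It is reconstructed from memory of the harness interface (benchmark_init(), benchmark(), lm_optB and friends), so treat the exact names and signatures as assumptions and consult libmicro.h in the released sources for the definitive API.

#include <unistd.h>
#include <stdio.h>

#include "libmicro.h"    /* harness declarations; part of the suite */

/*
 * Called once by the harness; describes the case and declares that
 * no per-thread state (tsd) is needed.
 */
int
benchmark_init()
{
    (void) sprintf(lm_usage, "notes: measures getpid()\n");
    lm_tsdsize = 0;
    return (0);
}

/*
 * The timed inner loop. The harness times this function and uses
 * res->re_count to derive a per-call latency.
 */
int
benchmark(void *tsd, result_t *res)
{
    int i;

    for (i = 0; i < lm_optB; i++)
        (void) getpid();
    res->re_count = i;

    return (0);
}

The appeal of the harness approach over the standalone loop shown earlier is that timing, batching, multithreading and reporting are handled centrally, and therefore consistently across all cases.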

Note: I took them seriously about the 2500 character, two paragraph limit.

Synchronicity: Solaris Threads and CoolThreads

The Unified Process Model is one of the best-kept secrets in Solaris 10. Yet this “so what?” feature entailed changes to over 1600 source files. But was it all a waste of effort? For over a decade Sun has been recognised as a thought leader in software multithreading, but did we lose the plot when we dropped the idealistic two-level MxN implementation for something much simpler in Solaris 9? To both of these questions we must answer a resounding “No!”. Indeed, the Unified Process Model, under which every process is now potentially a multithreaded process, was only made possible by a simpler, more scalable, more reliable, more maintainable, realistic one-level 1:1 implementation. And all this goodness just happens to coincide with the CoolThreads revolution. As other vendors chime in with CMT, Solaris is streets ahead of Linux and other platforms in being able to deliver real benefits from this technology. It is extremely important that we are able to understand, articulate and exploit this synchronicity.
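As an aside, the 1:1 model is easy to observe for yourself. The sketch below (my own illustration, not material from the abstract) parks a few POSIX threads and leaves the process running so that the LWP count can be inspected; under the 1:1 implementation each pthread corresponds to exactly one kernel LWP, visible with e.g. “prstat -Lp <pid>” on Solaris or “ps -o pid,nlwp -p <pid>”.

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

#define NTHREADS 4

/* each thread parks itself so the LWP count can be inspected at leisure */
static void *
spin(void *arg)
{
    (void) arg;
    (void) pause();
    return (NULL);
}

int
main(void)
{
    pthread_t tid[NTHREADS];
    int i;

    for (i = 0; i < NTHREADS; i++)
        (void) pthread_create(&tid[i], NULL, spin, NULL);

    /* main is itself a thread, so expect NTHREADS + 1 LWPs in total */
    (void) printf("pid %ld: created %d threads\n",
        (long) getpid(), NTHREADS);
    (void) pause();
    return (0);
}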

Note: this time I realised that they didn’t really mean 2500 chars!

DTrace for Dummies

Wonder what all the fuss is about? Need a good reason before you engage your brain with this stuff? Think this may be one new trick too far for an ageing dog? Just curious? Then this session is for you! We have a reputation for making DTrace come alive for even the most sceptical and indifferent of crowds — D is certainly not for “dull” at our shows! Don’t worry, we won’t get you bogged down in syntax or architecture. But we will convince you of the dynamite that is the DTrace observability revolution — that, or you are dumber than we thought! Everything you see will happen live. We don’t use any canned scripts. Anything could happen. You’d be a fool to miss it!

Notes: This was a joint submission from Jon Haslam and me. We’ve found our combination of sound technical content and British humour very effective at getting the DTrace value proposition across to a wide audience. We first did our double act (Jon types while Phil talks) at SUPerG 2004. Following rave reviews we were asked to present a plenary session at SUPerG 2005.
