Monday, July 25, 2011

Recipe for Debugging

How do you take a working script and convince yourself it is broken?

1. Try to use outdated sub-scripts in another directory
2. Don't consider the test inputs as a potential source of error
3. All of the above.

In other words, the CSV-generating scripts finally have headers! Better analysis, here we come. The other reason for the victory-dance atmosphere on the project is that (knock on wood) all the scripts are working and we're collecting the last run of GPU-aware workloads!! The end of data collection is finally nigh. I'm sure I'll regret this phase ending when I have to sit down and write about the analysis. In the meantime it wouldn't be unreasonable to feel a little burned out on stalking data logs.
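For the curious, getting headers into the output is just a matter of writing the column names before any data rows. Here is a minimal sketch in Perl; the field names and sample row are placeholders, not the scripts' actual columns:

use strict;
use warnings;

# Placeholder columns and one fake sample row, for illustration only.
my @fields = ('timestamp', 'watts', 'cpu_util', 'disk_util');
my @rows   = ([1311609600, 142.3, 87.5, 3.2]);

open my $out, '>', 'run_summary.csv' or die "Cannot open run_summary.csv: $!";
print {$out} join(',', @fields), "\n";           # header row first
print {$out} join(',', @$_), "\n" for @rows;     # data rows follow
close $out;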

Thursday, July 21, 2011

Cable Drama (Not of the Soap Variety)

Let me begin with a sincere thank you to the folks at Microsoft Research in Mountain View for donating a WattsUp-meter-friendly USB cable. Anyone who has to depend on these meters knows their cables can't be easily replaced with a quick trip to the store, thanks to the recessed port on the meter itself.

The story behind this donation revolves around two WattsUp meters [see left in the red circles].

One of these meters began to return too many errors and bad packets. Not a problem: we avoided it by changing the setup so the Machine Under Test (MUT) always used the "good" meter. Sharing that one "good" meter between the two computers did cause a minor headache, because it left room for errors to creep in. For each workload we wanted to run on both boxes (meaning all benchmarks), we had to swap meters. That meant shutting down the computers, unplugging a great many cables, and hoping nothing got mixed up when the cables were plugged back into their new configuration.

How does a new cable play into this meter drama? The last time we re-configured the wiring, our "good" meter began to exhibit the same behavior as the "bad" meter. The only things that had changed were the USB cables relaying data from the MUT's meter to the Data Acquisition Machine. It turns out both meters work fine; it was a dud cable.

Dr. Rivoire, our faculty adviser, had recently replaced the cables on the WattsUp meters used in her research at Microsoft because of a similar problem. QED: let's change out the cable. In short, the bottleneck on progress has been resolved, thanks to a coworker of Dr. Rivoire's who cut and refitted the casing on a USB cable to fit WattsUp's unusual recessed port.

Workloads ahoy! I'll finally have a chance to use my new parsing script, written in Perl, which abuses a few regular-expression tricks to handle our unique logs.
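To give a rough idea of what the parsing script does (the log-line format shown here, "<epoch seconds> watts=<value>", is an assumption for illustration; the real logs aren't reproduced in this post):

use strict;
use warnings;

# Read raw log lines on STDIN, pull out the fields with a regex,
# and emit one CSV row per matched line.
while (my $line = <STDIN>) {
    if ($line =~ /^(\d+)\s+watts=([\d.]+)/) {
        my ($ts, $watts) = ($1, $2);
        print "$ts,$watts\n";
    }
}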

Friday, July 1, 2011

Progress Summary

Our project's model is currently trained on workloads where the CPU is the main consumer of dynamic power. Results are fragmented between the two test machines, lolcat and rickroll, due to data-collection errors. The next steps planned are finishing data collection, examining GPU benchmarks (possibly with the added benefit of instrumenting the GPU), and analyzing oddities within the results.

For FDTD3D (a GPU benchmark), rickroll's MSE, rMSE/mean, and DRE all show a jump in the results. The MSE at frequency 2000 is 4.13 and climbs to 318.29 at frequency 2200. DRE repeats this jump between the two frequencies, moving from 0.10 to 0.63. Root MSE/mean changes from 0.01 at 2000 to 0.12 at 2200. A reasonable hypothesis would be that below 2200 the workload is CPU-bound, but the other data presently does not support this explanation.
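For readers following the numbers, this is a sketch of how these metrics can be computed from measured and predicted power samples. The DRE definition used below (mean absolute error normalized by the measured dynamic range) is my assumption for the sketch; the project may define it differently.

use strict;
use warnings;
use List::Util qw(sum max min);

# Compute MSE, root-MSE-over-mean, and an assumed DRE for two
# equal-length array refs of measured and predicted power samples.
sub metrics {
    my ($measured, $predicted) = @_;
    my $n     = scalar @$measured;
    my @err   = map { $predicted->[$_] - $measured->[$_] } 0 .. $n - 1;
    my $mse   = sum(map { $_ ** 2 } @err) / $n;
    my $mean  = sum(@$measured) / $n;
    my $range = max(@$measured) - min(@$measured);
    my $rmse_over_mean = sqrt($mse) / $mean;
    my $dre   = sum(map { abs $_ } @err) / $n / $range;   # assumed definition
    return ($mse, $rmse_over_mean, $dre);
}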

Using two benchmarks, nbody and binomialOptions, as train and test sets (same benchmark for both training and testing, nbody as train with binomialOptions as test, and vice versa), lolcat's results stress how unaware the model is of the GPU's influence on the expected power (though the measured power does correlate well with CPU and disk for these workloads). The model cannot predict a reasonable expected power when the GPU is stressed in addition to the CPU, or when the GPU is stressed but the CPU is not.

Once calibration-data recollection on lolcat finishes and is analyzed for errors, the next step will be adding GPU awareness. For more insight into the GPU's role in power consumption, nvidia-smi will be used for GPU instrumentation. The model can't predict power beyond the CPU exercising at 100%, but if a GPU-aware component is added, the prediction should be less erroneous.
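A sketch of what the nvidia-smi polling might look like, assuming a driver whose nvidia-smi supports the --query-gpu flags; older builds only offer "nvidia-smi -q", whose text output would have to be parsed instead.

use strict;
use warnings;

# Poll nvidia-smi once per second for GPU power and utilization and
# pass the CSV rows through, to be logged alongside the WattsUp samples.
my $cmd = 'nvidia-smi --query-gpu=timestamp,power.draw,utilization.gpu '
        . '--format=csv,noheader -l 1';
open my $smi, '-|', $cmd or die "Cannot run nvidia-smi: $!";
while (my $row = <$smi>) {
    print $row;
}
close $smi;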