March 18, 2014 - 2:33 am by Joss Whittle
3D Printing Graphics PhD
Before I found out I’d be doing a PhD this year I submitted an MEng project proposal to a professor in my department with the idea to build a cheap 3D Printer out of Lego Technic. The plan was to use the readily available, easy to assemble parts and motors of Lego Technic alongside a powerful yet easy to program Arduino unit as the brain. I even got so far as to make some preliminary CAD mockups and a working mockup of the chassis.
But with starting my PhD research, I was forced to shelve the project (like so many others) so as to focus on my actual work.
Lately however, it came to my attention that the Computer Science department had bought a Makerbot Replicator 2X 3D Printer, which is available to members of the research group as both a research tool and a teaching tool. The mumbling consensus in the department is that it’s fine to use the printer “within good reason and common sense…”, and that by the logic of “teaching yourself to use it means you can potentially teach undergrads about it in the future…”, as long as it doesn’t hinder our own or other people’s research it’s fair to use it!
After all, what’s the point in owning the fun toys if we don’t play with them? :)
With this in mind, there’s been something on my list of Stuff I Want To Own for a long time: that is, of course, a Utah Teapot!
The Utah Teapot is a famous dataset in the Computer Graphics community, and used to be an official measurement of graphics performance in terms of Number of Teapots Rendered per Second
(I’m not kidding).
After a few hours of playing around modifying the original dataset in Autodesk Maya to flesh it out as a proper object, it was time to print! Yes, I know, I almost certainly could have found a better model online that was properly modified and tested on a 3D Printer… but this is a dataset I have been rendering and working with for years now, and it only felt right that I should try to modify it myself to print one. Though if I ever go back to print another, bigger teapot, I’ll definitely be using one like the one I linked, designed in AutoCAD rather than an artistic modelling package.
The whole print took just over 100 minutes to complete, and I probably spent an additional ~30 minutes or so sanding it down with some fine grit sandpaper to give it a smooth finish.
Tags
3D Printing
March 17, 2014 - 4:16 pm by Joss Whittle
Matlab PhD
Pressing onwards with research after a short break to play with the Xeon Phi rack, I’ve been working on visualizations for Monte Carlo Simulations.
My aim is to have a clean and concise means of displaying (and therefore being able to infer relationships from) the data of higher dimensional probability distributions. The video below shows one such visualization where a 2D Gaussian PDF has been simulated using Hamiltonian Dynamics.
Hamiltonian Dynamics Monte Carlo
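For anyone curious what this looks like in code, below is a minimal MATLAB sketch of a leapfrog Hamiltonian Monte Carlo update targeting a 2D Gaussian. The covariance, step size, and number of leapfrog steps are illustrative assumptions, not the settings used for the video.

```matlab
% Minimal Hamiltonian Monte Carlo sampler for a 2D Gaussian (illustrative only).
Sigma  = [1.0 0.6; 0.6 1.0];           % assumed target covariance
invS   = inv(Sigma);
logP   = @(q) -0.5 * q' * invS * q;    % log density, up to an additive constant
gradLP = @(q) -invS * q;               % gradient of the log density

nSamples = 5000; stepSize = 0.1; nSteps = 20;   % assumed integrator settings
q = [0; 0]; samples = zeros(2, nSamples);

for i = 1:nSamples
    p  = randn(2, 1);                  % resample the momentum
    q0 = q; p0 = p;

    % Leapfrog integration of the Hamiltonian dynamics
    p = p + 0.5 * stepSize * gradLP(q);
    for l = 1:nSteps-1
        q = q + stepSize * p;
        p = p + stepSize * gradLP(q);
    end
    q = q + stepSize * p;
    p = p + 0.5 * stepSize * gradLP(q);

    % Metropolis accept/reject on the change in total energy
    dH = (logP(q) - 0.5 * (p' * p)) - (logP(q0) - 0.5 * (p0' * p0));
    if log(rand) >= dH
        q = q0;                        % reject: keep the previous sample
    end
    samples(:, i) = q;
end
```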
The two 3D meshes represent the reconstructed sample volume as a 2D histogram of values rescaled to the original function. The error between the reconstruction and the original curves is shown through the colour of the surface, where hot spots denote areas of high error.
The mean squared error for the whole distribution is displayed in the top right on a logarithmic scale. An algorithm which converges in an ideal manner will graph its error as a straight line on the log scale.
The sample X and Y graphs in the bottom right allow us to visualize where samples are being chosen as the simulation runs, and to infer whether samples are being chosen independently of one another. The centre-bottom graph simply gives us a trajectory of samples over the course of the simulation. This additionally allows us to catch whether an algorithm is prone to getting stuck in local maxima.
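As a rough illustration of how such an error surface can be assembled (this is not the plotting code behind the videos), the sketch below assumes `samples` is the 2×N matrix produced by a sampler like the one above, and reuses the same assumed covariance.

```matlab
% Illustrative reconstruction-error surface (not the code used for the videos).
Sigma = [1.0 0.6; 0.6 1.0];                        % same assumed covariance
invS  = inv(Sigma);

edges   = linspace(-4, 4, 51);                     % assumed histogram bin edges
centres = edges(1:end-1) + diff(edges) / 2;
[X, Y]  = meshgrid(centres, centres);

% 2D histogram of the samples, normalised to a density estimate
counts  = histcounts2(samples(1,:), samples(2,:), edges, edges);
binArea = (edges(2) - edges(1))^2;
recon   = counts' / (size(samples, 2) * binArea);  % rows = y bins, cols = x bins

% True Gaussian density evaluated at the bin centres
quad  = invS(1,1)*X.^2 + 2*invS(1,2)*X.*Y + invS(2,2)*Y.^2;
truth = exp(-0.5 * quad) / (2 * pi * sqrt(det(Sigma)));

% Plot the reconstruction coloured by absolute error (hot spots = high error)
surf(X, Y, recon, abs(recon - truth));
colormap hot; colorbar;

% Mean squared error over the whole distribution, tracked on a log scale
mse = mean((recon(:) - truth(:)).^2);
```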
Metropolis Hastings Monte Carlo
The above video shows the same simulation run with a basic Metropolis Hastings algorithm. Here the proposal sample’s x and y attributes are chosen independently of one another, meaning this is not a Gibbs Sampler, although I intend to implement one within the test framework for comparison’s sake soon.
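For comparison with the Hamiltonian sketch above, a minimal random-walk Metropolis Hastings update looks something like the following; the proposal width is an assumed value, and the x and y components of each proposal are drawn independently of one another.

```matlab
% Minimal random-walk Metropolis Hastings sampler for the same 2D Gaussian
% (illustrative only; the proposal width is an assumed value).
Sigma = [1.0 0.6; 0.6 1.0];
invS  = inv(Sigma);
logP  = @(q) -0.5 * q' * invS * q;     % log density, up to an additive constant

nSamples = 5000; propStd = 0.5;
q = [0; 0]; samples = zeros(2, nSamples);

for i = 1:nSamples
    % x and y proposed independently of one another (not a Gibbs sweep)
    qProp = q + propStd * randn(2, 1);
    if log(rand) < logP(qProp) - logP(q)
        q = qProp;                     % accept
    end
    samples(:, i) = q;                 % on rejection the old sample repeats
end
```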
A key difference between the two simulations is seen by comparing the Path Space Trajectory shown in the bottom-centre graph with the Sample Dimension Space graphs shown on the middle and bottom-right. Hamiltonian Dynamics chooses sets of variables where the x and y elements are highly dependent on one another within a coordinate, yet almost entirely independent of other pairs of coordinates within the Path Space.
Metropolis Hastings, on the other hand, chooses coordinate pairs entirely independently of one another, and values within each dimension of the Path Space that are highly dependent on one another. These characteristics of the Metropolis sampler are undesirable, which is why adding a dependency within coordinate pairs (Gibbs Sampling) helps to accelerate multi-dimensional Metropolis Samplers.
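To make that last point concrete, here is a sketch of what a Gibbs sweep would look like for the same zero-mean 2D Gaussian, with each coordinate redrawn from its conditional distribution given the other; the correlation value is again an assumption.

```matlab
% Minimal Gibbs sampler for the same zero-mean 2D Gaussian (illustrative only).
rho     = 0.6;                          % assumed correlation, unit variances
condStd = sqrt(1 - rho^2);              % std of each conditional distribution

nSamples = 5000;
q = [0; 0]; samples = zeros(2, nSamples);

for i = 1:nSamples
    % Each coordinate is drawn exactly from its conditional given the other,
    % so every draw is accepted and x and y become dependent within a sample.
    q(1) = rho * q(2) + condStd * randn;   % x | y ~ N(rho*y, 1 - rho^2)
    q(2) = rho * q(1) + condStd * randn;   % y | x ~ N(rho*x, 1 - rho^2)
    samples(:, i) = q;
end
```

Because every conditional draw is accepted, the dependency between x and y inside each sample comes for free, which is exactly the within-coordinate dependency the paragraph above argues the plain Metropolis sampler is missing.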
Tags
Bayesian Statistics, DataVis, Hamiltonian, Matlab, Metropolis-Hastings, Monte Carlo Integration