Wednesday, April 16, 2014

28x faster renders

I finally got a simple BVH implemented. To get a quick idea of what a BVH is, check out the Bounding Volume Hierarchy page on Wikipedia. My tree generation algorithm is pretty simple:

  • if the number of remaining shapes is less than or equal to the target group size
    • add the shapes to this node's list
    • create a bounding box for the group
  • otherwise
    • find the bounding box that contains the remaining shapes
    • figure out on which axis (x, y, or z) the bounding box is longest
    • sort the shapes on the longest axis
    • split the list at the median
    • create a left node containing the first half of the list
    • create a right node containing the second half of the list
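The steps above can be sketched in a few lines. This is a minimal, illustrative version of the median-split build, not my actual implementation: the `Sphere` and `BVHNode` classes are simplified stand-ins, and the target group size of 4 is just a placeholder.

```python
class Sphere:
    """Toy shape: spheres keep the bounding-box math short."""
    def __init__(self, center, radius=1.0):
        self.center = center
        self.radius = radius

def bounds_of(shapes):
    """Axis-aligned box (lo, hi corners) containing every shape."""
    lo = tuple(min(s.center[a] - s.radius for s in shapes) for a in range(3))
    hi = tuple(max(s.center[a] + s.radius for s in shapes) for a in range(3))
    return lo, hi

class BVHNode:
    def __init__(self, shapes, target_group_size=4):
        self.box = bounds_of(shapes)
        self.left = self.right = None
        self.shapes = []
        if len(shapes) <= target_group_size:
            # Small enough: this node is a leaf holding the group.
            self.shapes = shapes
        else:
            # Split on the longest axis of the bounding box, at the median.
            lo, hi = self.box
            axis = max(range(3), key=lambda a: hi[a] - lo[a])
            shapes = sorted(shapes, key=lambda s: s.center[axis])
            mid = len(shapes) // 2
            self.left = BVHNode(shapes[:mid], target_group_size)
            self.right = BVHNode(shapes[mid:], target_group_size)
```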
It's nothing fancy, and I haven't tried to find an optimal group size yet, but it works. And, as the title of this post implies, for certain scenes the render times are almost 28 times as fast. Not too shabby.


Tree traversal is pretty simple, too....
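Roughly, the traversal tests the ray against a node's box and only descends into children whose boxes are hit. Here's a sketch using the standard slab method for the ray/box test; it works on any node with `box`, `shapes`, `left`, and `right` attributes, and the names are illustrative rather than taken from my engine:

```python
def hit_box(box, origin, inv_dir):
    """Slab test: does the ray (origin, 1/direction) touch the box?"""
    lo, hi = box
    tmin, tmax = 0.0, float("inf")
    for a in range(3):
        t1 = (lo[a] - origin[a]) * inv_dir[a]
        t2 = (hi[a] - origin[a]) * inv_dir[a]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(node, origin, direction):
    """Yield every shape stored in a leaf whose box the ray touches;
    those are the only shapes that need a full intersection test."""
    inv_dir = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
    stack = [node]
    while stack:
        n = stack.pop()
        if not hit_box(n.box, origin, inv_dir):
            continue  # prune this whole subtree
        if n.shapes:
            yield from n.shapes
        else:
            stack.extend((n.left, n.right))
```

The pruning is where the speedup comes from: a ray that misses a node's box skips every shape underneath it.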

Wednesday, April 9, 2014

OBJ Importing and Anti-Aliasing

Thanks to a new tutorial by Mike Farnsworth at Renderspud, I added an OBJ Loader class to my ray tracer. My implementation only supports the basics (no quads, vertex normals, or texture coordinates; just triangles), but it works. And what better model to test it on than everyone's favorite open-source monkey, Suzanne:

The model has 968 faces. At 512x512 with 2-4 anti-aliasing samples, this took nearly 4 hours to render on my trusty laptop. It's a start.

Speaking of anti-aliasing, we now have it. There are two modes. The first one sends out an equal number of rays through each pixel, each slightly offset from one another. The results are averaged together to form the final pixel color. The second mode sends out a single ray through each pixel to get a rough image. It then finds the edges (the parts of the image that will benefit most from anti-aliasing) by calculating the color gradient of the image (see Sobel Operator for more on this). Then it re-renders only the high-contrast pixels with a higher number of samples. This way, smooth parts of the image render quickly, but the edges are properly anti-aliased.
 (I'm being really liberal with the definition of the + sign in this graphic)
I'm still fiddling with the edge detection and how to get the best-looking image for the most reasonable render time.
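One way that second mode can be sketched: render one sample per pixel, run a Sobel filter over the rough grayscale image to get a gradient magnitude, and flag pixels above a threshold for re-rendering with more samples. The threshold and the exact policy here are placeholders, not my tuned values:

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

def pixels_to_refine(img, threshold=0.5):
    """(x, y) coordinates of edge pixels that deserve extra AA samples."""
    mag = sobel_magnitude(img)
    return [(x, y) for y, row in enumerate(mag)
            for x, m in enumerate(row) if m > threshold]
```

Smooth regions produce a gradient near zero and keep their single sample; only the flagged edge pixels get the expensive multi-sample pass.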

Now it's time to start implementing an acceleration structure like a BVH.

Thursday, April 3, 2014

A quick update

So I've been quite busy since the last post. The progress is very encouraging. And a lot of fun. Here's a screenshot of my very first raytraced image. (When I say 'first' I actually mean first):
There it is! Six red spheres on a green plane. Can't you see them? Well, at least I got the color right on the spheres.

Here's my latest image. As you can tell, it's come a long way.
I've implemented point lights, a diffuse shader, a reflective shader, plane intersection, and sphere intersection. The diffuse shader took me about 3 hours to write. Once that was working, the reflective shader took about 10 minutes. When I find some time, I'll implement a specular shader, then get started on anti-aliasing.
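For the curious, here's roughly what the math behind those two shaders looks like. Vectors are plain 3-tuples and the names are illustrative, not my engine's actual classes:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(normal, to_light, albedo, light_intensity):
    """Lambert shading: brightness falls off with the cosine of the
    angle between the surface normal and the direction to the light."""
    n_dot_l = max(0.0, dot(normal, to_light))
    return tuple(c * light_intensity * n_dot_l for c in albedo)

def reflect(direction, normal):
    """Mirror the incoming ray direction about the surface normal;
    a reflective shader just traces a new ray along this direction
    and uses whatever color that ray returns."""
    d_dot_n = dot(direction, normal)
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))
```

That asymmetry in effort makes sense: once recursive ray tracing works, reflection is a one-line direction formula, while diffuse shading is where the lighting loop, shadow logic, and clamping all first come together.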
Fun fun fun!

Edit: And another screenshot with specular highlights.

Monday, March 24, 2014

Finally starting

So when I started this blog a few years ago, I was very gung-ho about making a render engine. The only problem was that I had no idea what I was doing. I quickly realized I was in way over my head given my skill set at the time. But after taking two computer graphics courses, an algorithms course, working with several 3D packages, talking with professors and people in the industry, and doing a lot of reading on Wikipedia...I'm back.

And I know what I'm doing now. And it's gonna be good. (and I'm doing this for a class project so I have more time and motivation to do it.)

So I thought I'd give an overview of my plans for the engine. I'm open to tips and suggestions; I know for a fact that there are people who are smarter than I am.

I've got at least two phases in mind for the foreseeable future of this project.
Phase 1: A ray tracer that will fulfill the requirements of the class project but leave tons of room for building a pretty good render engine.
Phase 2: A unidirectional path tracer that simulates bounce lighting or 'Global Illumination' as some call it. I'll try and leave room for a modern shader language as well.

In order to keep myself motivated, I'll post a list of the features I'd like to implement. This will also serve to remind me what I hope to achieve. So here goes. My goals for phase 1 are as follows:

  • Recursive ray tracing
  • Shapes:
    • Sphere
    • Plane
    • Triangle (and triangle meshes)
  • Customizable shader model
    • Diffuse
    • Specular
    • Reflection
    • (Maybe) Refraction
  • Two anti-aliasing modes using super sampling:
    • uniform number of AA samples per pixel
    • Variable number of samples: more for edges and fewer for smooth areas
  • Two sample patterns:
    • Grid pattern
    • Jitter pattern
  • Box filter to average the samples (with room for other filters)
  • Point light (spot and area lights will have to wait until phase 2)
  • Camera movement controls
  • Render to OpenGL window
The project is due April 12th, so I've got some work to do. I've already created all of the header files for the classes I'll need. I might end up posting a UML diagram of the project. It never hurts to be organized.

Stay tuned!

Friday, December 6, 2013

Free supercomputer

Here’s a little-known fact: did you know that BYU owns a supercomputer? It’s called the Mary Lou supercomputer, and it’s housed on the top floor of the Crabtree building. This computer is used by students and professors who require an enormous amount of computing power to do their research. It’s been used to compute complex physics simulations, analyze DNA strands, and even render 3D animated short films. Mary Lou is quite the busybody.


At 30 million dollars, Mary Lou’s computing prowess didn’t come cheap. But the scientific advancements and research made possible by this computer have been invaluable.


Wouldn’t it be great to have another one? What if I told you we could give Mary Lou a sister...for free?


Hear me out; it’s not as crazy as it sounds.

BYU campus has dozens of buildings, each containing potentially multiple computer labs that each house hordes of computers. All in all, there are thousands of desktop computers available. This army of computers provides for the computing needs of students throughout the day, but what do the computers do at night? Currently, nothing. They simply sit all night, running, but not doing anything valuable. Why not put those computers to work?

With the help of some open-source software and a bit of organizing we could harness the power of this otherwise unused army.

Here’s how it could work: A student or professor creates a research project that requires a lot of horsepower. Before sending their compute job to the supercomputer, their job would need to be split into small units of work, called packets, which can be worked on concurrently. Once the user submits their job, it goes to a central workstation which controls the flow of each of these packets through the buildings and computer labs on campus. The packets are distributed among the idle computers which perform the required computation, then send the results back to the owner. Even though each individual computer is not extraordinarily fast, as a whole, the entire virtual supercomputer is extremely fast because of its ability to process thousands of packets at once.
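To make the workflow concrete, here's a toy model of the idea in a few lines: split a job into packets, farm them out to a pool of workers (standing in for idle lab machines), and gather the results. Real tools like HTCondor and BOINC do this across a network of machines; this local sketch just shows the shape of the pipeline, and the packet size and worker count are arbitrary:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_packet(packet):
    """Stand-in for the real unit of work (e.g. one chunk of a simulation)."""
    return sum(x * x for x in packet)

def run_job(data, packet_size=4, workers=4):
    # Split the job into small, independent packets...
    packets = [data[i:i + packet_size] for i in range(0, len(data), packet_size)]
    # ...distribute them among the available workers, which compute concurrently...
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(compute_packet, packets))
    # ...and combine the packet results for the owner of the job.
    return sum(results)
```

The central workstation's job is exactly the `run_job` role here: it never does the heavy computation itself, it just routes packets out and results back.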

By merely installing a bit of software on each computer, we could have another supercomputer for next to nothing! We already own the hardware! So whaddaya say? What would you do with a free supercomputer?

A traditional supercomputer is not so different from the virtual one I proposed. It's normally a bank of thousands of processors, all networked together to form an extremely powerful machine. Complex simulations are still split into small chunks that each processor computes on its own. The results are compiled and analyzed for further research. The beauty of a virtual supercomputer is that the hardware is already owned by the university, which is undoubtedly the most expensive part of the whole proposal. Open-source software already exists that coordinates parallel processing in this way, so no additional cost would be required for purchasing software.

Wednesday, December 5, 2012

Lessons learned from an interview with Microsoft

Three steaks in three nights. Medium-well, medium, and medium. I'll name them Hopeful, Rejected, and Consoled. The names describe the gamut of emotions I felt last weekend. I'm normally an emotionally stable individual, but job interviews with anyone tend to psych me out. Such was the case with my interview for an SDE (Software Development Engineer) Internship with Microsoft's Windows team.

Five interviews, actually. The first one was reminiscent of the interview for my entry-level web development job. It was fairly easy, I hardly had to sell myself, and the technical question was almost trivial. That interview took place in my university's career center. It was just a trial run to see if I was at least somewhat competent. The next four, however, took place at Microsoft's campus in the heart of Redmond, Washington. And what a treat the whole thing was! They pay for round-trip flights, two nights at the Westin, a rental car for the duration of the trip, $75 per day for fine dining (which explains the steaks), and a few other treats. If you've got one of these coming up, I would highly suggest working through these problems, finding a good answer, and seeing if you can find a better one. Each interview lasts 45 minutes, so you've got to think quick.

The first was kind of a tricky math problem. I was given an array of integers of length n. The values stored in the array were restricted to values from 1 to n-1, inclusive. My task was to come up with a way to find the first duplicate value in the array. A brain-dead solution is easy, but can you come up with an algorithm that runs in linear time and does not use a secondary array?

The next one dealt with an interesting linked list. You're given a singly-linked list. Each node actually has two links to a 'next' node, link1 and link2. The two links in a node will not necessarily point to the same node. Think of it as a list of people. One set of links could be the list sorted by height. The other set of links could represent the list sorted by age. The task is to reverse the order of one set of links while preserving the order of the other. (I assume there was more to this problem, but I didn't get very far on the first part so I never heard the second part). Here's a visual for clarification:

Link1 path: A--->B--->C--->D
Link2 path: A--->C--->B--->D


The third one was very similar to the second (thankfully). You have an n-ary tree (a tree where each node has 0 or more child nodes). The tree is stored as a type of linked list. Each node has two links: one to the next 'sibling' node and one to the first child node. My task was to reverse the order of sibling nodes while preserving the parent-child relationship. For example, if you have this tree:

A
|
B------------->C
|              |
D--->E--->F    H--->I

After reversing the order of the siblings, you'd have:

A
|
C-------->B
|         |
I--->H    F--->E--->D

Notice that in the first tree, B's first child is D, but in the second tree, its first child is F. Since we had a few minutes left after I gave a working solution, my interviewer then asked me to write another function which prints out the contents of the tree in order.


The fourth and final one was fairly straightforward. I was given a binary search tree that contains ints. Remember that a binary search tree is one where each node has 0, 1, or 2 child nodes and that the left child (and all of its descendants) has a value that is less than the node and the right child (and all of its descendants) has a value that is greater than the node. I was supposed to determine whether a given binary tree was valid.

I wanted to share these questions with you, not so you can have an unfair advantage (which is why I didn't include any hints), but so that you can know what types of questions to prepare for. Be prepared to come up with a quick solution, make sure it works, then optimize it as best you can. This was actually my second time going through the whole Microsoft interview process, and these questions were fairly similar to those in my first round. I can't speak for all interviews since I only ever interviewed with the Windows team, but I would imagine that questions of the same difficulty would be asked in any SDE interview.

To be fair, I should include other questions they asked me that were of a more personal nature (these aren't word-for-word).

  • Why do you want to work at Microsoft?
  • What are the most important factors for you when considering a job?
  • What has been the most challenging problem/bug you've dealt with in any of your projects at school or work? How did you solve that problem? Be specific!
  • Have you ever had to convince someone else to use your plan/approach/strategy/solution over theirs? How did you do that?
  • Do you enjoy writing code? Why?
  • Do you like debugging?
  • What are some personal projects outside of school or work that you've been involved in?
  • What's been your favorite class in your college career so far? Why?
  • Least favorite class? Why?