bschmidtchen commented on slide_024 of Virtual Reality 3 & Course Conclusion ()

To help reduce judder and keep latency low, Oculus has developed a technique named TimeWarp for their headsets, which transforms the stereoscopic images based on the latest head-tracking information to significantly reduce motion-to-photon delay.

In addition to this, they have developed Asynchronous SpaceWarp, which extrapolates new frames from previous frames generated by the VR application. Both of these advancements have allowed VR developers to enable more complex and computationally heavy features in their applications.


bschmidtchen commented on slide_013 of Virtual Reality 2 ()

Solving this problem, particularly in AR, is seen as one of the holy grails of simulating realism. Interestingly enough, one of the main ways to accomplish this is to use light fields to simulate both vergence and accommodation. There is some really interesting research coming out of Stanford where they use multiple focal planes in AR/VR to give the viewer both of these cues, and our visual system then integrates between the focal planes.

The research can be seen here: http://www.computationalimaging.org/publications/the-light-field-stereoscope/


bschmidtchen commented on slide_014 of Virtual Reality 1 ()

This is one of the most interesting applications of VR that I have personally tried and there are many companies doing it in many different ways.

Some examples that also include 3D modeling applications are Tilt Brush, Oculus Medium, and Oculus Quill, and one start-up from Berkeley recently went to Y Combinator with their design application, Pantheon VR!


Read noise and dark current noise both increase with temperature, but dark current comes from thermal motion dislodging electrons (and varies from pixel to pixel with manufacturing defects), while read noise comes from the sensor's readout electronics.


straversi commented on slide_019 of Intro to Geometry ()

The expression on the right-hand side of the equation is a vector whose components can each be calculated from the parameters u and v. With reference to the previous definition, this may be a point cloud representation. Maybe we could get an instructor in on this.

Looked ahead; it's not a point cloud. It does seem implicit now that I think about it, though.


chriscorrea14 commented on slide_048 of Light Field Cameras 1 ()

By "choosing the max u" do you mean that you set all pixels within the microlens to be the value of this sub-pixel? If not, what does this mean?


chriscorrea14 commented on slide_020 of Light Field Cameras 1 ()

Somewhat unrelated, but is there a reason why we use multiple smaller lenses rather than one big one? Is size the reason?


chriscorrea14 commented on slide_015 of Image Processing ()

Why isn't the quantization matrix symmetric? Is that by design or coincidence?


carlosflrs commented on slide_019 of Intro to Geometry ()

How is this particular mesh represented? I'm confused because, from the example, it looks like there's a function for each x, y, z value -- wouldn't that be an implicit representation?


icswim commented on slide_048 of Introduction to Color I ()

I'm a little confused by what it means when it says "Subtractive color describes reflected spectrum". I thought that the colors we see are our perception of the light that is reflected off a surface (e.g., leaves are green because they absorb all other colors). How is subtractive color different from additive color?


icswim commented on slide_009 of Introduction to Color I ()

If spectral power distributions are linear, why isn't the green cross-section in the 3rd diagram the additive result of the power from the blue and yellow SPDs?


fluorine commented on slide_023 of Cloth Simulation ()

These are bending springs.


fluorine commented on slide_021 of Cloth Simulation ()

These are shearing springs.


fluorine commented on slide_020 of Cloth Simulation ()

These are structural springs.


tcheng96 commented on slide_008 of Cloth Simulation ()

What is "m" in this situation?


icswim commented on slide_041 of Radiometry and Photometry ()

I'm a bit confused as to why we need to project $\Omega$ onto the plane. From what I understand, we project $\Omega$ onto the unit sphere in order to use our steradian definition of radiance. However, if we're trying to calculate the irradiance from the uniform area source at the center point of the hemisphere, why do we care about $\Omega$'s projection?

Thanks so much!


kaj011 commented on slide_063 of Intro to Signal Processing ()

I think when there are a lot of pixels and the kernel is large, filtering by multiplication in the frequency domain is more efficient, because each pixel requires only one multiplication after the FFT, whereas spatial convolution requires a multiply-add for every kernel entry at every pixel.
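
As a sanity check (a toy sketch of my own, not from the slides), direct convolution and multiplication in the frequency domain give the same filtered signal; the FFT route costs O(N log N) regardless of kernel size, which is why it only pays off for large kernels:

```python
import numpy as np

signal = np.random.rand(1024)
kernel = np.random.rand(65)

# Direct convolution: one multiply-add per kernel entry per output sample.
direct = np.convolve(signal, kernel, mode="full")

# Frequency domain: zero-pad both to the full output length to avoid
# circular wrap-around, then one multiplication per frequency bin.
n = len(signal) + len(kernel) - 1
via_fft = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

assert np.allclose(direct, via_fft)
```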


kaj011 commented on slide_047 of Accelerating Ray Tracing ()

This concept is very similar to tree-based machine learning algorithms, which greedily choose splits to minimize a cost such as entropy in much the same manner.


kaj011 commented on slide_009 of Intro to Ray Tracing ()

In the project, we convert the image plane into a unit area. I still don't understand why we should do that. Could anyone please explain?


kaj011 commented on slide_007 of Intro to Ray Tracing ()

Regarding the discussion topics: I think if there are a lot of objects behind one big object, we don't need to check their visibility. So, in this case, ray casting is better because it only looks at the regions that are actually visible.


k-vent commented on slide_028 of Intro to Ray Tracing ()

Another test: the ray misses if tmax < tmin.


k-vent commented on slide_028 of Intro to Ray Tracing ()

If, after taking the intersections, we find tmax to be less than 0 (the whole box lies behind the ray's origin), the ray misses the box.


tpdf usually depends on the illumination carried by the reflected light ray. If tpdf is high, the light ray is still strong, so we probably shouldn't stop. On the other hand, if the illumination is zero, there is no point in casting another ray, since the reflected ray has run out of energy. We don't worry about infinite recursion, because we also limit the maximum number of recursions with ray.depth.
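
For concreteness, here is a minimal runnable sketch of Russian-roulette termination on a toy problem (my own illustration, not the slide's code; I'm writing the probability as a continuation probability):

```python
import random

# Toy model: each bounce contributes light attenuated by 0.5 per bounce,
# so the true total is 1 + 0.5 + 0.25 + ... = 2.
def trace(depth=0, weight=1.0, max_depth=32):
    if depth >= max_depth:      # hard cap, like ray.depth, prevents infinite recursion
        return weight
    cpdf = 0.5                  # probability of NOT terminating
    if random.random() < cpdf:
        # Recurse; dividing by cpdf keeps the estimator unbiased.
        return weight + trace(depth + 1, 0.5 * weight, max_depth) / cpdf
    return weight

print(sum(trace() for _ in range(100_000)) / 100_000)  # ~2.0
```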


k-vent commented on slide_053 of Accelerating Ray Tracing ()

Here the SAH cost uses the ratio of the surface area of each partition's bounding box to the surface area of the parent box, weighted by the number of primitives in that partition. A low cost correlates with small ratios (tight bounding boxes) in both partitions.
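
For reference, the usual SAH cost model looks like this (a standard textbook formulation; the slide's notation may differ), where $S_N$ is the surface area of the parent box, $S_A, S_B$ are the surface areas of the two partitions' bounding boxes, and $N_A, N_B$ are their primitive counts:

$$C = C_{\text{trav}} + \frac{S_A}{S_N} N_A\, C_{\text{isect}} + \frac{S_B}{S_N} N_B\, C_{\text{isect}}$$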


VortexNerd commented on slide_027 of Intro to Ray Tracing ()

I think the slabs in this context are the "chunks" of the axes spanned by the box. So in 2D you have two slabs, the [x_min, x_max] slab and the [y_min, y_max] slab. This intersection just involves finding the ray intersection for each slab independently and taking their overlap. Extending this to 3D will just add one more slab.
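
Here is a quick sketch of that slab test in code (my own illustration; it assumes all direction components are nonzero):

```python
def ray_box_intersect(origin, direction, box_min, box_max):
    t_min, t_max = -float("inf"), float("inf")
    for axis in range(3):                   # one pair of slabs per axis
        t0 = (box_min[axis] - origin[axis]) / direction[axis]
        t1 = (box_max[axis] - origin[axis]) / direction[axis]
        t0, t1 = min(t0, t1), max(t0, t1)   # swap if the ray points the negative way
        t_min = max(t_min, t0)              # intersect the per-slab intervals
        t_max = min(t_max, t1)
    if t_max < t_min or t_max < 0:          # empty overlap, or box behind the ray
        return None
    return t_min, t_max

ray_box_intersect((-5, -5, -5), (1, 1, 1), (-1, -1, -1), (1, 1, 1))  # -> (4.0, 6.0)
```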


I think there's an error in this slide. (random01() < tpdf) doesn't make sense. For example, if the termination probability is one, this will always recurse. Another example: if the termination probability is zero, this will never recurse. Shouldn't it be something like (random01() < (1 - tpdf))?


fluorine commented on slide_027 of Intro to Ray Tracing ()

3 pairs of slabs, not just 3 slabs. 3 pairs of slabs makes 6 faces.


sam commented on slide_049 of Radiometry and Photometry ()

I have trouble connecting the units of these terms with what the terms are actually useful for, so here's a stab at explaining them simply:

Suppose we have a point light source. Radiant Power (Flux) refers to the total amount of energy the light source emits per second. For example, if I compare two light bulbs, plugging in the one with more Radiant Power will cost me more on my electricity bill than the one with less.

Let's trace a single ray of light outward from the point source. Radiant Intensity refers to the power emitted by the light source per unit solid angle in that direction. So, if we add up the Intensities over all possible outgoing directions, we get back the Radiant Power.

Now, let's suppose we have an area light source (e.g., a computer monitor) rather than a point light source. We can treat that area light source as if it were a big grid of point sources. The Radiance refers to the power emitted by a particular point on this grid in a particular direction (per unit area and per unit solid angle).

Finally, suppose we have a point on the ground and some light sources overhead. We can draw a ray leaving the point and hitting one of the lights. That ray carries some Radiance from the light source. If we add up (integrate, with a cosine factor) the radiances over all such incoming directions, we get the Irradiance.
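
To tie this intuition back to the units, here are the standard defining relations (standard radiometry definitions, not taken from this slide):

$$I = \frac{\mathrm{d}\Phi}{\mathrm{d}\omega}, \qquad L = \frac{\mathrm{d}^2\Phi}{\mathrm{d}A\,\mathrm{d}\omega\,\cos\theta}, \qquad E = \int_{H^2} L_i(\omega)\,\cos\theta\,\mathrm{d}\omega$$

so "adding up" intensities over all directions recovers $\Phi$, and "adding up" incoming radiances (with the cosine factor) recovers $E$.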


cs184-aby commented on slide_029 of Intro to Ray Tracing ()

The line p' is on is the line x = p'.x, or in 3D the plane x = p'.x (parallel to the yz-plane).


cs184-abj commented on slide_029 of Intro to Ray Tracing ()

What is the line that p' is on, then? Is it the x-axis?


ikarlsson commented on slide_029 of Intro to Ray Tracing ()

In this case "perpendicular to x-axis" refers to that the line we are intersecting with is perpendicular to the x-axis. This simplifies the math a bit (the equation to the right of the diagram).

Edit: Just an added note, the line we are intersecting with can be a line attributed to some bounding box or bounding volume.


cs184-abj commented on slide_029 of Intro to Ray Tracing ()

Can someone help clarify the 'perpendicular to x-axis' picture for me? I'm a little confused as to what is perpendicular to what. Which plane is represented by the two vertical lines?


xiangzhengmao commented on slide_057 of Global Illumination 2 & Path Tracing ()

Here tpdf acts as the probability of not terminating, so (1 - tpdf) is the probability of terminating. But shouldn't tpdf itself be the probability of terminating? Otherwise we aren't even allowing tpdf to be 1.


jcai commented on slide_076 of Splines, Curves and Surfaces ()

For a Bezier surface of degree $(n,m)$, we have a control cage with $(n+1)(m+1)$ control points. Then the number of linear interpolations for the first stage is equal to $(n+1)\cdot\dfrac{m(m+1)}{2}$: $n+1$ curves, each with $m+1$ points. The final interpolation across the resulting $n+1$ points requires $\dfrac{n(n+1)}{2}$ interpolations. In total, this comes out to $(n+1)\cdot\dfrac{m(m+1)}{2} + \dfrac{n(n+1)}{2}$ interpolations.


jcai commented on slide_052 of Splines, Curves and Surfaces ()

A Bezier curve of degree $n$ will require $n$ subsequent stages of interpolations, leading to $\dfrac{n(n+1)}{2}$ total linear interpolations.
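
A quick way to check that count (a toy sketch of mine): de Casteljau's algorithm on $n+1$ control points runs $n$ stages, performing $n + (n-1) + \cdots + 1 = \dfrac{n(n+1)}{2}$ lerps:

```python
def de_casteljau(points, t):
    lerps = 0
    while len(points) > 1:       # each stage lerps adjacent pairs
        points = [(1 - t) * a + t * b for a, b in zip(points, points[1:])]
        lerps += len(points)
    return points[0], lerps

value, lerps = de_casteljau([0.0, 1.0, 3.0, 2.0], 0.5)  # degree n = 3
assert lerps == 3 * 4 // 2                              # n(n+1)/2 = 6
```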


Emmiemmie commented on slide_076 of Splines, Curves and Surfaces ()

This diagram is very helpful for project 2 (part 2).


carlosflrs commented on slide_013 of Splines, Curves and Surfaces ()

Here it was mentioned that the derivatives don't match at the endpoints. I'm not entirely sure what that means. I'm thinking that if the derivatives match at the endpoints, then we have a smooth curve through the two points. Can we generalize this to correctly model a surface?


agarlin18 commented on slide_040 of Texture Mapping ()

What do s and t refer to?


provi commented on slide_040 of Texture Mapping ()

Instead of using floor(x)+1, you should use (int)(x+0.5) to find the nearest integer.

Edit: Sorry, I was not reading it correctly.

Edit2: http://www.tutorialized.com/tutorial/Bilinear-Filtering-101/42364 this link will help a lot.
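
In the spirit of the linked tutorial, here is a minimal bilinear-filtering sketch (my own illustration; `texture` is assumed to be a 2D array indexed as [y][x], with no edge clamping):

```python
import math

def sample_bilinear(texture, x, y):
    x0, y0 = math.floor(x), math.floor(y)   # upper-left of the four nearest texels
    dx, dy = x - x0, y - y0                 # fractional offsets in [0, 1)
    top = (1 - dx) * texture[y0][x0]     + dx * texture[y0][x0 + 1]
    bot = (1 - dx) * texture[y0 + 1][x0] + dx * texture[y0 + 1][x0 + 1]
    return (1 - dy) * top + dy * bot        # lerp the two horizontal lerps

sample_bilinear([[0.0, 1.0], [1.0, 2.0]], 0.5, 0.5)  # -> 1.0
```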


provi commented on slide_027 of Texture Mapping ()

I'm a little confused here. Shouldn't the left image get clipped by the screen, so that only the middle part of the texture space gets sampled?