

Problem: When one or more elements change on a UI Canvas, it dirties the whole Canvas.

The Canvas is the basic component of Unity UI. It generates meshes that represent the UI elements placed on it, regenerates the meshes when UI elements change, and issues draw calls to the GPU so that the UI is actually displayed. Generating these meshes can be expensive. UI elements need to be collected into batches so that they’re drawn in as few draw calls as possible. Because batch generation is expensive, we want to regenerate batches only when necessary. The problem is that, when one or more elements change on a Canvas, the whole Canvas has to be re-analyzed to figure out how to optimally draw its elements.

Many users build their entire game’s UI in one single Canvas with thousands of elements. So, when they change one element, they can experience a CPU spike costing multiple milliseconds (to hear more about why rebuilding is expensive, go to the 24:55 mark in Ian’s talk).

Each Canvas is an island that isolates the elements on it from those on other Canvases. So, slicing up your Canvases is the main tool available for resolving batching problems with Unity UI. You can also nest Canvases, which allows designers to create large hierarchical UIs without having to think about where different things are onscreen across many Canvases. Child Canvases also isolate content from both their parent and sibling Canvases: they maintain their own geometry and perform their own batching.

When subdividing Canvases with child Canvases, try to group things based on when they get updated. For example, separate dynamic elements from static ones (at around 29:36, Ian provides a nice example of smart subdivision of Canvases).

Problem: Optimal use of the Graphic Raycaster.

The Graphic Raycaster is the component that translates your input into UI Events. It translates screen/touch input into Events, and then sends them to interested UI elements. You need a Graphic Raycaster on every Canvas that requires input, including sub-canvases.

Despite its name, the Graphic Raycaster is not really a raycaster: by default, it only tests UI graphics. It takes the set of UI elements that are interested in receiving input on a given Canvas and performs intersection checks: it tests the point at which the input event occurred against the RectTransform of each UI element on the Graphic Raycaster’s Canvas that is marked as interactive. The challenge is that not all UI elements are interested in receiving updates.

Solution: Turn off the Raycast Target for static or non-interactive elements, for example the text on a button. Turning off the Raycast Target directly reduces the number of intersection checks the Graphic Raycaster must perform each frame.

Problem: In some ways, the Graphic Raycaster does act as a raycaster. If you set the Render Mode on your Canvas to World Space or Screen Space - Camera, you can also set a blocking mask. The blocking mask determines whether the Raycaster will cast rays via 2D or 3D physics, to see if some physics object is blocking the user’s ability to interact with the UI.

Solution: Casting rays via 2D or 3D physics can be expensive, so use this feature sparingly. Also, minimize the number of Graphic Raycasters by not adding them to non-interactive UI Canvases, since in that case there is no reason to check for interaction events.

Problem: World Space canvases need to know which camera their interaction events should come from.

When setting up a Canvas to render either in World Space or in a Camera’s screen space, it’s possible to specify the Camera which will be used to generate interaction events for the UI’s Graphic Raycaster. This setting is required for “Screen Space - Camera” canvases, and is called the “Render Camera.”
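As a small illustration of the rebuild cost discussed above, it helps to gate UI writes so a Canvas is only dirtied when a value actually changes. This is a minimal sketch; the score-label component and its names are hypothetical, not part of Unity’s API:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ScoreLabel : MonoBehaviour
{
    public Text label;                    // UI element whose Canvas we want to avoid dirtying
    private int lastScore = int.MinValue; // sentinel so the first real score is always applied

    // Safe to call every frame: the Canvas is only dirtied (and its
    // batches rebuilt) when the displayed value really changes.
    public void SetScore(int score)
    {
        if (score == lastScore) return;    // no change: no Canvas rebuild
        lastScore = score;
        label.text = score.ToString();     // this assignment dirties the Canvas
    }
}
```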
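The canvas-splitting advice can be sketched as a small helper that wraps a frequently-changing element in its own nested Canvas. The helper and its names are illustrative assumptions, not a built-in Unity utility:

```csharp
using UnityEngine;
using UnityEngine.UI;

public static class CanvasSubdivision
{
    // Give a frequently-updated element its own child Canvas so its
    // rebuilds no longer dirty the parent Canvas's static content.
    public static Canvas IsolateDynamicElement(GameObject element)
    {
        var childCanvas = element.AddComponent<Canvas>();
        // A child Canvas that must receive input needs its own raycaster.
        element.AddComponent<GraphicRaycaster>();
        return childCanvas;
    }
}
```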
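The Raycast Target solution can also be applied in bulk. This hypothetical utility flips `Graphic.raycastTarget`, the per-element flag the Graphic Raycaster consults, for everything under a purely decorative root:

```csharp
using UnityEngine;
using UnityEngine.UI;

public static class RaycastTargetCleanup
{
    // Disable Raycast Target on every Graphic under a non-interactive
    // root (e.g., a button's label), removing those elements from the
    // Graphic Raycaster's per-frame intersection checks.
    public static void DisableUnder(GameObject decorativeRoot)
    {
        foreach (var g in decorativeRoot.GetComponentsInChildren<Graphic>(true))
            g.raycastTarget = false;
    }
}
```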
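In code, the “Render Camera” Inspector setting corresponds to the `Canvas.worldCamera` property; a minimal sketch of assigning it at startup (the component name is hypothetical):

```csharp
using UnityEngine;

public class CanvasCameraSetup : MonoBehaviour
{
    void Awake()
    {
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        // The "Render Camera" shown in the Inspector; for World Space
        // canvases the same property supplies the camera used to
        // generate interaction events.
        canvas.worldCamera = Camera.main;
    }
}
```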
