UGUI series-principle analysis (Unity3D)

Unity UGUI Principles (1): Canvas rendering mode

Goal

Learn about different UI Render Modes

Environment and version

Windows 7, Unity 5.2.5

Render Mode

There are three UI render modes:

  • Screen Space - Overlay: screen space overlay

  • Screen Space - Camera: screen space with a reference Camera

  • World Space: world coordinate space

Screen Space-Overlay

In this mode no Camera is referenced; the UI is drawn directly on top of everything else on screen.

  • 1. Pixel Perfect: makes UI elements render crisply at pixel boundaries, at extra performance cost. With a lot of UI animation, movement may look jerky.

  • 2. Sort Order: depth value; the higher the value, the closer to the front the Canvas is drawn.

Screen Space-Camera

Uses a Camera as a reference and places the UI plane at a given distance in front of it. Because the UI references the Camera, the UI plane automatically resizes when the screen size, resolution, or Camera frustum changes. If a GameObject in the Scene is closer to the Camera than the UI plane, it will occlude the UI.

  • 1. Render Camera: the Camera used for rendering

  • 2. Plane Distance: the distance from the Camera to the UI plane

  • 3. Sorting Layer: the sorting layer the Canvas belongs to; add layers under Edit → Project Settings → Tags and Layers → Sorting Layers. Layers lower in the list are drawn in front.

  • 4. Order in Layer: the order within the Canvas's sorting layer; the higher the value, the closer to the front it is drawn (these settings can also be driven from code, as in the sketch below)
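These settings can also be configured from a script. A minimal sketch, assuming the script sits on the Canvas object and a camera is assigned in the Inspector (the class name and the uiCamera field are illustrative):

```csharp
using UnityEngine;

// Minimal sketch: switch a Canvas to Screen Space - Camera at runtime.
public class CanvasCameraSetup : MonoBehaviour
{
    public Camera uiCamera; // assign in the Inspector

    void Awake()
    {
        Canvas canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceCamera; // use a Camera as reference
        canvas.worldCamera = uiCamera;                    // Render Camera
        canvas.planeDistance = 100f;                      // Plane Distance
        canvas.sortingLayerName = "Default";              // Sorting Layer
        canvas.sortingOrder = 0;                          // Order in Layer
    }
}
```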

World Space

Treats the Canvas as a plane (GameObject) in world coordinates, i.e. as a 3D object, for displaying 3D UI.

  • 1. Event Camera: the Camera that handles UI events (click, drag); only events seen by the assigned Camera can trigger the UI

Reference: Unity Manual - Canvas

docs.unity3d.com/Manual/clas...


Unity UGUI Principles (2): Canvas Scaler Scaling Core

Goal

  • 1. Know the different UI Scale Modes
  • 2. Pixels Per Unit
  • 3. Canvas Scale Factor
  • 4. Reference Resolution (default screen size)
  • 5. The relationship and math between Screen Size and Canvas Size

Environment and version

Windows 7

Unity 5.2.4

Canvas Scaler

Canvas Scaler is a Component that controls the overall size and pixel density of UI elements in the Unity UI system. The Canvas Scaler's scaling affects everything under the Canvas, including font sizes and image borders.

Size

  • Reference Resolution: Default screen size

  • Screen Size: current screen size

  • Canvas Size: the width and height of the Canvas Rect Transform

Scale Factor

docs.unity3d.com/ScriptRefer...

Scales the whole Canvas so that the Canvas Size matches the Screen Size.

Let's first look at a piece of official code

CanvasScaler.cs

```csharp
protected void SetScaleFactor(float scaleFactor)
{
    if (scaleFactor == m_PrevScaleFactor)
        return;

    m_Canvas.scaleFactor = scaleFactor;
    m_PrevScaleFactor = scaleFactor;
}
```

As the code shows, Canvas Scaler scales every element under the Canvas by setting the Canvas's Scale Factor.

With a Scale Factor of 1, Screen Size is 800×600, Canvas Size is 800×600, and images render at 1× size.

With a Scale Factor of 2, Screen Size is 800×600, Canvas Size is 400×300, and images render at 2× size.

With a Scale Factor of 2, the Scale Factor resizes the entire Canvas to match the Screen Size: the 400×300 Canvas is scaled up 2×, which exactly equals the Screen Size, and the image below is enlarged 2×.
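A minimal verification sketch (the class name is illustrative): attach it to a Canvas to log that Canvas Size equals Screen Size divided by Scale Factor.

```csharp
using UnityEngine;

// Logs the relationship Canvas Size = Screen Size / Scale Factor.
public class CanvasSizeCheck : MonoBehaviour
{
    void Start()
    {
        Canvas canvas = GetComponent<Canvas>();
        RectTransform rt = GetComponent<RectTransform>();
        Vector2 screenSize = new Vector2(Screen.width, Screen.height);

        // With Scale Factor = 2 on an 800x600 screen, both lines print (400, 300).
        Debug.Log("Canvas Size: " + rt.rect.size);
        Debug.Log("Screen Size / Scale Factor: " + screenSize / canvas.scaleFactor);
    }
}
```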

UI Scale Mode

Constant Pixel Size: Canvas Size always equals Screen Size, and all UI elements are scaled directly through the Scale Factor.

    1. Scale Factor: scales all elements under this Canvas
    2. Reference Pixels Per Unit:

First, Pixels Per Unit in the image import settings means: how many pixels of this Sprite make up one unit in world coordinates.

The test image used here is a 100×100 pixel file, referred to below as the test image.

For example, place a 1×1 Cube in the scene and a Sprite assigned the test image, both with Transform Scale 1. With Pixels Per Unit = 100, each unit is made of 100 pixels; the Sprite is 100×100 pixels, so its size in world coordinates becomes 100/100 × 100/100 = 1×1 units.

(Left: Cube, right: Sprite)

With Pixels Per Unit = 10, each unit is made of 10 pixels; the Sprite is 100×100 pixels, so its size in world coordinates becomes 100/10 × 100/10 = 10×10 units.

(Left: Cube, right: Sprite)

Conclusion:

  • By default (Pixels Per Unit = 100), one Unity unit corresponds to 100 pixels
  • From this, the formula can be derived:

Sprite size in world coordinates = original image size (pixels) / Pixels Per Unit
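A minimal sketch of the formula in code (class name illustrative), reading the same values from a SpriteRenderer:

```csharp
using UnityEngine;

// Computes a sprite's size in world units from the formula above.
// Attach to a GameObject with a SpriteRenderer.
public class SpriteWorldSize : MonoBehaviour
{
    void Start()
    {
        Sprite sprite = GetComponent<SpriteRenderer>().sprite;
        // world size = original image size (pixels) / Pixels Per Unit
        float width = sprite.rect.width / sprite.pixelsPerUnit;   // e.g. 100 / 100 = 1
        float height = sprite.rect.height / sprite.pixelsPerUnit; // e.g. 100 / 10  = 10
        Debug.Log("World size: " + width + " x " + height);
    }
}
```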

Back to Reference Pixels Per Unit. The official description: if the image file has Pixels Per Unit set, one pixel of the Sprite is converted into one pixel in the UI.

Image.cs

```csharp
public float pixelsPerUnit
{
    get
    {
        float spritePixelsPerUnit = 100;
        if (sprite)
            spritePixelsPerUnit = sprite.pixelsPerUnit;

        float referencePixelsPerUnit = 100;
        if (canvas)
            referencePixelsPerUnit = canvas.referencePixelsPerUnit;

        return spritePixelsPerUnit / referencePixelsPerUnit;
    }
}
```

From the official code above, Image computes its new pixelsPerUnit as spritePixelsPerUnit / referencePixelsPerUnit.

Image.cs

```csharp
public override void SetNativeSize()
{
    if (overrideSprite != null)
    {
        float w = overrideSprite.rect.width / pixelsPerUnit;
        float h = overrideSprite.rect.height / pixelsPerUnit;
        rectTransform.anchorMax = rectTransform.anchorMin;
        rectTransform.sizeDelta = new Vector2(w, h);
        SetAllDirty();
    }
}
```

When the native size is set, width and height are each divided by pixelsPerUnit.

To see this in practice, create a Canvas with the parameters below.

Create an Image under the Canvas, assign the test image as its Sprite, and run four tests with the parameters below: modify Reference Pixels Per Unit and Pixels Per Unit, then click Set Native Size on the Image Component to reset the image to its native size and observe the change.

The table above shows that changing these values changes the image's native size.

From this, the formula can be derived

UI size = original image size (pixels) / (Pixels Per Unit / Reference Pixels Per Unit)
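A minimal numeric check of this formula (class name illustrative; the PPU values are example assumptions, using the 100×100 test image):

```csharp
using UnityEngine;

public class UISizeCheck : MonoBehaviour
{
    void Start()
    {
        float spriteSizePx = 100f;  // original image size in pixels
        float spritePPU = 50f;      // Pixels Per Unit set on the sprite
        float referencePPU = 100f;  // Reference Pixels Per Unit on the Canvas Scaler
        float uiSize = spriteSizePx / (spritePPU / referencePPU);
        Debug.Log(uiSize);          // 100 / (50 / 100) = 200
    }
}
```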

Scale With Screen Size: scales according to the configured Reference Resolution (default screen size).

  1. Reference Resolution: Default screen size

  2. Screen Match Mode: scaling mode

First look at the official algorithm

CanvasScaler.cs

```csharp
Vector2 screenSize = new Vector2(Screen.width, Screen.height);
float scaleFactor = 0;

switch (m_ScreenMatchMode)
{
    case ScreenMatchMode.MatchWidthOrHeight:
    {
        // We take the log of the relative width and height before taking the average.
        // Then we transform it back in the original space.
        // The reason to transform in and out of logarithmic space is to have better behavior.
        // If one axis has twice resolution and the other has half, it should even out if widthOrHeight value is at 0.5.
        // In normal space the average would be (0.5 + 2) / 2 = 1.25
        // In logarithmic space the average is (-1 + 1) / 2 = 0
        float logWidth = Mathf.Log(screenSize.x / m_ReferenceResolution.x, kLogBase);
        float logHeight = Mathf.Log(screenSize.y / m_ReferenceResolution.y, kLogBase);
        float logWeightedAverage = Mathf.Lerp(logWidth, logHeight, m_MatchWidthOrHeight);
        scaleFactor = Mathf.Pow(kLogBase, logWeightedAverage);
        break;
    }
    case ScreenMatchMode.Expand:
    {
        scaleFactor = Mathf.Min(screenSize.x / m_ReferenceResolution.x, screenSize.y / m_ReferenceResolution.y);
        break;
    }
    case ScreenMatchMode.Shrink:
    {
        scaleFactor = Mathf.Max(screenSize.x / m_ReferenceResolution.x, screenSize.y / m_ReferenceResolution.y);
        break;
    }
}
```

a. Expand: expands the Canvas Size in width or height so that it is never smaller than the Reference Resolution in either dimension. The calculation:

scaleFactor = Mathf.Min(screenSize.x/m_ReferenceResolution.x, screenSize.y/m_ReferenceResolution.y);

That is, for width and height separately, compute the ratio of Screen Size to Reference Resolution.

For example, with a Reference Resolution of 1280×720 and a Screen Size of 800×600:

ScaleFactor Width: 800/1280=0.625

ScaleFactor Height: 600/720=0.83333

Applying the formula Canvas Size = Screen Size / Scale Factor:

Canvas Width: 800/0.625 = 1280

Canvas Height: 600/0.625 = 960

The Canvas Size is 1280×960; the height changed from 720 to 960, i.e. the maximum enlargement (all elements stay visible).

b. Shrink: shrinks the Canvas Size in width or height so that it is never larger than the Reference Resolution in either dimension. The calculation:

scaleFactor = Mathf.Max(screenSize.x/m_ReferenceResolution.x, screenSize.y/m_ReferenceResolution.y);

That is, for width and height separately, compute the ratio of Screen Size to Reference Resolution.

For example, with a Reference Resolution of 1280×720 and a Screen Size of 800×600:

ScaleFactor Width: 800/1280=0.625

ScaleFactor Height: 600/720=0.83333

Applying the formula Canvas Size = Screen Size / Scale Factor:

Canvas Width: 800/0.83333 = 960

Canvas Height: 600/0.83333 = 720

The Canvas Size is 960×720; the width changed from 1280 to 960, i.e. the maximum reduction.

c. Match Width or Height: blends the Width and Height scale factors, calculated as follows:

```csharp
float logWidth = Mathf.Log(screenSize.x / m_ReferenceResolution.x, kLogBase);
float logHeight = Mathf.Log(screenSize.y / m_ReferenceResolution.y, kLogBase);
float logWeightedAverage = Mathf.Lerp(logWidth, logHeight, m_MatchWidthOrHeight);
scaleFactor = Mathf.Pow(kLogBase, logWeightedAverage);
```

It takes the logarithm of the width and height scale factors before averaging them. Why not blend Width and Height directly with the Match value? Let's compare.

Suppose the Reference Resolution is 400×300 and the Screen Size is 200×600. The size relationship is:

Reference Resolution Width is 2 times the Screen Size Width.

Reference Resolution Height is 0.5 times the Screen Size Height.

It will look like the picture below

With Match at 0.5, the ScaleFactor should be 1 (the two axes cancel out).

ScaleFactor Width: 200/400=0.5

ScaleFactor Height: 600/300=2

Linear blend:

ScaleFactor = (1 − Match) × ScaleFactor Width + Match × ScaleFactor Height

ScaleFactor = 0.5 × 0.5 + 0.5 × 2 = 1.25

Logarithmic blend:

logWidth: log2(0.5) = −1

logHeight: log2(2) = 1

logWeightedAverage: (−1 + 1) / 2 = 0

ScaleFactor: 2^0 = 1

Linear blending gives a scaleFactor of 1.25; logarithmic blending gives 1. The result is clear: logarithmic blending corrects the size more accurately.
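A minimal sketch (class name illustrative) reproducing the comparison above with Mathf, using the 400×300 reference and 200×600 screen:

```csharp
using UnityEngine;

// Compares linear vs logarithmic blending of the width/height scale factors.
public class MatchBlendCompare : MonoBehaviour
{
    void Start()
    {
        float scaleW = 200f / 400f; // 0.5
        float scaleH = 600f / 300f; // 2
        float match = 0.5f;

        // Linear blend: biased, gives 1.25
        float linear = Mathf.Lerp(scaleW, scaleH, match);

        // Logarithmic blend (what CanvasScaler's MatchWidthOrHeight does): gives 1
        float logBlend = Mathf.Pow(2f, Mathf.Lerp(Mathf.Log(scaleW, 2f), Mathf.Log(scaleH, 2f), match));

        Debug.Log("linear = " + linear + ", logarithmic = " + logBlend);
    }
}
```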

Constant Physical Size

Scales according to the hardware device's DPI (dots per inch).

  1. Physical Unit: the unit type used

  2. Fallback Screen DPI: fallback DPI, used when the device DPI cannot be determined

  3. Default Sprite DPI: the default image DPI

```csharp
float currentDpi = Screen.dpi;
float dpi = (currentDpi == 0 ? m_FallbackScreenDPI : currentDpi);
float targetDPI = 1;

switch (m_PhysicalUnit)
{
    case Unit.Centimeters: targetDPI = 2.54f; break;
    case Unit.Millimeters: targetDPI = 25.4f; break;
    case Unit.Inches:      targetDPI = 1;     break;
    case Unit.Points:      targetDPI = 72;    break;
    case Unit.Picas:       targetDPI = 6;     break;
}

SetScaleFactor(dpi / targetDPI);
SetReferencePixelsPerUnit(m_ReferencePixelsPerUnit * targetDPI / m_DefaultSpriteDPI);
```

Conclusion

ScaleFactor is the ratio of the current hardware DPI to the target unit's DPI.

ReferencePixelsPerUnit is recomputed from the current DPI and then passed to the Canvas to determine sizes. The formulas:

New Reference Pixels Per Unit = Reference Pixels Per Unit × target unit DPI / Default Sprite DPI

UI size = original image size (pixels) / (Pixels Per Unit / new Reference Pixels Per Unit)

Reference: Unity Manual - Canvas

docs.unity3d.com/Manual/clas...


Unity UGUI Principles (3): RectTransform

Goal

1. Understand the RectTransform Component
2. Anchor
3. Pivot
4. Blue Print Mode and Raw Edit Mode

Environment and version

Windows 7

Unity 5.2.4

RectTransform

RectTransform is the 2D counterpart of Transform. Transform represents a single point, while RectTransform represents a 2D rectangle (UI space). If both parent and child objects have a RectTransform, the child can position and size itself within the parent's rectangle. In short, RectTransform defines the position, rotation, and size of a UI element.

Anchor

The anchor (alignment point) of the object. If both parent and child have a RectTransform, the child aligns itself to the parent according to its Anchor, which consists of Min and Max positions, shown as 4 triangles around the object in the figure below.

When you drag the 4 triangles to adjust the Anchor, a percentage label helpfully appears; this percentage is the child's proportion within the parent. For an image under a Canvas whose Anchor Min and Anchor Max are both (0.5, 0.5), see the left part of the figure below.

If Anchor Min is adjusted to (0.3, 0.5) and Anchor Max to (0.5, 0.7), see the right part below.

Note that Pos X, Pos Y, Width, Height in the left image change to Left, Top, Right, Bottom in the right image.

This is because when the Anchor Min and Max coincide at a single point, the object's coordinates and size are displayed; when they do not coincide (they form a rectangle), the distances to the Anchor rectangle are displayed instead, as in the figure below. (When we move the object, the current distances to the Anchor are also helpfully displayed.)

If that is still unclear, let's work through examples. With an image under a Canvas whose Anchor Min and Anchor Max are both (0.5, 0.5), the object is aligned to the center of the parent. When the parent's size changes, the result is as follows.

With an image under the Canvas whose Anchor Min and Anchor Max are both (0.0, 1.0), the object is aligned to the parent's upper-left corner. When the parent's size changes, the object stays fixed at the upper-left corner, as follows.

With an image under the Canvas whose Anchor Min is (0.0, 0.0) and Anchor Max is (1.0, 0.0), the object is aligned to the parent's lower-left and lower-right corners. When the parent's size changes, the object's width follows the parent, as follows.

From these examples we know the child aligns to the parent according to its Anchor, and is updated through the Anchor when the parent's size changes. As mentioned above, when dragging the 4 triangles the percentage shown is the child's proportion within the parent. The following example starts with: Parent Size (400, 350)

Image Size (120, 105)

Anchor Min is (0.2, 0.5), Anchor Max is (0.5, 0.8)

After halving the parent: Parent Size (200, 175)

Image Size (60, 52.5): Image Width = 400 × 50% × 30% = 60; Image Height = 350 × 50% × 30% = 52.5. Anchor Min stays (0.2, 0.5), Anchor Max stays (0.5, 0.8).

So after the parent is halved, it updates the child through the child's Anchor proportions. This is how we support different screen resolutions and automatically adapt UI size and position.

Anchor Presets

Click the upper-left corner of the RectTransform to open the Anchor Presets tool, which lists commonly used Anchors for quick application. Hold Shift to also set the Pivot, the object's own reference point, which affects its rotation, scaling, and position; hold Alt to also set the position. To change the UI Pivot, first enable the Pivot button in the toolbar, as below.

When Pivot is (0.5, 0.5) / When Pivot is (0, 1)
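The Anchor and Pivot values shown above can also be set from a script. A minimal sketch (class name and values are illustrative):

```csharp
using UnityEngine;

// Sets Anchor and Pivot from code instead of the Anchor Presets tool.
public class AnchorSetup : MonoBehaviour
{
    void Start()
    {
        RectTransform rt = GetComponent<RectTransform>();
        rt.anchorMin = new Vector2(0.2f, 0.5f);  // Anchor Min
        rt.anchorMax = new Vector2(0.5f, 0.8f);  // Anchor Max
        rt.pivot = new Vector2(0f, 1f);          // Pivot at the upper-left corner
        rt.anchoredPosition = Vector2.zero;      // position relative to the anchors
    }
}
```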

Blue Print Mode, Raw Edit Mode

Blue Print Mode (blueprint mode) ignores the object's Local Rotation and Local Scale, making it convenient to edit the object as if at its original rotation and size. In Raw Edit Mode (raw edit mode), when adjusting Pivot and Anchor in the Inspector the object keeps its current position and size (the numeric values in the Inspector). The adjustments behave as follows; watch the numeric values.

Adjusting Pivot in the Inspector / Adjusting Anchor in the Inspector

Reference: Unity Manual - Basic Layout

docs.unity3d.com/Manual/UIBa...

Unity's uGUI's adjustment function (Introduction to RectTransform)

tsubakit1.hateblo.jp/entry/2014/...


Unity UGUI Principles (4): Event System Manager Events and Triggers

Goal

1. Event System
2. Input Module input control
3. Graphic Raycaster
4. Physics Raycaster and Physics 2D Raycaster

Environment and version

Windows 7

Unity 5.2.4

Event System

When a UI is created, Unity automatically creates the Event System object for us. It sends events to objects based on mouse, touch, and keyboard input, and carries three components: Event System Manager, Standalone Input Module, and Touch Input Module.

1.Event System Manager

Controls all events and coordinates the input methods (Input Modules: mouse, touch, keyboard) with the currently selected object. Every Update, the Event System receives all calls and decides which Input Module to use at that moment.

Event System Info: while in Play mode, click the Event System object and the Inspector shows the selected object, the pointer position, and the Camera receiving the event.

First Selected

The object selected first at startup. For example, select an InputField here, press Play, and the cursor focus is placed in that InputField.

Send Navigation Events

Whether UI navigation is enabled. Navigation lets the keyboard's Up, Down, Left, Right, Cancel (Esc), and Submit (Enter) keys control which UI element is selected.

Example: with several menu buttons on screen, we can set each button's Navigation options. Using the Explicit mode, we specify exactly which object is selected when the keyboard's Up, Down, Left, or Right key is pressed.

Select On Up: the object selected after pressing the Up key; Down, Left, and Right work the same way.

Visualize button: press Visualize to see yellow lines pointing at the target objects.

Drag Threshold

Drag event sensitivity; the lower the value, the more sensitive.

2.Standalone Input Module

The input module for desktop, mainly handling mouse and keyboard input. It uses the Raycasters in the Scene to work out which element was hit and dispatches the event.

Horizontal Axis

The name of the horizontal axis used by the Input Module; it can be set to a value from the Input Manager. Vertical Axis, Submit Button, and Cancel Button work the same way and are not detailed here.

Input Actions Per Second

The maximum number of key and mouse inputs processed per second.

Repeat Delay

The delay before input starts repeating.

Complete event flow

Keyboard input

1. Move event: the input axis and the Left/Right/Up/Down keys are read through the Input Manager and passed to the selected object.

2. Submit, Cancel buttons: when the object is already pressed, the submit and cancel keys are read through the Input Manager and passed to the selected object.

Mouse input

1. On a new press:

a. Send the PointerEnter event

b. Send the PointerPress event

c. Cache drag-related data

d. Send the BeginDrag event

e. Set the Event System's selected object to the pressed object

2. While held down (dragging):

a. Handle movement

b. Send the Drag event

c. Send the PointerEnter and PointerExit events for objects crossed while dragging

3. On release:

a. Send the PointerUp event

b. If released over the same object that was pressed, send the PointerClick event

c. If drag data was cached, send the Drop event

d. Send the EndDrag event

4. The mouse scroll wheel sends a Scroll event (a sketch for observing this flow follows the list).
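A minimal sketch (class name illustrative) to observe the event order above: attach it to a UI Graphic and watch the Console while clicking and dragging.

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Logs the order: PointerDown -> BeginDrag -> Drag -> PointerUp -> PointerClick/EndDrag.
public class EventFlowLogger : MonoBehaviour,
    IPointerDownHandler, IBeginDragHandler, IDragHandler,
    IEndDragHandler, IPointerUpHandler, IPointerClickHandler
{
    public void OnPointerDown(PointerEventData e)  { print("PointerDown"); }
    public void OnBeginDrag(PointerEventData e)    { print("BeginDrag"); }
    public void OnDrag(PointerEventData e)         { print("Drag"); }
    public void OnEndDrag(PointerEventData e)      { print("EndDrag"); }
    public void OnPointerUp(PointerEventData e)    { print("PointerUp"); }
    public void OnPointerClick(PointerEventData e) { print("PointerClick"); }
}
```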

3.Touch Input Module

The touch input module, mainly used on mobile devices, responds to touch and drag. It uses the Raycasters in the Scene to work out which element was hit and dispatches the event.

Complete event flow

Same as the Standalone Input Module's mouse input; think of a mouse click as a touch.

4. Event System trigger process

1. User input (mouse, touch, keyboard)

2. The Event System Manager decides whether to use the Standalone or the Touch Input Module

3. The chosen Input Module uses the Raycasters in the Scene to determine which element was hit

4. The event is sent

Graphic Raycaster

Component location: Unity Menu Item → Component → Event → Graphic Raycaster

The Graphic Raycaster is one of the components created on a Canvas object. It looks at all Graphics under the Canvas and detects whether any has been hit. Raycasting means projecting an invisible line from a given position in a given direction and checking whether anything on that line is hit; the official documentation covers raycasting in detail. Here it is used to decide whether a UI graphic was clicked.

Ignore Reversed Graphics:

Whether raycasting should ignore graphics facing away from the screen.

Example: if a graphic's Y axis is rotated 180°, it faces away from the screen. With this option checked, the graphic is ignored and not detected.

Blocked Objects, Blocking Mask:

Mainly used when the Canvas Render Mode is World Space or Screen Space - Camera: 3D or 2D objects in front of the UI can block the ray from reaching the UI graphics.

Blocked Objects: the type of object that blocks the ray

Layers checked in the Blocking Mask block the ray

Example: place a button on screen that deliberately overlaps a Cube; clicking the overlap still triggers the Button.

If you change the Cube's Layer to Test01, set Blocked Objects to Three D, and check only Test01 in the Blocking Mask, clicking the overlap again shows the Cube now blocks the raycast: the button never receives the ray, so of course it does not react.

Physics Raycaster

Component location: Unity Menu Item → Component → Event → Physics Raycaster

Detects 3D GameObjects in the Scene through the Camera (a Collider Component is required). Objects implementing the Event Interfaces receive Message notifications; for example, a 3D GameObject can receive click or drag events. See the Supported Events list in the Unity Manual for more.

Let us understand through examples

1. Create an EventSystem for event handling

Object location: Unity Menu Item → GameObject → UI → EventSystem

2. Add a Physics Raycaster Component to the Camera to cast the rays

3. Implement the Event Interfaces. There are two ways: create a Script that implements the interfaces directly, or use the Event Trigger Component

The first way: create a Script that implements the interfaces directly

a. Create a Script to implement Event Interfaces

EventTest.cs

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class EventTest : MonoBehaviour, IPointerDownHandler
{
    public void OnPointerDown(PointerEventData eventData)
    {
        print(gameObject.name);
    }
}
```

Line 2: imports the namespace UnityEngine.EventSystems

Line 4: implements the event interface, here IPointerDownHandler (pointer-down events); see the Supported Events list for more

Lines 6~9: the implementation, receiving a PointerEventData with the event data

b. Create a 3D object (here called Cube) and add a BoxCollider Component

c. Attach the Script to the Cube; the Intercepted Events info appears in the Inspector, showing which events are being listened for

d. Clicking the Cube now calls the OnPointerDown method with the event data

The second way: use the Event Trigger Component

a. Create a Script and implement a method to receive Event Trigger notifications

EventTriggerTest.cs

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class EventTriggerTest : MonoBehaviour
{
    // BaseEventData: the event data is passed in dynamically
    public void OnPointerDown(BaseEventData eventData)
    {
        print("OnPointerDown--BaseEventData");
    }

    // No-argument version
    public void OnPointerDown()
    {
        print("OnPointerDown--non");
    }

    // Version taking an int
    public void OnPointerDown(int i)
    {
        print("OnPointerDown--int");
    }
}
```

Line 2: imports the namespace UnityEngine.EventSystems

Lines 6~22: the three OnPointerDown overloads, showing three ways to receive the notification

b. Create a 3D object (here called Cube) and add BoxCollider Component

c. Put the Script under the Cube

d. Add an Event Trigger Component to the Cube; it receives events from the Event System and calls the implemented methods

Component location: Unity Menu Item → Component → Event → Event Trigger

e. Click Add New Event Type and choose the event type to implement; PointerDown is used here as the example

f. A UnityEvent is added. UnityEvents are an editor-driven way to configure which methods to call and which properties to set when the event fires. For details see the links below; a brief description follows.

Talk nonsense, write casually-Unity: Use UnityEngine.Events to make the program more flexible and stable

Unity Manual: UnityEvents

After clicking the "+" button, drag in the Scene GameObject to be notified. UnityEvent then lists all public methods and properties on that GameObject, letting you add the methods to call and the properties to set when the event fires.

g. Drag the Cube's GameObject in and point the notifications at the 3 methods in the Script

h. Clicking the Cube now triggers PointerDown and notifies the 3 methods in the Script (the same setup can also be built from code; see the sketch below)

4. Implementation notes:

The Scene must have an EventSystem GameObject

The Camera must have a Physics Raycaster Component

The 3D GameObject must have a Collider Component

There are two ways to implement the Event Interfaces: create a Script that implements the interfaces directly, or use the Event Trigger Component. As the walkthrough shows, the Event Trigger can be configured in the editor with the methods to call and the properties to set when the event fires, which is more flexible.

Physics 2D Raycaster

Component location: Unity Menu Item → Component → Event → Physics 2D Raycaster

The only difference from the Physics Raycaster is that the Physics 2D Raycaster detects 2D GameObjects in the Scene; naturally the GameObject must have a Collider2D Component. The rest is not repeated here.

Postscript: we walked through the whole Event System flow across the different input methods and their relationship with Raycasters, and saw how to implement and apply events. Whether for 3D, 2D, or UI objects, it applies easily, greatly speeding development and simplifying the code; it is a very convenient feature.

Reference: Unity Manual - Event System

docs.unity3d.com/Manual/Even...

Unity Manual: UnityEvents

docs.unity3d.com/Manual/Unit...

Unity Raycasting

unity3d.com/cn/learn/tu...

Talk nonsense, write casually Unity: Use UnityEngine.Events to make the program more flexible and stable

godstamps.blogspot.tw/2015/10/uni...


Unity UGUI Principles (5): Auto Layout automatic layout

Goal

1. Auto Layout System architecture
2. Layout Element element sizes
3. Horizontal, Vertical, Grid Layout Group element arrangement
4. Content Size Fitter and Aspect Ratio Fitter size control

Environment and version

Windows 7

Unity 5.2.4

Auto Layout System

The Auto Layout System is built on top of the Rect Transform layout system and automatically adjusts the size, position, and spacing of one or more elements. It splits into two parts: Layout Controllers (parent objects) and Layout Elements (child objects). The Auto Layout architecture is shown below (theory first; the practice follows later).

Layout Element (child object)

After selecting a UI element, switch to Layout Properties at the bottom of the Inspector to see this information.

Layout Controllers read the Layout Element sizes and allocate space to children through the different layout modes. The basic rules:

Minimum Size is allocated first

If there is enough space, Preferred Size is allocated

If there is extra space, Flexible Size is allocated

The figure below shows how an image's width grows (theory first; the practice follows later):

First allocate Minimum Size (300, red part)

If there is enough space, allocate Preferred Size (300~500, green part)

If there is extra space, allocate Flexible Size: 1 (500~700, blue part)

Flexible is special: it represents a share of the whole size. If two objects under the Layout have Flexible set to 0.3 and 0.7 respectively, the proportions become 3:7, as in the figure below.

Also note that Text and Image Components automatically report a Preferred Size based on their content (a code sketch for Layout Element follows).
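A minimal sketch (class name and values illustrative) of setting the Layout Element sizes from code; attach it to a child of a Layout Group.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Overrides the default Layout Element sizes, mirroring the figures above.
public class LayoutElementSetup : MonoBehaviour
{
    void Start()
    {
        LayoutElement element = gameObject.AddComponent<LayoutElement>();
        element.minWidth = 300f;        // allocated first
        element.preferredWidth = 500f;  // allocated if there is enough space
        element.flexibleWidth = 1f;     // share of any remaining space
    }
}
```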

Layout Controllers (parent objects): Layout Group

A Layout Group does not control its own size; it controls the sizes and positions of its children. In most cases, it allocates space according to each element's minimum, preferred, and flexible sizes. Layout groups can also be nested. There are 3 types: Horizontal, Vertical, and Grid.

Horizontal Layout Group

Arranges children horizontally (by Width).

Component location: Unity Menu Item → Component → Layout → Horizontal Layout Group

Padding: the padding inside the group's edges

Spacing: the spacing of each element

Child Alignment: where children align when they do not fill the whole space

Child Force Expand: forces children to expand to fill the space

Understand the parameters through examples:

A. Open a new Scene: Unity Menu Item → File → New Scene

B. Add a Canvas: Unity Menu Item → GameObject → UI → Canvas

C. Add an empty object under the Canvas as the Layout Controller (below, the parent object)

D. Add a Horizontal Layout Group Component to the parent: Unity Menu Item → Component → Layout → Horizontal Layout Group

E. Create 5 Buttons (children) under the parent; as shown below, when the parent's size changes, the children's sizes are allocated automatically

F. The Buttons' Rect Transform Components can no longer be adjusted, because the Horizontal Layout Group now allocates the space; the Rect Transform shows which Layout Group currently controls it

G. Adjust the Padding value as shown; you can see the padded area

H. Adjust the Spacing value as shown; you can see the element spacing

I. Next, add a Layout Element Component to each of the 5 Buttons to override the default sizes; it is used to set each element's size manually. Component location: Unity Menu Item → Component → Layout → Layout Element

J. Now uncheck the Horizontal Layout Group's Child Force Expand Width so children are no longer forced to fill the extra space; instead, set sizes manually through the Layout Element

K. Several different settings follow, to understand how the Horizontal Layout Group reads the Layout Element sizes and allocates space to children

A reminder of the allocation rules; if unsure, go back to the Layout Element section above:

Assign the Minimum Size first

If there is enough space, allocate Preferred Size

If there is additional space, allocate Flexible Size

Change the 5 Buttons' Layout Element Min Width to 20, 30, 40, 50, 60. You can now see each Button's width allocation. Changing the parent's size does not change the children, because only Min Width is allocated; no extra available space is distributed.

Changing the Horizontal Layout Group's Child Alignment now aligns the elements.

Parent Layout Properties: Min Width = the 5 button widths (20+30+40+50+60 = 200) + Spacing (40) + Padding Left and Right (20) = 260

Now adjust the Layout Element value of the first Button as shown in the figure

Set the Preferred Width to 100 here

1. First allocate Minimum Size (20)

2. With enough space, the remaining Preferred Size (the 20~100 range) is allocated, as below

Now adjust the Layout Element value of the first Button as shown in the figure

Set Flexible Width to 1 here

1. First allocate Minimum Size (20)

2. If there is enough space, the remaining Preferred Size (20~100 space) will be allocated

3. If there is extra space, allocate the remaining Flexible Size, as shown below

Now check the Horizontal Layout Group's Child Force Expand Width to force children to fill the remaining space

1. First allocate Minimum Size (20)

2. If there is enough space, the remaining Preferred Size (20~100 space) will be allocated

3. If there is extra space, it is shared according to the elements' Flexible Size and Child Force Expand Width

Conclusion: as seen above, every element is allocated its Minimum Size first; with enough space, the remaining Preferred Size is allocated; and finally Flexible Size and Child Force Expand Width share any extra space.

So far we have learned how the Horizontal Layout Group obtains the Layout Element size to allocate child objects

Vertical Layout Group

Arranges children vertically (by Height); it differs from the Horizontal Layout Group only in direction, so it is not repeated here.

Component location: Unity Menu Item → Component → Layout → Vertical Layout Group

Grid Layout Group

Arrange child objects in a grid

Component location: Unity Menu Item → Component → Layout → Grid Layout Group

Padding: the padding inside the group's edges

Cell Size: the width and height of each element

Spacing: the spacing of each element

Start Corner: the corner where the arrangement starts: upper left, upper right, lower left, or lower right. Watch the element numbering.

Start Axis: arrange horizontally or vertically. Watch the element numbering.

Child Alignment: where children align when they do not fill the whole space

Constraint: arrangement constraint (a code sketch follows this list)

Flexible: arranges flexibly based on size

Fixed Column Count: constrains the number of columns (vertical)

Fixed Row Count: constrains the number of rows (horizontal)
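A minimal sketch (class name and values illustrative) of configuring these Grid Layout Group settings from code; attach it to the parent object.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Configures a Grid Layout Group with the parameters described above.
public class GridSetup : MonoBehaviour
{
    void Start()
    {
        GridLayoutGroup grid = gameObject.AddComponent<GridLayoutGroup>();
        grid.cellSize = new Vector2(100f, 100f);                       // Cell Size
        grid.spacing = new Vector2(10f, 10f);                          // Spacing
        grid.startCorner = GridLayoutGroup.Corner.UpperLeft;           // Start Corner
        grid.startAxis = GridLayoutGroup.Axis.Horizontal;              // Start Axis
        grid.childAlignment = TextAnchor.MiddleCenter;                 // Child Alignment
        grid.constraint = GridLayoutGroup.Constraint.FixedColumnCount; // Constraint
        grid.constraintCount = 3;                                      // 3 columns
    }
}
```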

Layout Fitter

Controls the size of Layout Controllers. The size depends on the children or on a configured ratio; split into Content Size Fitter and Aspect Ratio Fitter.

Content Size Fitter

Controls the size of the Layout Controller (parent). The size depends on the children's Minimum or Preferred size; the growth direction can be changed through the Pivot.

Component location: Unity Menu Item → Component → Layout → Content Size Fitter

Horizontal Fit, Vertical Fit: fit adjustment in each direction

None: no adjustment

Min Size: adjusts to the children's Minimum size

Preferred Size: adjusts to the children's Preferred size

Understanding through an example:

Suppose we need the parent's size to scale with the children's size, as follows (a black frame is added to make the parent's size easier to see).

A. Open a new Scene: Unity Menu Item → File → New Scene

B. Add a Canvas: Unity Menu Item → GameObject → UI → Canvas

C. Add an empty object under the Canvas as the Layout Controller (below, the parent object)

D. Add a Horizontal Layout Group Component to the parent: Unity Menu Item → Component → Layout → Horizontal Layout Group

At this point, if a Button (child) is added, then as described above the Horizontal Layout Group allocates the child's size from its Layout Element without changing the parent's own size, as below (black frame added to make the parent's size easier to see).

E. Add a Button (child) under the parent, add a Layout Element Component to override the default size, and set Minimum Width to 100. Component location: Unity Menu Item → Component → Layout → Layout Element

F. Add a Content Size Fitter Component to the parent and set Horizontal Fit to Min Size; the parent's size now follows the children's Minimum Width (horizontally it effectively reads the children's Width)

G. Now, duplicating the Button changes the parent's own size too, as below

H. Adjusting the parent's Pivot controls the growth direction, as below

I. In this example we used a Horizontal Layout Group to arrange the children, added a Layout Element to the child to override the default size, and finally had the Content Size Fitter read the children's Layout Element to set the parent's size. The parent now scales with its children (the same setup from code is sketched below).
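A minimal sketch (class name illustrative) of the same parent-object setup done from code; the vertical fit mode is an example choice.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Horizontal Layout Group + Content Size Fitter, as in steps D and F above.
public class FitToChildren : MonoBehaviour
{
    void Start()
    {
        gameObject.AddComponent<HorizontalLayoutGroup>();

        ContentSizeFitter fitter = gameObject.AddComponent<ContentSizeFitter>();
        fitter.horizontalFit = ContentSizeFitter.FitMode.MinSize;     // follow children's Minimum Width
        fitter.verticalFit = ContentSizeFitter.FitMode.PreferredSize; // follow children's Preferred Height
    }
}
```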

Aspect Ratio Fitter

Controls the size of Layout Controllers, adjusting it by the object's aspect ratio; the growth direction can be changed through the Pivot.

Component location: Unity Menu Item → Component → Layout → Aspect Ratio Fitter

Aspect Mode: adjustment mode

None: no adjustment

Width Controls Height:

Width is the basis; Height follows proportionally.

When Width changes, Height changes proportionally.

Height Controls Width:

Height is the basis; Width follows proportionally.

When Height changes, Width changes proportionally.

Fit In Parent: automatically adjusts width, height, position, and anchors by the ratio so the element fits entirely inside the parent. This mode may not cover all the space.

Adjust the ratio (a black background on the parent makes it easier to see)

Adjust the parent's size; the object stays fitted inside the parent at its ratio

Envelope Parent: automatically adjusts width, height, position, and anchors by the ratio so the element completely covers the parent. This mode may extend beyond the space.

Adjust the ratio (a black frame on the parent makes it easier to see)

Adjust the parent's size; the object covers the parent at its ratio

Aspect Ratio: the ratio, defined as width/height.

Difference:

Content Size Fitter adjusts the size automatically from the children.

Aspect Ratio Fitter adjusts from a value (the aspect ratio).
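A minimal sketch (class name and ratio illustrative) of configuring the Aspect Ratio Fitter from code:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Keeps a UI object at a fixed 16:9 ratio fitted inside its parent.
public class KeepAspect : MonoBehaviour
{
    void Start()
    {
        AspectRatioFitter fitter = gameObject.AddComponent<AspectRatioFitter>();
        fitter.aspectMode = AspectRatioFitter.AspectMode.FitInParent; // fit inside the parent
        fitter.aspectRatio = 16f / 9f;                                // width / height
    }
}
```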

Postscript: the Auto Layout System arranges multiple UI elements quickly and conveniently, re-adjusting content automatically when sizes change, and it supports nested layouts. It stays convenient and intuitive through later adjustments and changes, making it one of the must-learn features of the UI system!

References: Unity Manual - Auto Layout

docs.unity3d.com/Manual/UIAu...

Unity Manual- Auto Layout_UI Reference

docs.unity3d.com/Manual/comp...


Unity: UGUI automatically adjusts size and position for various screens

UGUI adaptive

Two articles about automatic GUI adjustment were published before: "Unity automatically adjusts the scale and position of the GUI" and "Unity: automatically adjusts the zoom and position of the screen in response to various screen ratios". Since Unity shipped the new GUI system (UGUI) in version 4.6, building UI in Unity has become easier and more convenient, with no need to rely on third-party UI tools. The event system was also redesigned: the interaction between UI events and Components is more visual, and coupling is greatly reduced, which makes everything more flexible. Although UGUI solves many of the old UI production problems, some parts still need adjusting by hand during actual development.

Since Unity 4.6, UGUI game objects use a Rect Transform instead of the ordinary Transform, adding fields such as width, height, Anchors, and Pivot. These fields are a great help in visual UI design, and Anchors in particular give the UI as a whole powerful and flexible control across screens of different ratios.

If you observe the screen with Free Aspect in Unity's Game view, you can drag the window edge to change the screen ratio, and you will see that UGUI by itself does not reposition or rescale: changing 320x480 to 480x320 can leave the UI cut off at the screen edges. Anchors are very useful here, letting UI images, buttons, and other elements change their width and height as the screen ratio changes. The drawback is that text does not change size accordingly. If the text has Best Fit checked, its Size does adjust automatically; however, UI elements such as buttons and images change their own aspect ratio as the screen ratio changes, while text only changes its Font Size. So when the screen ratio changes, images deform but text does not, and the distance between text and other UI, or the gap between a button's edge and its label, can change in odd ways. None of this is what we want. The ideal is that when the screen ratio changes, UI images, text, buttons, and so on keep their original aspect ratio and automatically scale their size and position.

Normal portrait

When turned to landscape, the screen is cut off

Set up Anchors for the UI

After turning to landscape, the UI is automatically deformed and adjusted, but the text is different

When the first UGUI game object is created in a scene, a Canvas game object is automatically created first. It carries three components by default: Canvas, Canvas Scaler, and Graphic Raycaster. The Canvas Scaler controls the overall scaling of UGUI, so the steps below achieve the basic goal of scaling the UI at its original proportions:

  • 1. Keep all UI Anchors at the default value (0.5).
  • 2. Set the Canvas Scaler's Ui Scale Mode field to Scale With Screen Size.
  • 3. Enter the width (X) and height (Y) of the base resolution in the Reference Resolution field.
  • 4. Select Match Width Or Height in the Screen Match Mode field.
  • 5. If the screen is landscape, enter 0 in the Match field; if portrait, enter 1.

Keep Anchors at the default value of 0.5

Set the relevant fields of Canvas Scaler

This way, when the screen turns from portrait to landscape, the GUI automatically keeps its original proportions while adjusting its size and position.

Converted to a landscape screen, the proportion and position of the UI remain unchanged

However, this is the ideal case where the ratio is the same and the screen is simply rotated. With a different ratio, for example a UI laid out for the iPhone 4's 640x960 meeting the iPhone 5's 640x1136 screen, both sides end up cut off.

When the proportions are different, the edges are cut off

At this point we must change the Match field of the Canvas game object's Canvas Scaler component to 0, so the UI is scaled to the screen at its original proportions.

Adjust the Match of the Canvas Scaler so that the cut-off part returns to the screen.

To handle arbitrary screen ratios (especially on Android and desktop platforms), we can attach a component containing the following code to the Canvas to adjust the Canvas Scaler's Match field automatically.

```csharp
void Awake()
{
    CanvasScaler canvasScaler = GetComponent<CanvasScaler>();
    float screenWidthScale = Screen.width / canvasScaler.referenceResolution.x;
    float screenHeightScale = Screen.height / canvasScaler.referenceResolution.y;
    canvasScaler.matchWidthOrHeight = screenWidthScale > screenHeightScale ? 1 : 0;
}
```

With this in place, future UI production is much easier. Unless there is a special requirement, there is basically no need to touch the UI's Anchors; mostly you just adjust size and position, and the layout keeps its designed proportions across screen ratios automatically. If you need dynamic UI scaling or movement to liven up the screen, drive it through the object's Local-space values; the scaling itself needs no special handling, since the Canvas Scaler takes care of it.

There is one exception this method cannot solve at present: when UI text uses Rich Text's Size tag to set the text size, that size is absolute and is not affected by the Canvas Scaler. Pay special attention to this.

Incidentally, beyond UI layout and transitions, sometimes you want particle effects on the UI, or non-GUI game objects interacting with the on-screen UI. Just change the Canvas component's Render Mode to Screen Space - Camera and set Render Camera and Plane Distance; objects placed between the UI plane and the designated Camera are then displayed over the UI and can interact with it. For a Sprite game object, you can order it against the GUI via its Sorting Layer. However, with an automatically adjusted GUI, the size and position of these non-GUI objects may differ from what you expect; to avoid that, refer to the earlier article "Unity: Automatically adjust the screen zoom and position for various screen ratios" (written for Unity 4.3) to adjust the Camera.

Change the Render Mode of Canvas to Screen Space-Camera

PS: The Unity version used here is 5.0.1f1.


Unity: UI process management mechanism for making UGUI

Why do you need a UI process management mechanism

Since Unity 4.6 shipped the new GUI system, Unity finally has a reasonably complete visual UI editing tool. We can easily and intuitively add buttons to the screen and, with drag-and-drop and drop-down menus, configure which Component function on which GameObject a UI event should execute, so triggering our own code from the UI is very easy. But a full game can have a great many screens, and different buttons or behaviors move to different screens and open different views. Without planned rules for how the UI flows, complex screen transitions become hard to maintain, and after several rounds of changes the screen flow can become thoroughly confusing and produce needless bugs.

No matter how beautiful and polished the UI is, or how well feedback, layout, and effects are done, if the overall structure and flow are wrong, all that hard work is wasted: a UI flow that makes players lose their way is unpleasant to use. And even if the flow design itself is fine, the lack of planned, consistent flow management easily causes trouble in production and maintenance.

For example, most UI screens have a "back" button to return to the previous screen. If a screen was entered from screen A it should return to screen A; entered from screen B, it should return to screen B, just like a web browser: no matter where the current page came from, pressing "previous" returns to the page visited before it. The UI's "back" button works the same way, except that it is a button we configure to call some Component function that opens some UI screen. Intuitively we tend to hard-code it: screen A enters screen B, so B's back button opens screen A. What if screen C can also enter screen B? Then we keep a temporary record so B can judge whether it was entered from A or C, or notify B where it came from on entry. The first problem: every new screen that can enter B forces B's back button logic to be modified to judge correctly. The second problem: suppose A and B can both enter C, C can enter A and D, A and D can enter B, and so on for ever deeper criss-crossing flows; adding one UI screen, or changing one screen's flow, means modifying the back buttons of almost every screen. The work is vast, and this kind of everything-affects-everything maintenance is the easiest way to produce unknown bugs.

For this reason, the designed UI process management mechanism must meet at least two conditions:

  • Players will not get lost when using it.
  • No matter how many screens are changed in the future, there is no need to modify or maintain the return button.

To satisfy these two conditions, the most direct method is the web browser's "previous page" function mentioned above. How is it implemented? The concept is the browsing history: we simply record the history of UI screens the player has switched through, then return through it in order. The player never gets lost, and in production and maintenance we never need a custom back button per screen.

With the flow concept settled, there are a few other parts to watch:

  • Enter and exit animations between UI screens.
  • While a screen is entering or exiting, UI events must not take effect.
  • A UI screen that has exited, even if no longer visible, should not be allowed to keep executing.

With all that said, what matters most is the implementation. Below is a video of the whole implementation process; it has no detailed narration, so the article that follows provides the detailed explanation:

The basic structure of the UI canvas

First decide the aspect ratio and resolution used to compose the UI; usually it matches the preview screen set in the Game view.

Create the UI's main canvas (Canvas) and set Ui Scale Mode and Reference Resolution in its Canvas Scaler. This gives the UI basic adaptive behavior on devices with different screen ratios later.

Next, create a Canvas for each UI screen and place the screen's other UI components under it; the Canvas Scaler set on the main canvas then applies directly to every UI screen below it. Another benefit is that you can change each Canvas's Sort Order directly to arrange the screen order. Besides the UI components, each screen also needs an Image acting as a full-screen "transparent occlusion layer", placed above the other UI components. Its GameObject is normally disabled and is enabled while the screen's enter/exit animation plays, so UI buttons cannot be clicked during the transition.

Make animation

Name the UI screen's enter and exit animation files Open and Closed respectively. The simplest approach is fade in/out or zoom in/out; richer animation is of course fine. Most importantly, while the enter or exit animation is playing, remember to enable the "transparent occlusion layer" created earlier.

Enable the "transparent occlusion layer" during animation playback

Set up animation control

After creating the enter and exit animation files for a UI screen, open the Animator view: Unity has automatically created two animation states (State) with the same names as the animation files.

By default, Unity will automatically put AnimationClip in the Animator as the state

Next, create a Transition from the Open state to the Closed state and set its conditions in the Inspector view. Unity assumes a transition between two animation states needs a blending period (for example, a character blending from an idle to a run), which sacrifices a little of the animation's own playback time. UI screen transitions do not need this blending, so set Transition Duration to 0 so the Closed animation plays cleanly. And because Open should not switch to Closed as soon as playback ends, but wait until the program triggers the Out parameter on the Animator, also uncheck Has Exit Time and add the Out condition in Conditions.

Set several related fields in Transition

Then create a Transition from the Closed state to Exit, with Transition Duration 0 and Exit Time 1 in the Inspector view. Why? In theory, once Closed finishes playing, the screen has exited and needs no attention until it next enters. But to stop off-screen UI screens from continuing to execute and wasting performance, we want to disable each screen's entire GameObject after it exits. We will do this later with an Animator state-specific StateMachineBehaviour script, whose OnStateExit is only called after the state ends: if Closed has no outgoing Transition, its OnStateExit never runs. As for Exit Time, 0 to 1 represents the animation from start to finish; we want Closed to play fully to the end, so Exit Time is set to 1.

Set the correct time value

Because Unity assumes a newly created animation file loops, find the Open and Closed animation files and uncheck Loop Time in the Inspector view.

Uncheck Loop Time

Animation state script

The key point here is that when a UI screen exits, its GameObject is disabled at the same time. Previously we would attach our own script to the GameObject and add an Event in the Animation view while authoring the animation. Since Unity 5.0, that is no longer necessary: click any state in the Animator view and press the Add Behaviour button in the Inspector to create a state-specific script, whose contents run at the state's start, during it, at its end, and so on. So, to disable the GameObject after the screen exits, create a StateMachineBehaviour for Closed and add one short line in OnStateExit.

Add Behaviour to the Animator's State

```csharp
using UnityEngine;

public class UIStateClosed : StateMachineBehaviour
{
    // Called after the Closed state has finished and the Animator leaves it
    override public void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        animator.gameObject.SetActive(false);
    }
}
```

UI process management script

Now comes the highlight. With the previous work finished, the rest relies on this script to do the main management. What does this script do? Its tasks are listed below:

  1. Create a list to record the history of UI screens, storing each screen entered so they can be returned to in order.
  2. Record the first UI screen opened in the history; when only one item is left in the history, going back is not allowed.
  3. Entering a screen identical to the current UI screen is not allowed.
  4. The target screen being entered or returned to must be moved to the top layer.
  5. A target screen entered going forward must be recorded in the history.
  6. When going back, the current screen must be removed from the history.

In addition, the previous article "Unity: Understanding the Difference and Application of Tag and Layer" mentioned that hard-coded string values should be avoided in code. This script notifies the Animator to transition to the Closed state using the name set in the Animator's Parameters, so it also declares a public field for setting that name.

With the above work requirements, we can write the following code:

```csharp
using UnityEngine;
using System.Collections.Generic;

public class UIManager : MonoBehaviour
{
    public GameObject startScreen;   // the first UI screen shown
    public string outTrigger;        // name of the Animator trigger parameter

    private List<GameObject> screenHistory;

    void Awake()
    {
        this.screenHistory = new List<GameObject>{ this.startScreen };
    }

    public void ToScreen(GameObject target)
    {
        GameObject current = this.screenHistory[this.screenHistory.Count - 1];
        if (target == null || target == current) return;

        this.PlayScreen(current, target, false, this.screenHistory.Count);
        this.screenHistory.Add(target);
    }

    public void GoBack()
    {
        if (this.screenHistory.Count > 1)
        {
            int currentIndex = this.screenHistory.Count - 1;
            this.PlayScreen(this.screenHistory[currentIndex], this.screenHistory[currentIndex - 1], true, currentIndex - 2);
            this.screenHistory.RemoveAt(currentIndex);
        }
    }

    private void PlayScreen(GameObject current, GameObject target, bool isBack, int order)
    {
        // Play the current screen's exit animation
        current.GetComponent<Animator>().SetTrigger(this.outTrigger);

        if (isBack)
        {
            current.GetComponent<Canvas>().sortingOrder = order;
        }
        else
        {
            current.GetComponent<Canvas>().sortingOrder = order - 1;
            target.GetComponent<Canvas>().sortingOrder = order;
        }

        target.SetActive(true);
    }
}
```

Apply

After completing the above work and code, create an empty GameObject and attach the UIManager script, assign the first UI screen GameObject to be displayed to the Start Screen field, and fill in Out Trigger with the same name as the trigger parameter set in the Animator.

Set the Start Screen and Out Trigger fields

Then, in the On Click of each UI Button, assign the GameObject that carries this UIManager script. If the button goes to the next screen, select ToScreen from the drop-down menu and assign the target UI screen's GameObject to its parameter field; if the button returns to the previous screen, select GoBack from the drop-down menu.

Go to the next UI screen, select ToScreen

Set up the GameObject of the next UI screen

Return to the previous UI screen, select GoBack

Now, at runtime, every screen entered is recorded in the history, and each time you go back you return to the previous screen in order. Every UI screen being displayed is placed on the top layer of the game screen so it is never obscured, and while screens are animating in and out you cannot accidentally press a button and jump to an unexpected screen. Because the navigation is defined as returning the way you came in, users find it more intuitive and are less likely to get lost among many screens. At the same time, thanks to the screen history, as long as the back button always goes through GoBack, no matter how many UI screens are added in the future or how the flow changes, the return function never needs to be modified and will still return to the correct screen.

With that, this simple UI flow management mechanism is complete. Later, if we need to change the logic of the transitions (for example, the ordering rules of the target screen), we only need to modify PlayScreen in UIManager; if we want to create other entry and exit animation effects, we only need to replace the motion in each state of the Animator; and if we make multiple versions of UIManager coexisting in the same project or scene, we can simply change the On Click settings of the Buttons in the UI. The whole mechanism is quite flexible, the player's operation stays clear, and follow-up maintenance is easier.

Case project: pan.baidu.com/s/1sk9bnGh Password: do7y

PS: Currently using Unity version 5.1.0f3


Unity: Use UnityEngine.Events to make the program more flexible and stable

Since Unity 4.6 introduced the new GUI system, the GUI Controls we create have exposed new event fields. For example, after creating a Button, the On Click field in the Inspector window lets you specify which Component function on which GameObject to execute when the button is clicked, so button events can be edited visually and flexibly. Other GUI Controls have similar fields that can be set the same way. These fields are generated by the UnityEvent type under UnityEngine.Events, and our own Components can provide such fields too, allowing visual editing and making the program more flexible.

The video demonstrates how to do this with two examples; through the on-screen operations and voice explanations, I believe you can learn even more.

First, let's briefly review a basic of writing Unity scripts: when we create our own Script in the Project window and attach it to a GameObject, it becomes a Component of that GameObject. Normally, if a variable field (Field) is declared with the public modifier and its type is serializable, a field holding its value appears in the Inspector window, because Unity automatically serializes public fields by default.

A public variable field appears in the Inspector window for editing.

However, if a field declared in the class is not meant for external access, using the public modifier is somewhat inappropriate. When declaring it with another modifier such as private or protected, if you still want the field to appear as an editable field in the Inspector window, you can place the SerializeField attribute on the line above it. Unity then knows this field should be serialized so that it can be edited in the Inspector window.

Private variable fields with SerializeField will also appear in the Inspector window.
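A minimal sketch of both declarations (the class and field names here are just for illustration):

```csharp
using UnityEngine;

public class SerializationDemo : MonoBehaviour
{
    // public: serialized and shown in the Inspector by default
    public int maxHealth = 100;

    // private, but SerializeField makes it editable in the Inspector
    // without exposing it to other scripts
    [SerializeField]
    private float moveSpeed = 5f;
}
```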

If you want an event field like the one on UGUI's Button, the most basic approach is to add using UnityEngine.Events at the top of the program file; then any field declared with UnityEvent as its type will show an event field just like the Button's.

For variable fields of UnityEvent type, event fields will appear in the Inspector window.

The UnityEvent field can be edited quite flexibly in the editor. When the target Component has a property or method declared public, and its parameter type is one of bool, int, float, or string, you can select it here directly and provide the value to pass in; you can also set multiple targets of different types, and they will all be called when the event executes. A method that declares no parameters can be selected as well, but if it declares two or more parameters it will not appear in this menu.
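A minimal sketch of such a field (the DoorSwitch name is just for illustration):

```csharp
using UnityEngine;
using UnityEngine.Events;

public class DoorSwitch : MonoBehaviour
{
    // Shows up in the Inspector like a Button's On Click field
    [SerializeField]
    private UnityEvent onActivate;

    public void Activate()
    {
        // Runs every target assigned in the Inspector; a no-op if none are set
        this.onActivate.Invoke();
    }
}
```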

Although the UnityEvent field is quite convenient and flexible, the value it passes in is set in the editor, and only one fixed value can be set. If the program in our Component wants to supply parameter values to the target directly, UnityEvent alone cannot do it. Fortunately, besides UnityEvent, UnityEngine.Events provides four additional generic classes for us to extend.

As seen in the two examples in the video, the PassEvents Script is dedicated to declaring new event types that extend the UnityEvent generic classes. For details, refer to the UnityEngine.Events section of the official documentation: in addition to UnityEvent there are UnityEvent&lt;T0&gt;, UnityEvent&lt;T0,T1&gt;, UnityEvent&lt;T0,T1,T2&gt;, and UnityEvent&lt;T0,T1,T2,T3&gt; available for extension. They all work the same way; only the number of accepted parameters differs. In other words, by inheriting from these generic classes we can declare a UnityEvent with the parameter types we need, taking anywhere from one to four parameters. For example:

```csharp
[System.Serializable]
public class PassString : UnityEvent<string> {}

[System.Serializable]
public class PassColor : UnityEvent<Color> {}
```

Because these declared classes are meant for declaring field variables that display in the Inspector window, marking the field with SerializeField is not enough; the type itself must also be serializable, so System.Serializable is added above each class.

At this point, we can use our self-defined UnityEvent types to declare event fields that display in the Inspector window.

1. Simple Computer

The first example builds a simple calculator to demonstrate the usage and benefits of UnityEngine.Events. Before making anything, you must first understand what the thing is for and what its functions are, so we use a simple UI to display an expression. It provides two input fields, a text for the arithmetic symbol, a text for displaying the calculation result, and four arithmetic function buttons.

Here, we first define the basic functions of the calculator:

  • Clicking a calculation function button changes the arithmetic symbol text according to the type of calculation.
  • After a calculation function button is clicked, the calculation result text is displayed on the UI.

The functions are quite simple, so we can create a single C# Script in the Project window to handle almost all of them. First, name this Script MyComputer.

Since the calculation result will be converted into a string and sent to the UI for display, UnityEvent is used here to transfer the data. But a plain UnityEvent cannot carry parameters when called from code, so we use the approach mentioned earlier and declare a UnityEvent that can take a string parameter. Declaring such a derived class takes only one line, and in the future we may declare several UnityEvents with different parameters, so we create a C# Script called PassEvents in the Project window to hold these classes. Organized this way, this Script can also be reused directly in other projects.

In the PassEvents Script, because we are deriving from the generic classes of UnityEngine.Events, the file must start with using UnityEngine.Events. This demonstration only passes strings for now, so just declare a class that can pass a string.

```csharp
using UnityEngine.Events;

[System.Serializable]
public class PassString : UnityEvent<string> {}
```

Don't forget System.Serializable; otherwise, you will not see the field in the Inspector window.

In the MyComputer Script, our simple calculator mainly computes the result of two input values. Because of how the UI works, the received values will originally be strings, but the program must treat them as numbers to calculate. So, first declare two internal fields to temporarily store the incoming values. These values come from the End Edit event of the UGUI InputField, but MyComputer Script does not need to care about that much: it just provides two properties so that the outside can pass strings in. No matter who passes them in, MyComputer only provides the entry point and converts the incoming string to a numeric value, which the other calculation functions can then use.

Because the main job of MyComputer Script is to receive the calculation data, compute the results, and pass them out, we declare an event for each calculation result to express what has happened. As for who the result is finally delivered to, MyComputer does not need to manage that.

For now, MyComputer Script only performs four calculation functions: addition, subtraction, multiplication, and division. So we directly compute the result from the stored values, convert the result into a string, and then call the corresponding event through Invoke. With that, the calculator's basic functions are complete.

```csharp
private float _value1;
private float _value2;

[SerializeField] private PassString onAdd;
[SerializeField] private PassString onSubtract;
[SerializeField] private PassString onMultiply;
[SerializeField] private PassString onDivide;

// The UI passes strings in; convert and store them as numbers
public string value1{ set { float.TryParse(value, out this._value1); } }
public string value2{ set { float.TryParse(value, out this._value2); } }

public void Add()
{
    this.onAdd.Invoke((this._value1 + this._value2).ToString());
}

public void Subtract()
{
    this.onSubtract.Invoke((this._value1 - this._value2).ToString());
}

public void Multiply()
{
    this.onMultiply.Invoke((this._value1 * this._value2).ToString());
}

public void Divide()
{
    // Division by zero is not allowed
    if (this._value2 == 0) return;
    this.onDivide.Invoke((this._value1 / this._value2).ToString());
}
```

In the code above, pay special attention to the Divide() part: the divisor must not be zero, otherwise an error will occur, so a check is made before the calculation, and if the divisor is zero the subsequent calculation is not performed.

After writing MyComputer Script, back in Unity, create an empty object and add MyComputer as its Component. You can now clearly see the event fields for addition, subtraction, multiplication, and division in the Inspector window.

The event fields for addition, subtraction, multiplication, and division appear in MyComputer.

Next, set up the relationship between the UI and MyComputer. First, the UI's input values need to be passed to MyComputer, so in the two InputFields, set the End Edit event to pass the input string to MyComputer's value1 and value2 properties.

The string entered in Value1 Field is passed to value1 of MyComputer

The string entered in Value2 Field is passed to value2 of MyComputer

Here you can see that this End Edit is actually the same kind of thing as the PassString we declared. After setting it up, there is no need to specify what string to pass in the Inspector window: the End Edit event fires when text input in the InputField finishes, whether by pressing Enter or by clicking outside the text field so it can no longer receive input, and when it fires it passes the entered text out through the event. So once this is set, whenever text is entered in the UI field, the content is passed to MyComputer, which, as written in our program, converts the received string into a numeric value and stores it.
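For reference, the same wiring could also be done from code instead of the Inspector (a sketch; the field names here are assumptions):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class CalculatorWiring : MonoBehaviour
{
    [SerializeField] private InputField value1Field;  // assumed references
    [SerializeField] private InputField value2Field;
    [SerializeField] private MyComputer computer;

    void Awake()
    {
        // onEndEdit is a UnityEvent<string>, so it matches
        // MyComputer's string properties one to one
        this.value1Field.onEndEdit.AddListener(s => this.computer.value1 = s);
        this.value2Field.onEndEdit.AddListener(s => this.computer.value2 = s);
    }
}
```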

Next, select each of the buttons representing the addition, subtraction, multiplication, and division functions on the UI, and set their On Click event fields to execute the corresponding calculation function on MyComputer.

On Click event of Button + executes Add() of MyComputer

On Click event of Button - executes Subtract() of MyComputer

On Click event of Button x executes Multiply() of MyComputer

On Click event of Button / executes Divide() of MyComputer

As for this On Click event, I believe everyone is familiar with it: it is triggered when the button is clicked. To be precise, it executes after the button is pressed and then released within the bounds of the same button.
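The same On Click targets could likewise be added from code (a sketch; the button and component references are assumptions):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ButtonWiring : MonoBehaviour
{
    [SerializeField] private Button addButton;   // assumed reference
    [SerializeField] private MyComputer computer;

    void Awake()
    {
        // Button.onClick is a plain UnityEvent, so any
        // parameterless public method can be a listener
        this.addButton.onClick.AddListener(this.computer.Add);
    }
}
```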

At this point, the UI can provide the input data to MyComputer and ask it to execute the required functions. Next, we set up MyComputer's addition, subtraction, multiplication, and division events so that the UI can display the results.

Recall the calculator's main functions defined earlier. The first is changing the arithmetic symbol, so every calculation event must be set to change the symbol text on the UI when it fires. Although MyComputer's calculation events pass the result out from the program by default, the UnityEvent field itself can also pass in a static parameter directly. So our code needs no logic for changing the arithmetic symbol; it can simply be set in the Inspector window.

The second is displaying the calculation result on the UI. Since the MyComputer code already passes the result string to the corresponding event, the target of each event just needs to be the Text that displays the result on the UI.

Each calculation function event allows the UI to change the calculation symbol and display the calculation result

In this way, whenever MyComputer performs a calculation, it will pass the string to the UI for display.

Calculation result screen

Now that the calculator's complete set of basic functions is in place, we may want some finer adjustments. For example, if the input values have not changed, a calculation that has already been performed does not need to be calculated again, so we can disable its function button. UGUI's Button has an Interactable field that manages whether the user can interact with the button; when it is turned off, the Button loses its function and is shown in a darker color. So one rule we can add is: after a calculation result is obtained, turn off Interactable on the corresponding function button. This requires no code changes at all; just add another execution target to each calculation function's event field.

Close the button after calculating the result.

The calculated buttons are all closed.

Although a button can be disabled after calculating, if the content of an input field changes there should be a chance to recalculate, so there must be a way to enable the buttons again. We could directly enable Interactable on each button from the End Edit events of the two InputFields, but if the calculator had many function buttons and many input fields, setting them one by one would be slow, and the execution flow would become too scattered.

Instead, we can think of MyComputer as not only responsible for calculation but also as providing a state-reset function. This function does nothing itself; it just invokes a state-reset event. Whatever is set on this event is executed whenever the state-reset function is called. So we add the following code to MyComputer Script:

```csharp
[SerializeField] private UnityEvent onResetStatus;

public void ResetStatus()
{
    this.onResetStatus.Invoke();
}
```

In the Unity editor, we can then set MyComputer's On Reset Status field to turn Interactable back on for the four function buttons.

When the state is reset, the button is enabled again.

Then the End Edit events of the two InputFields simply specify the execution of MyComputer's state-reset function.

The End Edit event specifies the execution of ResetStatus() of MyComputer.

Although the state-reset event in the video just re-enables the buttons, you can also change what the state-reset event does. For example, make the calculation result text turn into a question mark: then every time an input field finishes input, not only are the disabled buttons re-enabled, but the result text also turns into a question mark, and pressing a function button displays the correct result again.

Now that we have the state-reset function, can we disable only the button whose result is currently shown and keep the other buttons enabled? That way, you would not have to re-enter data in the input fields to re-enable the buttons. Before doing this, note that each event field can specify multiple execution targets, and these targets have an execution order: the second executes after the first, and so on down the list.

So this function also needs no code changes; we can just rearrange the contents and order of each calculation function's event so that the state reset is performed first, and then the calculation result is displayed, the current calculation button is disabled, and the arithmetic symbol is changed.

Change to perform state reset first before performing other actions.

At this point, the simple calculator is essentially done. Notice how independent the MyComputer Script's code is: it only provides entry points for incoming data to be stored, provides calculation functions for external callers, and throws the results into events. Who passes the data in, who triggers the calculation functions, and who responds to the events: none of it needs to be managed. Even if no targets are set on any event field, calculations still run as usual without errors from unset targets. MyComputer simply does not need to know who the event targets should be; everything is set or changed in the Inspector window as the actual situation requires.

The same goes for the state-reset part. MyComputer Script is only responsible for providing the state-reset function and invoking the state-reset event; who requests the reset and what the reset does are not its concern.

This keeps the code quite concise while leaving great flexibility in actual use. Because many functional changes can be made without modifying code, we save programming time and avoid errors introduced by hand-editing code. Most importantly, MyComputer Script does not need to know who executes its functions or what target functions its events will eventually execute, so coupling between programs is minimized; if a script written this way is moved to another project, there is no need to worry about its connections to other programs or about missing pieces.

The following is the complete content of MyComputer.cs:

```csharp
using UnityEngine;
using UnityEngine.Events;

public class MyComputer : MonoBehaviour
{
    private float _value1;
    private float _value2;

    [SerializeField] private PassString onAdd;
    [SerializeField] private PassString onSubtract;
    [SerializeField] private PassString onMultiply;
    [SerializeField] private PassString onDivide;
    [SerializeField] private UnityEvent onResetStatus;

    public string value1{ set { float.TryParse(value, out this._value1); } }
    public string value2{ set { float.TryParse(value, out this._value2); } }

    public void Add()
    {
        this.onAdd.Invoke((this._value1 + this._value2).ToString());
    }

    public void Subtract()
    {
        this.onSubtract.Invoke((this._value1 - this._value2).ToString());
    }

    public void Multiply()
    {
        this.onMultiply.Invoke((this._value1 * this._value2).ToString());
    }

    public void Divide()
    {
        if (this._value2 == 0) return;
        this.onDivide.Invoke((this._value1 / this._value2).ToString());
    }

    public void ResetStatus()
    {
        this.onResetStatus.Invoke();
    }
}
```

2. Sphere Control

In the second example, five spheres share the same Components, but because of different settings in actual use they exhibit different behaviors; it also demonstrates how to make UnityEvent bring information back in addition to passing parameters.

First, create five spheres in the scene. These are just Unity's built-in primitive objects; by default, Unity gives each of them a Sphere Collider and a default Material whose Shader is Unity 5's Standard Shader. We do not need to change any of this.

Components of the primitive sphere.
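If you prefer, the same spheres could be created from code (a sketch; the spacing values are arbitrary):

```csharp
using UnityEngine;

public class SphereSpawner : MonoBehaviour
{
    void Start()
    {
        // CreatePrimitive already attaches a MeshRenderer,
        // a Sphere Collider, and the default Material
        for (int i = 0; i < 5; i++)
        {
            GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            sphere.transform.position = new Vector3(i * 1.5f, 0f, 0f);
        }
    }
}
```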

Here, the functional requirements of several spheres are defined:

  • The sphere can be triggered by clicking.
  • The sphere can bounce.
  • The sphere can change color.

Based on these requirements, first create their individual C# Scripts in the Project window and name them SphereTouch, SphereJump, and SphereDiscolor.

Let's write SphereTouch first. SphereTouch only raises an event in response to the user clicking the sphere with the mouse, so it has one UnityEvent event field and implements Unity's built-in OnMouseDown. As long as the GameObject has a Collider Component, pressing the mouse button on it triggers OnMouseDown, and in its body we just invoke the UnityEvent field. What behavior follows the click is someone else's business.

```csharp
using UnityEngine;
using UnityEngine.Events;

public class SphereTouch : MonoBehaviour
{
    [SerializeField] private UnityEvent onTouch;

    // Public entry so other objects can also relay a touch
    public void DoTouch()
    {
        this.onTouch.Invoke();
    }

    void OnMouseDown()
    {
        this.DoTouch();
    }
}
```

This code may look strange: why not just Invoke directly in OnMouseDown? This simply lets SphereTouch expose one more externally callable function so that other objects can relay the click. For example, if object A is clicked and its On Touch event field is set to execute the DoTouch functions of objects B, C, and D, then clicking one object makes four objects react at the same time.

Next is SphereJump. There are many ways to make a ball jump, such as using Unity's animation system, or applying an upward force and letting gravity pull it back down. To keep things simple, we move the position directly in code: a jump, put plainly, is just moving from the original position to a specified height and then moving back. Since how high and how fast are not known in advance, first declare two value fields that can be set in the Inspector window so the target height and jump speed can be adjusted in the editor.

```csharp
[SerializeField] private float height = 1;
[SerializeField] private float speed = 5;
```

Also, if the ball is asked to jump again while a jump is in progress, it would jump up again from mid-jump, which is wrong. So a status record is needed: while the action is in progress, further jump requests are not accepted; only when the action finishes is the next request accepted and performed.

```csharp
private enum Status{ None, Moving }
private Status _status = Status.None;
```

Because the jump we are building is really a combination of two movements, first write a function that moves the object itself from a start point to an end point. Moving between two points is achieved directly with Unity's built-in Vector3.Lerp.

```csharp
private IEnumerator Move(Vector3 source, Vector3 target)
{
    float t = 0;
    while (t < 1){
        transform.position = Vector3.Lerp(source, target, t);
        t += Time.deltaTime * this.speed;
        yield return null;
    }
    // Snap to the exact end position on the final step
    transform.position = target;
}
```

The t of Vector3.Lerp is a value between 0 and 1; treat it as progress from the start point to the end point, with 0 at the start and 1 at the end. Because the time between screen refreshes varies during actual execution, we cannot increase t by a fixed amount each frame; instead we multiply the desired rate of increase (the speed) by Time.deltaTime. The yield return null makes the while loop run once per frame, and when t exceeds 1 the end has been reached and the loop can finish.

One final step completes the movement: the last execution of Vector3.Lerp will not land exactly on t = 1, so at the end the position must be corrected to the exact end position; only then is the movement truly complete.

Note that this Method returns IEnumerator, which means it is used as a Coroutine: inside it, yield can control flow and timing, and to call this Method you must execute it with StartCoroutine.

The jump action itself changes the state to Moving when the jump starts, obtains the start and end points, first moves from start to end, then moves from end back to start, and once the action completes, the jump is over and the state is changed back to None.

```csharp
private IEnumerator DoJump()
{
    this._status = Status.Moving;
    Vector3 source = transform.position;
    Vector3 target = source;
    target.y += this.height;
    yield return StartCoroutine(this.Move(source, target));
    yield return StartCoroutine(this.Move(target, source));
    this._status = Status.None;
}
```

With the jump behavior written, we provide a function for external calls: when called, it first checks whether a jump is in progress, and only if not does it perform the jump.

```csharp
public void Jump()
{
    if (this._status == Status.None) StartCoroutine(this.DoJump());
}
```

With this, SphereJump is complete. It does not use any UnityEvent and is mainly responsible for performing the action when called. Of course, basic events could be added as needed, for example jump started, jumping, and jump ended, but they are not used in this demonstration, so they are omitted here.

The following is the content of SphereJump.cs:

```csharp
using UnityEngine;
using System.Collections;

public class SphereJump : MonoBehaviour
{
    private enum Status{ None, Moving }

    [SerializeField] private float height = 1;
    [SerializeField] private float speed = 5;

    private Status _status = Status.None;

    public void Jump()
    {
        if (this._status == Status.None) StartCoroutine(this.DoJump());
    }

    private IEnumerator Move(Vector3 source, Vector3 target)
    {
        float t = 0;
        while (t < 1){
            transform.position = Vector3.Lerp(source, target, t);
            t += Time.deltaTime * this.speed;
            yield return null;
        }
        transform.position = target;
    }

    private IEnumerator DoJump()
    {
        this._status = Status.Moving;
        Vector3 source = transform.position;
        Vector3 target = source;
        target.y += this.height;
        yield return StartCoroutine(this.Move(source, target));
        yield return StartCoroutine(this.Move(target, source));
        this._status = Status.None;
    }
}
```

Before writing SphereDiscolor, go back to the PassEvents Script file and declare a UnityEvent that can pass a color parameter.

```csharp
[System.Serializable]
public class PassColor : UnityEvent<Color> {}
```

In SphereDiscolor, first declare two fields to temporarily store the Material and the Material's current color, and a field for setting the sphere's default color. In Awake, obtain the sphere's own Material, store it, and change the sphere to the preset color you set.

```csharp
private Material _material;
private Color _color;

[SerializeField] private Color color = Color.white;

void Awake()
{
    this._material = GetComponent<Renderer>().material;
    this.DefaultColor();
}

public void DefaultColor()
{
    this._material.color = this.color;
    this._color = this.color;
}
```

Likewise, the function that restores the preset color is an independent function available for external calls.

Then, for changing the sphere's color, we declare both a function that changes it to a specified color and a function that changes it to a random color, both available for external calls.

A UnityEvent field that passes the color value is also declared here. Whenever the color changes, the event is invoked and the new color is passed out, so a color change on this sphere can trigger other behaviors and even supply a color that influences the triggered behavior.

```csharp
[SerializeField] private PassColor onChangeColor;

void Awake()
{
    this._material = GetComponent<Renderer>().material;
    this.DefaultColor();
}

public void DefaultColor()
{
    this._material.color = this.color;
    this._color = this.color;
}

public void Discolor(Color color)
{
    this._material.color = color;
    this._color = color;
    this.onChangeColor.Invoke(color);
}

public void RandomColor()
{
    this.Discolor(new Color(Random.value, Random.value, Random.value));
}
```

At this point, the programming part of this example pauses. Back in the Unity scene, add these three scripts as Components to each sphere.

You might ask why this is split into three Scripts instead of being written as one. Because Unity game objects are Component-oriented: whichever Component an object has, it has that function; remove the Component and the function is gone. So we separate the functions into individual Scripts, each providing only its own function without involving the others. When a sphere has the SphereTouch Component it can be clicked, and without it, it cannot; when it has SphereJump, it can jump. This lets us clearly swap Components on game objects to change their abilities.

So, once each sphere has the color-changing function, its preset color field changes the sphere to its designated initial color when Play Mode starts.

Then, once each sphere can be triggered, we can specify what behavior each sphere triggers when clicked. As shown in the video, we can specify that when a sphere is clicked, the next ball jumps up and the previous ball changes to a random color.

When the second ball setting is clicked, the third ball jumps up and the first ball changes color randomly.

When the last ball is clicked and triggered, the first four balls return to their original colors.

When the fifth ball is clicked, the first four balls change to the original color.

Then, we can also set the second ball to affect the color of other balls when its color is changed.

When the second ball changes color, let the first ball and the fifth ball change to random colors.

Here we once again see the benefit of Scripts that simply provide their own functions without specifying anyone else's behavior in code: behaviors can be changed freely in the editor, with great functional flexibility.

Although UnityEvent can pass a fixed value specified in the Inspector window to a Component's function, and can also be executed from code via Invoke with parameters passed in, it cannot return data the way an ordinary Method call does, which seems a small fly in the ointment.

Next, let's discuss how to let UnityEvent bring data back. The principle is to pass a reference-type object between the parameters instead of a plain value. We instantiate a reference-type object and pass it in the UnityEvent's parameters; when the function executed by the UnityEvent changes the data held by that object, the party that invoked the event can read the changed data from the same object.

However, declaring many different classes just to bring back different types of data seems too troublesome, so it is better to make one general-purpose class to serve as a holder object for transferring data. We create a C# Script called PassHolder specifically for this.

```csharp
public class PassHolder
{
    public object value { set; private get; }

    public T GetValue<T>()
    {
        if (this.value == null) return default(T);
        return (T)this.value;
    }
}
```

Because all types inherit directly or indirectly from Object (here meaning System.Object, not Unity's Object), a property declared as object can uniformly receive values of any type, and a generic method is used to retrieve the stored data. Here there is just a simple check for whether data exists: if not, the default value of the expected type is returned. In this way, objects of this class can hold and return data of any type.
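A quick usage sketch of the holder (note that GetValue only falls back to the default when nothing was stored; requesting a different type than what was stored would throw an invalid-cast error):

```csharp
PassHolder holder = new PassHolder();
holder.value = Color.red;                 // box and store any value

Color stored = holder.GetValue<Color>();  // Color.red

// nothing stored: returns default(float), i.e. 0
float missing = new PassHolder().GetValue<float>();
```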

With such a class, we can experiment on SphereDiscolor by adding a color-swap function. Before declaring the UnityEvent event field that passes a PassHolder, go back to the PassEvents Script file and add a type that can pass two parameters: a Color and a PassHolder.

```csharp
[System.Serializable]
public class PassColorReturn : UnityEvent<Color, PassHolder> {}
```

What this function does: when called, it sends its own color out through the swap-color event, brings back the other party's color through the PassHolder, and uses it to change its own color.

In our current example, the thing that can change color is SphereDiscolor, so the extra color-changing function we add to it must accept not only the target color but also the PassHolder object. Besides changing its own color, it must also write its original color into the PassHolder, so that the party performing the color swap receives the color value to be brought back.

```csharp
[SerializeField] private PassColorReturn onSwapColor;

public void SwapColor()
{
    PassHolder holder = new PassHolder();
    // Send out our color; the target writes its own color into the holder
    this.onSwapColor.Invoke(this._color, holder);
    this.Discolor(holder.GetValue<Color>());
}

public void Discolor(Color color, PassHolder holder)
{
    // Hand our current color back to the caller before changing it
    holder.value = this._color;
    this.Discolor(color);
}
```

Once that is done, in the Inspector window of the Unity editor we can directly specify, in the On Swap Color event field, which ball to swap colors with.

When the fourth ball is clicked, the fifth ball jumps up, the third ball changes color, and the fourth ball performs a color swap with the first ball.

In this way, several short pieces of code only define their own functions and when to invoke their events, without specifying the types and functions they affect; in the Inspector window, GameObjects with the same Components can be flexibly configured with completely different behaviors. The program cannot break because of mismatched types or an inconsistent number of parameters, so execution and design become more flexible and stable, the code stays more concise with clearer logic, and maintenance and flow adjustments become more visual and clearer.

The article and video "Unity: Using UGUI's ScrollRect to Make a Virtual Joystick" were published earlier; there, the event that transmits the virtual joystick's operation is an application of UnityEngine.Events. Make good use of these practices, and the reusability and scalability of the programs you write will improve greatly.

Well, this is the end of the instructions and demonstrations about UnityEngine.Events. If you like this article or the video, please share it with your friends, and don't forget to subscribe to the video channel and like the fan page. Thank you!

UnityEvent official documentation:

docs.unity3d.com/ScriptRefer...

Current version Unity 5.2.1f1