Revolutionize your iPhone and iPad game development with Unity iOS, a fully integrated professional application and powerful game engine that is quickly becoming the best solution for making the creation of visually stunning games for Apple's iDevices easier and more fun for artists. From concept to completion, you'll learn to create and animate using modo and Blender, as well as create a full level utilizing the powerful toolset in Unity iOS as it specifically relates to iPhone and iPad game development.
Follow the creation of "Tater," a character from the author's personal game project "Dead Bang," as he's used to explain vital aspects of game development and content creation for the iOS platform. Creating 3D Game Art for the iPhone focuses on the key principles of game design and development by covering the iDevice hardware in depth, in conjunction with Unity iOS, and how it relates to creating optimized game assets for the iDevices.
The technical understanding behind creating game assets involves more than just modeling low-resolution geometry. Before we can get into actual modeling or creating texture maps, we'll first need a solid understanding of the hardware and game engine the content will run on. Each platform or device has its own limitations, and what runs well on one platform may not run as well on another. For instance, the faster processor in the iPhone 4 or iPad may, in some cases, handle processing draw calls better than the slower processor in the iPhone 3GS. Another good example is that although the iPad has a 400 MHz boost in chip performance, the lack of an upgraded GPU introduces new performance bottlenecks to be aware of.

This is where the project's "game budget" comes into play. Your game budget is the blueprint or guide through which your game content is created. There are three specifications to be aware of when evaluating the hardware of the iPhone and iPad: memory bandwidth, polygon rate, and pixel fill rate. For instance, if you have a fast-paced game concept, you'll need to denote a high frame rate in your game budget, such as 30–60 frames per second (fps), and all of the content you create must be optimized to allow the game to meet this frame rate budget. You'll also need to take into consideration that with the iPad, you're essentially rendering to a pixel count that is over five times that of the iPhone 3GS and around 1.2 times that of the iPhone 4 screen, which results in 1.2–5 times the amount of data being processed per frame.

Without a solid understanding of the capabilities and limitations of the device your game is targeting, you won't know which optimizations to the game content will be needed to achieve your game's budgeted frame rate. Essentially, it would be like working in the dark.
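The resolution ratios quoted above are easy to verify from the published screen specs. Here is a minimal Python sketch; the device table and helper functions are my own, not from the book:

```python
# Per-frame pixel counts for the devices discussed above. The ratios show
# why the iPad pushes over five times the pixels of the 3GS and roughly
# 1.2 times the pixels of the iPhone 4.
SCREENS = {
    "iPhone 3GS": (480, 320),
    "iPhone 4": (960, 640),
    "iPad": (1024, 768),
}

def pixel_count(device):
    """Total pixels rendered per frame for a device's screen."""
    w, h = SCREENS[device]
    return w * h

def pixel_ratio(a, b):
    """How many times more pixels device `a` renders than device `b`."""
    return pixel_count(a) / pixel_count(b)

if __name__ == "__main__":
    print(pixel_count("iPad"))                          # 786432
    print(round(pixel_ratio("iPad", "iPhone 3GS"), 2))  # 5.12
    print(round(pixel_ratio("iPad", "iPhone 4"), 2))    # 1.28
```

The exact figures (5.12x and 1.28x) line up with the book's "over five times" and "around 1.2 times."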
The goal of this chapter is to turn the lights on, so to speak, by familiarizing you with the iPhone and iPad hardware as well as taking a look under the hood of the Unity iOS game engine. By familiarizing ourselves with the different iDevices and what's going on under the hood of Unity iOS, we can understand the "why" behind building optimized content for these devices. We will also be able to properly determine the frame rate, poly-count, and texture size budgets in our overall game budget. That budget is determined by a balance between the type of game you're creating and the targeted frame rate, all of which is ultimately controlled by the hardware's memory bandwidth, polygon rate, and pixel fill rate.
In this section, we're going to discuss the hardware for the iPhone and iPad, and at this point, I'd like to make a couple of distinctions between the device models. Throughout the book, I will refer to the term "iDevices" to encompass all of the devices, i.e., iPhone, iPad, and iPod Touch. The term "iOS" is Apple's official name for the OS or operating system common to all of the iDevices. I'd also like to add that this book will not be covering the iPhone 3G or the iPod Touch second generation and below. As of the writing of this book, these devices are second- and third-generation devices, and I wanted to concentrate on the most current devices. I'll break this section into two categories, which are the ARM central processing unit (CPU) and PowerVR SGX graphics processing unit (GPU). As we cover the iDevice hardware, we'll also discuss how these categories relate to Unity iOS.
The CPU is the central processing unit, and the iDevices use the ARM architecture with the Cortex-A8 core at version ARMv7-A; from an artist's perspective, the CPU handles calculations. In Fig. 1.1, you can see a breakdown of the hardware differences in each of the iDevices. Each model contains different or updated hardware that can affect your game's performance, such as how the hardware affects pixel fill rate.
Both the iPad and iPhone 4 contain the A4 processor. The iPhone 3GS and third-generation iPod Touch both use an ARM Cortex-A8 that has been under-clocked to 600 MHz. As far as performance across these three devices goes, you can say as a basic rule that the iPad is the fastest in terms of processing, followed by the iPhone 4, whose A4 is under-clocked, and finally, not far behind at all, the 3GS. Again, I stress that this is a very basic rule, and your content will really drive these results in terms of how pixel fill rate and polygon throughput affect your game. Profiling your game on the devices with your specific content is the safest and most accurate way to gauge performance, but it can be helpful to have general ideas about the device capabilities in the early stages of development.
There are many aspects to how the CPU affects your Unity iOS powered game. For instance, the CPU also processes scripts and physics calculations, as well as running the entire OS and other programs. Since this book is about creating game art, the next subsections focus on the game's art content, our game objects, and the operations that matter to the CPU in these terms.
The draw call can be thought of as a "request" to the GPU to draw the objects in your scene, and it can be the area in which the CPU causes a performance bottleneck. As we'll discuss in the GPU section, the iPhone and iPad use OpenGL ES 2.0 (OGLES), emulating OGLES 1.1 shaders on the hardware level, and with this implementation, vertex data is copied for each draw call on every frame of the game loop. The vertex data is the vertices that make up our 3D objects and the information attached to each vertex, such as position, normal, and UV data, just like a 3D mesh's vertices in modo have positional, normal, and UV coordinate data.
The vertex data is transformed, or moved in 3D space, on the CPU. The result of this transformation is appended to an internal vertex buffer, which is like a big list or container that holds all of the vertex data. Since the vertex data is copied on the CPU, this takes up around one-third of the frame time on the CPU side and wastes memory bandwidth, because the iPhone shares its memory between the CPU and GPU. On the iPhone and iPad, we need to pay close attention to the vertex count of our objects and keep this count as low as possible, since the vertex count is more important than the actual triangle count. In Chapter 2, we'll talk about how to determine the maximum number of vertices you can render per individual frame.
I used to get confused by the "per frame" section of that statement. It helped me as a 3D artist to think about my game scene just like a scene in modo. For instance, if I have an animation setup in modo, the render camera will render the portion of that scene that the camera can see in its view frustum as set in the camera's properties, for each frame of the animation. The same is true in Unity iOS. In Fig. 1.2, you can see an illustration that depicts the way in which I visualize a scene's total vertex count per frame.
With each frame of the game loop, only a certain number of vertices are visible within the camera's view frustum, and within a given frame, we should keep the vertex count for all of the objects in the scene to around 10k. Now, this is a suggestion as to what works best on the iDevice hardware; depending on your game and game content, this could possibly be pushed. The point is that with game development, there aren't any absolute answers when it comes to optimization. You have to optimize content to your game's performance standards, i.e., your frame rate budget. There are a lot of techniques for optimizing our vertex count, as we'll discuss in the modeling chapters, and there are also rendering optimizations for the camera's view, such as occlusion culling, for controlling which vertices are sent to the vertex buffer.
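To get a feel for what the ~10k-vertex guideline costs in memory traffic, here is a back-of-the-envelope Python sketch. The per-vertex byte sizes assume a typical layout (float3 position, float3 normal, float2 UV); the real layout depends on the mesh and shader, so treat the numbers as illustrative only:

```python
# Rough cost of copying vertex data into the internal buffer each frame.
# Assumed per-attribute sizes (4-byte floats):
BYTES_POSITION = 3 * 4  # float3
BYTES_NORMAL   = 3 * 4  # float3
BYTES_UV       = 2 * 4  # float2
BYTES_PER_VERTEX = BYTES_POSITION + BYTES_NORMAL + BYTES_UV  # 32 bytes

def vertex_data_per_frame(vertex_count, bytes_per_vertex=BYTES_PER_VERTEX):
    """Bytes of vertex data copied per frame for the visible vertices."""
    return vertex_count * bytes_per_vertex

def bandwidth_per_second(vertex_count, fps):
    """Bytes per second of copying at a given frame rate."""
    return vertex_data_per_frame(vertex_count) * fps

if __name__ == "__main__":
    # 10k visible vertices at a 30 fps budget:
    print(vertex_data_per_frame(10_000))     # 320000 bytes (~312 KB) per frame
    print(bandwidth_per_second(10_000, 30))  # 9600000 bytes (~9.2 MB) per second
```

Since the CPU and GPU share memory on these devices, every one of those bytes competes with the rest of the game for bandwidth, which is why the vertex count matters more than the triangle count.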
It's obvious that the faster the CPU, the faster the calculations are going to be. However, just because you have more power doesn't necessarily mean you should throw more vertices to the system without regard to other performance considerations such as pixel fill rate as we'll discuss later in this chapter.
Batching is a means to automatically reduce draw calls in your scene. There are two methods in Unity iOS: dynamic and static batching. It's important to note that draw calls are a performance bottleneck that can be largely dependent on the CPU. Draw calls are generated each time the CPU needs to send data to the GPU for rendering. The GPU is very fast at processing large amounts of data; however, it isn't very good at switching what it's doing on a per-frame basis. This is why batching helps: it sends a large amount of data to be processed at one time. That said, it's always best to test both the CPU and GPU when determining bottlenecks, because if your device has fill rate issues, as can happen on the iPad or iPhone 4, the draw call bottleneck can shift to the GPU.
Dynamic Batching
Here is how dynamic batching works at run time.
1. Group visible objects in the scene by material and sort them.
a. If the objects in this sorted group have the same material, Unity iOS will then apply transformations to every vertex on the CPU. Setting the transform is not done on the GPU.
b. Append the results to a temporary internal dynamic vertex buffer.
2. Set the material and shader for the group only once.
3. Draw the combined geometry only once.
Both the vertex transformation and draw calls are taxing on the CPU side. The single instruction, multiple data (SIMD) coprocessor found on the iDevices supports vector floating point (VFP) extension of the ARM architecture, which handles the vertex transformations, thanks to some optimized routines in Unity iOS written to take advantage of the VFP coprocessor. The VFP coprocessor is actually working faster than the GPU and thus is used by Unity iOS to gain performance in batching objects to reduce draw calls.
The point to focus on regarding the VFP and Unity iOS is the rule that governs dynamic batching: as long as it takes less time to apply the vertex transformations on the CPU than to issue an additional draw call, it's better to batch. In practice, as long as an object is smaller than 300 vertices, it will be less expensive to transform its vertices on the CPU, and thus it will be batched. Any object over this 300-vertex limit is simply quicker to draw and won't be batched or handled by the VFP coprocessor routines mentioned above.
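The batching rule above can be sketched as a draw-call planner in Python. This is a simplified model of the behavior the book describes (group by material, batch only objects under the 300-vertex limit); the object and material names are hypothetical, and real Unity applies additional conditions:

```python
# Sketch of the dynamic-batching rule: objects sharing a material are
# grouped, and an object is only batched (vertices transformed on the
# CPU/VFP) if it is under the 300-vertex limit stated in the text.
from collections import defaultdict

DYNAMIC_BATCH_VERTEX_LIMIT = 300

def plan_draw_calls(objects):
    """objects: list of (name, material, vertex_count) tuples.
    Returns the number of draw calls after dynamic batching."""
    batches = defaultdict(list)  # material -> names of batched objects
    draw_calls = 0
    for name, material, verts in objects:
        if verts < DYNAMIC_BATCH_VERTEX_LIMIT:
            batches[material].append(name)  # transformed on CPU, merged
        else:
            draw_calls += 1                 # too big: drawn individually
    draw_calls += len(batches)              # one call per batched material group
    return draw_calls

if __name__ == "__main__":
    scene = [
        ("crate_a", "wood", 120), ("crate_b", "wood", 120),
        ("barrel", "metal", 250), ("tater", "character", 900),
    ]
    # Two wood crates collapse into one call; the barrel gets its own
    # batch; the 900-vertex character is drawn individually.
    print(plan_draw_calls(scene))  # 3
```

Without batching, the hypothetical scene above would cost four draw calls; the two crates sharing a material collapse into one.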
While we are talking about the VFP coprocessor in the iDevices, I should mention that Unity iOS also offloads skinning calculations to the VFP, which is much faster than utilizing GPU skinning. Unity has optimized bone weight skinning paths to take advantage of the VFP coprocessor as well.
Static Batching
Static batching is the other method Unity iOS uses to reduce draw calls. It works similarly to dynamic batching, the main differences being that you can't move the objects in the scene at run time and that there isn't a 300-vertex limit for objects that can be batched. Objects need to be marked as static in the Unity iOS Editor, which creates a vertex buffer for them. Static batching combines objects into one mesh but still treats those objects as separate: internally, it creates a shared mesh, and the objects in the scene point to this shared mesh. This allows Unity iOS to perform culling on the visible objects.
Here is how static batching works at run time.
1. Group visible objects in the scene by material and sort them.
a. Add triangle indices from the stored static vertex buffer of objects marked as static in the Unity iOS Editor to an internal index buffer. This index buffer contains much less data than the dynamic vertex buffer from dynamic batching, which causes it to be much faster on the CPU.
2. Set the material and shader for the group only once.
3. Draw the combined geometry only once.
Static batching works well for environment objects in your game.
Unity iOS now includes lightmapping and occlusion tools, which add to the conditions for static batching to work: objects must use the same material, be affected by the same set of lights, use the same lightmap, and have the same scale.
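The static-batching conditions listed above can be expressed as a simple predicate. The `SceneObject` fields below are my own simplification for illustration, not Unity's actual API:

```python
# Predicate form of the static-batching conditions: same material, same
# set of lights, same lightmap, same scale, and both objects marked static.
from dataclasses import dataclass

@dataclass
class SceneObject:
    material: str
    lights: frozenset   # names of lights affecting the object
    lightmap: str
    scale: tuple
    static: bool = True  # marked static in the editor

def can_batch_statically(a: SceneObject, b: SceneObject) -> bool:
    """True only if the two objects satisfy every batching condition."""
    return (a.static and b.static
            and a.material == b.material
            and a.lights == b.lights
            and a.lightmap == b.lightmap
            and a.scale == b.scale)

if __name__ == "__main__":
    wall = SceneObject("concrete", frozenset({"sun"}), "lm_0", (1, 1, 1))
    floor = SceneObject("concrete", frozenset({"sun"}), "lm_0", (1, 1, 1))
    big_wall = SceneObject("concrete", frozenset({"sun"}), "lm_0", (2, 2, 2))
    print(can_batch_statically(wall, floor))     # True
    print(can_batch_statically(wall, big_wall))  # False (different scale)
```

A check like this is a useful mental model when laying out environment props: one mismatched scale or lightmap silently breaks a batch.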
As we've discussed in this section, the iPhone and iPad CPU handles important aspects of your game in terms of how your game content is drawn, sending data to the GPU, and math calculation performance. Thus far, we've briefly touched on concepts such as batching and VFP skinning to familiarize you with the base architecture of the iDevices and Unity iOS. In the later chapters, we'll discuss in depth how to build optimized meshes in modo to reduce vertex count. We'll also thoroughly look at batching our objects by discussing what makes or breaks a batch, as well as how our game character's textures and materials relate to batching. In Chapter 5, we'll discuss the VFP-optimized paths for bone weighting and rigging as we set up Tater for animation using Blender.
The GPU is the graphics processing unit and handles the rendering of our scene. Between the different iPhone models, you'll find that they all use the same GPU, which is the PowerVR SGX535.
The SGX 535 supports OpenGL ES 2.0. With Unity iOS, iPhone developers can use both the 1.1 and 2.0 versions of OpenGL ES. What this means for developers is that we can utilize shaders that support a programmable pipeline; also, there's no need to convert a 1.1 shader to 2.0 every time the device encounters a new shader in the game. In Fig. 1.3, you can see the clock speed of the SGX 535 GPU and how this relates to pixel fill rate and triangle throughput. The fill rate is the number of pixels that can be drawn to the screen per second, and throughput is the number of triangles that can be processed per second.
As you can see in Fig. 1.3, the 3GS GPU can process 7 million triangles per second and around 2.5 pixels per second. However, as with the CPU, I'd also like to reiterate that although the SGX looks like a rendering beast, it doesn't mean you can throw everything but the kitchen sink at it without regard. Your game's performance isn't entirely dictated by CPU and GPU speeds. For instance, RAM and slow Flash memory can also be bottlenecks, especially when trying to load larger texture sizes such as 1024 x 1024.
It all comes down to a balance between the player experience in terms of frame rate and game graphics. You will always need to profile for performance to match your game budget. Faster hardware is a plus and allows you to do more, but you must remember that building for the cutting-edge hardware will inevitably alienate a good degree of your potential market due to users with older models.
The iPhone 4, 3GS, and iPad all have different screen resolutions, and if you want your graphics to look awesome, you'll need to build to match the resolution of each device. For instance, if you build your game graphics based on the 3GS screen at 480 x 320 and then scale them up to the iPad at 1024 x 768, your graphics and textures are going to look pretty ugly as they're stretched from the lower resolution to a higher one. In Fig. 1.4, you can see a menu from the book's demo app and how it was adapted for each of the screen resolutions across the iDevices.
The 3GS, iPhone 4, and iPad are all using the SGX 535 GPU; however, with the iPhone 4 and iPad, the GPU has to do more work to draw your game on the higher resolution screens. This can cause games that run well on the 3GS to drop in frame rate on the iPad and iPhone 4 in certain conditions. In Fig. 1.5, you can see the differences in resolution and how this relates to the GPU having to work harder on the iPhone 4 and iPad as it renders 4–5.1 times the screen real estate.
The fill rate is the number of pixels the GPU can render to the screen per second. On the iPhone 4 and iPad, you can experience a drop in frame rate when a transparent surface fills the screen. This is because on these devices, the GPU can be thought of as fill-rate limited, meaning that the GPU is maxed out and is the bottleneck. Again, this is because they have the same GPU as the 3GS but have to render 4–5.1 times the screen resolution. I ran into this very issue when creating the book's resource app, as we'll discuss in Chapter 4.
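A rough way to reason about being fill-rate limited is to ask: at a given fill rate, how many times can the GPU cover the whole screen in one frame? That number is the overdraw budget that layered transparent surfaces eat into. The fill-rate figure in the sketch below is a placeholder assumption for illustration only, not a measured spec; profile on device for real numbers:

```python
# How many full-screen passes fit in one frame's fill-rate budget.
def screen_fills_per_frame(fill_rate_pixels_per_sec, width, height, fps):
    """Full-screen coverages affordable per frame at the given fill rate."""
    pixels_per_frame_budget = fill_rate_pixels_per_sec / fps
    return pixels_per_frame_budget / (width * height)

if __name__ == "__main__":
    FILL_RATE = 250_000_000  # pixels/sec -- assumed value, illustration only
    # The same fill rate buys far less headroom on the iPad's screen than
    # on the 3GS's smaller screen:
    print(round(screen_fills_per_frame(FILL_RATE, 480, 320, 30), 1))   # 54.3
    print(round(screen_fills_per_frame(FILL_RATE, 1024, 768, 30), 1))  # 10.6
```

Whatever the true fill rate is, the ratio between the two results is fixed by the pixel counts: the iPad screen costs about 5.1 times as much of the budget per full-screen layer as the 3GS screen, which is exactly why full-screen transparency hurts there first.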
The SGX uses tile-based deferred (TBD) rendering. The concept behind a TBD renderer is to render only what is seen by the camera. By doing this, the renderer doesn't waste clock cycles and memory bandwidth figuring out the pixels of objects that are hidden behind other objects, which is referred to as Overdraw. To help visualize Overdraw, in Fig. 1.6 I've used the Overdraw viewport rendering setting in Unity iOS to showcase an example of how it relates to a scene. This setting isn't utilized for iPhone output, as the TBD renderer of the GPU keeps Overdraw low by rejecting occluded fragments before they are rasterized.
Excerpted from Creating 3D Game Art for the iPhone with Unity by Wes McDermott. Copyright © 2011 by Elsevier Inc. Excerpted by permission of Focal Press. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Creating 3D Game Art for the iPhone with Unity: Featuring modo and Blender Pipelines
Getting to know the iDevice Hardware and Unity
Working as a 3D artist
Fully Programmable Pipeline
Determining Your Game Budget
Frame Rate Budget
It All Sums Up
Planning the Vertex Budget
Creating a Performance Test Scene
Sizing Things Up
What is a Game Unit
Setting Up modo
Importing into Unity
True Vertex Count
Modeling Tater and Thumper
Polygon Flow and Spin Quads
Merging Polygons and Removing Edges
Understanding Textures and UV Maps
Creating UV Maps
Planning Your UV Maps
Creating UVs for Tater
Creating UVs for Thumper
Fundamentals of Game Textures
Power of 2 (POT) and Non-Power of 2 (NPOT)
Texture Compression - PVRTC
Using Mip maps
Creating the Diffuse Map
Faking Light and Shadow
Unity 3 OpenGL ES 2.0 Shaders
Building Up Volume Through Shading
Adding Some Wear and Tear
Creating Game Objects using modo - Part Two: Training Trash Yard
Creating a Style
What can reasonably be accomplished?
Breaking Down into Sections
Optimize Batching and Rendering
Creating the Level
Determining the Vertex Count
Using Texture Atlas
Building on the Grid
Does it make sense to use the grid?
Determining Working Texture Size
Creating the Ground
Creating the Walls
Creating the Props
Creating a Skybox
Creating the Trash Heap
Texturing the Level
Measuring the Scene
Creating the Textures
Animation using Blender - Part One: Rigging Tater
Matching Object Size
Unity Blender Support and FBX Workflow
Understanding Skinned Meshes within Unity
VFP Optimized Skinning
Optimized Skinning Paths
Rigging Tater in Blender
iPhone and iPad Rigging Essentials
Creating the Basic Skeleton
Adjusting Bone Rotations
Fixing the Pelvis Bone for Hip Sway
Disabling Inherit Rotation
Animation using Blender - Part Two: IK and Basic Animation
Completing the Rig: Adding IK
Setting up IK for Legs
Setting up IK for Arms
Tidying Things Up
Creating a Master Control
Using Bone Layers
Tweaking Weight Maps
Multiple Clips per File vs. One Clip per File
Option One: Multiple Clips
Option Two: One Clip
How Animation Data Works in Unity
Names Are Matched to Relative Animation Component
Animating in Blender
Creating an Animation Cycle
Exporting the FBX
Make sure frame range is set correctly
Disable Optimize Keyframes and Selected Objects
Animation using Blender - Part Three: Advanced Animation
Unity's Animation System
Creating Animations using Blender's NLA Editor
Using the NLA Editor
Blending Animations in Unity
Using Dynamics in Blender to Animate Objects
Setting Up Rigid Body Dynamics in Blender
Importing into Blender
Setting Up the Logic
Baking the Simulation
Editing the Animation with the NLA Editor
Creating Lightmaps using Beast
Beast and HDR
Correct Lightmap UVs
Using Dual Lightmaps
Beast Custom Settings
Color Grading Maps
Adding Grime to Maps
Working with Game Assets in Unity
Creating Prefab for Environment Props
Camera Control Kit
Setting Up Colliders
Using Physics in Unity
Setting Up the Target Object
Optimizing Physics using the Time Manager
Publishing your Game
Optimizing your Game
Tuning Main Loop Performance
Tuning accelerometer processing frequency