= A Casual Introduction to the Marching Cubes Method
== What is the Marching Cubes method?
=== History and overview
The marching cubes method is a volume rendering technique: an algorithm that converts 3D voxel data filled with scalar values into polygon data. The original paper was published in 1987 by William E. Lorensen and Harvey E. Cline.
The marching cubes method was patented, but the patent expired in 2005, so the algorithm is now freely usable.
=== Explanation of simple mechanism
First, the volume data space is divided into three-dimensional grids.
//image[marching_cubes_001@4x][3D volume data and grid division]{
//}
Next, take one of the divided grid cells. Each of the cell's eight corner values is classified as 1 if it is greater than or equal to the threshold, and 0 if it is less than the threshold. @<br>{}
The figure below shows the flow when the threshold is set to 0.5.
//image[marching_cubes_002@4x][Determine boundaries according to corner values]{
//}
There are 256 possible combinations of the eight corner states, but by making full use of rotation and mirroring they reduce to 15 distinct cases. A triangle polygon pattern is assigned to each of the 15 cases.
//image[marching_cubes_003@4x][Combination of corners]{
//}
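The corner-classification step above can be sketched in a few lines of Python. This is illustrative only; the sample project does this in the shader, and (as shown later) uses the opposite <= comparison, which merely relabels the same 256 cases.

```python
def case_index(corner_values, threshold=0.5):
    """Pack the inside/outside state of 8 corners into an 8-bit case index."""
    index = 0
    for i, v in enumerate(corner_values):
        if v >= threshold:  # corner counts as 1 when at/above the threshold
            index |= 1 << i
    return index

# All corners below the threshold -> case 0 (empty cell, no polygons)
print(case_index([0.1] * 8))          # 0
# All corners at/above the threshold -> case 255 (full cell, no polygons)
print(case_index([0.9] * 8))          # 255
# Only corner 0 above the threshold -> case 1
print(case_index([0.9] + [0.1] * 7))  # 1
```

Cases 0 and 255 produce no triangles; the other 254 indices select one of the 15 base patterns (up to rotation and mirroring).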
== Sample repository
The sample project explained in this chapter is part of the Unity Graphics Programming Unity project at https://github.com/IndieVisualLab/UnityGraphicsProgramming, under Assets/GPUMarchingCubes.
The implementation is a port to Unity based on Paul Bourke's Polygonising a scalar field@<fn>{paul}.
//footnote[paul][Polygonising a scalar field http://paulbourke.net/geometry/polygonise/]
This chapter walks through that sample project.
There are three major parts to the implementation:
* Initialization of mesh, drawing registration process for each frame (C# script part)
* ComputeBuffer initialization
* Actual drawing process (shader part)
First is the mesh initialization and per-frame draw registration, built in the @<b>{GPUMarchingCubesDrawMesh} class.
=== Creating a mesh for the geometry shader
As explained in the previous section, the marching cubes method generates polygons from the states of the eight corners of each grid cell.
To do that in real time, polygons must be created dynamically.@<br>{}
However, generating the mesh's vertex array on the CPU side (the C# side) every frame is inefficient.@<br>{}
So we use a geometry shader. Roughly speaking, a geometry shader sits between the vertex shader and the fragment shader, and can add or remove vertices processed by the vertex shader. @<br>{}
For example, a quad can be generated by emitting six vertices around a single input vertex.@<br>{}
Furthermore, it is very fast because the work happens on the shader (GPU) side.@<br>{}
This time we use the geometry shader to generate and display the marching cubes polygons.
First, define the variables used in the @<b>{GPUMarchingCubesDrawMesh} class.
//list[kaiware_define_cs][Definition part of variable group][cs]{
using UnityEngine;
public class GPUMarchingCubesDrawMesh : MonoBehaviour {
#region public
public int segmentNum = 32; // The number of divisions on one side of the grid
[Range(0,1)]
public float threashold = 0.5f; // Threshold of scalar value to mesh
public Material mat; // Material for rendering
public Color DiffuseColor = Color.green; // Diffuse color
public Color EmissionColor = Color.black; // Emitting color
public float EmissionIntensity = 0; // Luminous intensity
[Range(0,1)]
public float metallic = 0; // Metallic
[Range(0, 1)]
public float glossiness = 0.5f; // Glossiness
#endregion
#region private
int vertexMax = 0; // Number of vertices
Mesh[] meshs = null; // Mesh array
Material[] materials = null; // Material array for each mesh
float renderScale = 1f / 32f; // Display scale
MarchingCubesDefines mcDefines = null; // MarchingCubes Constant array group
#endregion
}
//}
Next, create the mesh to pass to the geometry shader. One vertex is placed in each cell of the divided 3D grid.
For example, with 64 divisions per side, 64*64*64 = 262,144 vertices are required.
However, in Unity 2017.1.1f1 the maximum number of vertices in a single mesh is 65,535.
The vertices are therefore split across meshes so that each mesh stays within 65,535.
//list[kaiware_create_mesh_cs][Mesh creation part][cs]{
void Initialize()
{
vertexMax = segmentNum * segmentNum * segmentNum;
Debug.Log("VertexMax " + vertexMax);
// Divide the size of 1 Cube by segmentNum to determine the size for rendering
renderScale = 1f / segmentNum;
CreateMesh();
// Initialization of constant array for Marching Cubes used in shader
mcDefines = new MarchingCubesDefines();
}
void CreateMesh()
{
// Since the upper limit of the number of vertices of the mesh is 65535, divide the mesh
int vertNum = 65535;
int meshNum = Mathf.CeilToInt((float)vertexMax / vertNum); // Number of meshes to split
Debug.Log("meshNum " + meshNum );
meshs = new Mesh[meshNum];
materials = new Material[meshNum];
// Bounds calculation for the mesh
Bounds bounds = new Bounds(
transform.position,
new Vector3(segmentNum, segmentNum, segmentNum) * renderScale
);
int id = 0;
for (int i = 0; i < meshNum; i++)
{
// Create vertices
Vector3[] vertices = new Vector3[vertNum];
int[] indices = new int[vertNum];
for(int j = 0; j < vertNum; j++)
{
vertices[j].x = id % segmentNum;
vertices[j].y = (id / segmentNum) % segmentNum;
vertices[j].z = (id / (segmentNum * segmentNum)) % segmentNum;
indices[j] = j;
id++;
}
// Create the mesh
meshs[i] = new Mesh();
meshs[i].vertices = vertices;
// Mesh Topology is Points because polygons are created with Geometry Shader
meshs[i].SetIndices(indices, MeshTopology.Points, 0);
meshs[i].bounds = bounds;
materials[i] = new Material(mat);
}
}
//}
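The vertex math in CreateMesh() can be checked with a small Python sketch. It mirrors the modulo/division decode from the listing above; the numbers are illustrative.

```python
import math

def mesh_count(vertex_max, vert_per_mesh=65535):
    # Number of meshes needed to stay under Unity's per-mesh vertex limit
    return math.ceil(vertex_max / vert_per_mesh)

def id_to_grid(i, n):
    # Same decode as in CreateMesh(): x varies fastest, then y, then z
    return (i % n, (i // n) % n, (i // (n * n)) % n)

print(mesh_count(64 ** 3))  # 5 meshes for a 64^3 grid (262,144 vertices)
print(id_to_grid(0, 32))    # (0, 0, 0)
print(id_to_grid(33, 32))   # (1, 1, 0)
```

Note that 4 * 65,535 = 262,140 falls just short of 262,144, which is why a 64-division grid needs 5 meshes.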
=== Initializing the ComputeBuffers
The source file @<b>{MarchingCubesDefines.cs} defines the constant arrays used for rendering with the marching cubes method, along with the ComputeBuffers that pass those arrays to the shader.
A ComputeBuffer is a buffer holding data used by shaders. Because the data resides in GPU-side memory, shader access is fast.
It would actually be possible to define these constant arrays on the shader side.
However, they are initialized on the C# side because a shader can hold at most 4,096 literal values (values written directly in the source). Defining a huge constant array in the shader would quickly hit that limit.
Storing the constants in a ComputeBuffer and passing them in means they are not literals, so the limit is never hit.
This adds a little overhead, but that is why the constant arrays are stored in ComputeBuffers on the C# side and passed to the shader.
//list[kaiware_computebuffer_init][ComputeBuffer initialization][cs]{
void Initialize()
{
vertexMax = segmentNum * segmentNum * segmentNum;
Debug.Log("VertexMax " + vertexMax);
// Divide the size of 1 Cube by segmentNum to determine the size for rendering
renderScale = 1f / segmentNum;
CreateMesh();
// Initialization of constant array for Marching Cubes used in shader
mcDefines = new MarchingCubesDefines();
}
//}
In the Initialize() function above, MarchingCubesDefines is initialized.
=== Rendering
Next is the function that issues the rendering. @<br>{}
This time we use Graphics.DrawMesh() so that multiple meshes can be rendered and lit by Unity's lighting. The meaning of DiffuseColor and the other public variables is explained on the shader side.
The ComputeBuffers of the MarchingCubesDefines class from the previous section are passed to the shader with material.SetBuffer().
//list[kaiware_rendermesh][Rendering part][cs]{
void RenderMesh()
{
Vector3 halfSize = new Vector3(segmentNum, segmentNum, segmentNum)
* renderScale * 0.5f;
Matrix4x4 trs = Matrix4x4.TRS(
transform.position,
transform.rotation,
transform.localScale
);
for (int i = 0; i < meshs.Length; i++)
{
materials[i].SetPass(0);
materials[i].SetInt("_SegmentNum", segmentNum);
materials[i].SetFloat("_Scale", renderScale);
materials[i].SetFloat("_Threashold", threashold);
materials[i].SetFloat("_Metallic", metallic);
materials[i].SetFloat("_Glossiness", glossiness);
materials[i].SetFloat("_EmissionIntensity", EmissionIntensity);
materials[i].SetVector("_HalfSize", halfSize);
materials[i].SetColor("_DiffuseColor", DiffuseColor);
materials[i].SetColor("_EmissionColor", EmissionColor);
materials[i].SetMatrix("_Matrix", trs);
Graphics.DrawMesh(meshs[i], Matrix4x4.identity, materials[i], 0);
}
}
//}
== Calling the functions
//list[kaiware_update][Call part][cs]{
// Use this for initialization
void Start ()
{
Initialize();
}
void Update()
{
RenderMesh();
}
//}
Initialize() is called in Start() to generate the mesh, and RenderMesh() is called in Update() to render.@<br>{}
RenderMesh() is called in Update() because Graphics.DrawMesh() does not draw immediately; it effectively registers the mesh to be rendered once.@<br>{}
Because the mesh is registered this way, Unity applies lights and shadows to it. There is a similar function, Graphics.DrawMeshNow(), but it draws immediately, so Unity's lights and shadows do not apply; it also needs to be called from OnRenderObject(), OnPostRender(), and the like, rather than from Update().
== Shader side implementation
The shader is roughly divided into two parts: the @<b>{"entity rendering part"} and the @<b>{"shadow rendering part"}.
Each part runs three shader functions: a vertex shader, a geometry shader, and a fragment shader.
Since the shader source is long, refer to the sample project for the full implementation; only the essential points are explained here.
The shader file discussed is GPUMarchingCubesRenderMesh.shader.
=== Variable declaration
The upper part of the shader defines the structures used for rendering.
//list[kaiware_shader_struct_define][Structure definitions][cs]{
// Vertex data coming from the mesh
struct appdata
{
float4 vertex : POSITION; // Vertex coordinates
};
//Data passed from the vertex shader to the geometry shader
struct v2g
{
float4 pos : SV_POSITION; // Vertex coordinates
};
// Data passed from the geometry shader to the fragment shader during entity rendering
struct g2f_light
{
float4 pos : SV_POSITION; // Local coordinates
float3 normal : NORMAL; // Normal
float4 worldPos : TEXCOORD0; // World coordinates
half3 sh : TEXCOORD3; // SH
};
// Data to pass from the geometry shader to the fragment shader when rendering shadows
struct g2f_shadow
{
float4 pos : SV_POSITION; // Coordinates
float4 hpos : TEXCOORD1;
};
//}
Next we define the variables.@<br>{}
//list[kaiware_shader_arguments_define][Variable definition part][cs]{
int _SegmentNum;
float _Scale;
float _Threashold;
float4 _DiffuseColor;
float3 _HalfSize;
float4x4 _Matrix;
float _EmissionIntensity;
half3 _EmissionColor;
half _Glossiness;
half _Metallic;
StructuredBuffer<float3> vertexOffset;
StructuredBuffer<int> cubeEdgeFlags;
StructuredBuffer<int2> edgeConnection;
StructuredBuffer<float3> edgeDirection;
StructuredBuffer<int> triangleConnectionTable;
//}
The variables defined here receive their values from the material.SetXxx() calls in the RenderMesh() function on the C# side.
The ComputeBuffers of the MarchingCubesDefines class appear here with the type name StructuredBuffer<T>.
=== Vertex shader
The vertex shader is pretty simple, as most of the processing is done by the geometry shader. It simply passes the vertex data passed from the mesh directly to the geometry shader.
//list[kaiware_shader_vertex][Implementation part of vertex shader][cs]{
// Vertex data coming from the mesh
struct appdata
{
float4 vertex : POSITION; // Vertex coordinates
};
// Data passed from the vertex shader to the geometry shader
struct v2g
{
float4 pos : SV_POSITION; // Coordinates
};
// Vertex shader
v2g vert(appdata v)
{
v2g o = (v2g)0;
o.pos = v.vertex;
return o;
}
//}
Incidentally, the same vertex shader is shared by the entity pass and the shadow pass.
=== Entity geometry shader
Since it is long, it is explained in pieces.
//list[kaiware_shader_geometry_1][Function declaration part of geometry shader][cs]{
// Entity geometry shader
[maxvertexcount(15)] // Definition of maximum number of vertices output from shader
void geom_light(point v2g input[1],
inout TriangleStream<g2f_light> outStream)
//}
First, the declarative part of the geometry shader.
#@#//emlist[][cs]{
#@#[maxvertexcount(15)]
#@#//}
@<code>{[maxvertexcount(15)]} defines the maximum number of vertices the shader outputs. In this marching cubes implementation, up to 5 triangle polygons can be created per grid cell, so at most 3*5 = 15 vertices are output.@<br>{}
Therefore 15 goes in the parentheses of maxvertexcount.
//list[kaiware_shader_geometry_2][Scalar value acquisition part of the eight corners of the grid][cs]{
float cubeValue[8]; // Array for getting scalar values of the eight corners of the grid
// Get the scalar values of the eight corners of the grid
for (i = 0; i < 8; i++) {
cubeValue[i] = Sample(
pos.x + vertexOffset[i].x,
pos.y + vertexOffset[i].y,
pos.z + vertexOffset[i].z
);
}
//}
pos contains the coordinates of the vertex that was placed in grid space when the mesh was created. vertexOffset is, as the name implies, an array of offset coordinates added to pos.
This loop samples the scalar value from the volume data at each of the eight corners of one grid cell.
vertexOffset encodes the ordering of the cell's corners.
//image[marching_cubes_005@4x][Order of the coordinates of the corners of the grid]{
//}
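For reference, the corner ordering can be sketched in Python. The offsets below follow Paul Bourke's reference tables; treat the exact values as an assumption and consult the vertexOffset buffer in MarchingCubesDefines.cs for the values the sample actually uses.

```python
# Corner offsets of a unit cube, in the order used by Paul Bourke's
# reference tables (assumed here for illustration)
VERTEX_OFFSET = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
]

def corner_position(pos, i):
    # Corner i of the cell whose minimum corner is pos
    return tuple(p + o for p, o in zip(pos, VERTEX_OFFSET[i]))

print(corner_position((4, 2, 7), 6))  # (5, 3, 8)
```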
//list[kaiware_shader_sampling][Sampling function part][cs]{
// Sampling function
float Sample(float x, float y, float z) {
// Are the coordinates outside the grid space?
if ((x <= 1) ||
(y <= 1) ||
(z <= 1) ||
(x >= (_SegmentNum - 1)) ||
(y >= (_SegmentNum - 1)) ||
(z >= (_SegmentNum - 1))
)
return 0;
float3 size = float3(_SegmentNum, _SegmentNum, _SegmentNum);
float3 pos = float3(x, y, z) / size;
float3 spPos;
float result = 0;
// Distance function of three spheres
for (int i = 0; i < 3; i++) {
float sp = -sphere(
pos - float3(0.5, 0.25 + 0.25 * i, 0.5),
0.1 + (sin(_Time.y * 8.0 + i * 23.365) * 0.5 + 0.5) * 0.025) + 0.5;
result = smoothMax(result, sp, 14);
}
return result;
}
//}
This function fetches the scalar value at the specified coordinates from the volume data. This time, instead of storing huge 3D volume data, a simple distance-function-based calculation produces the scalar values.
====[column] About distance function
The 3D shape drawn with the marching cubes method this time is defined using a @<b>{distance function}.
Roughly speaking, a distance function here is a function that satisfies the conditions of a distance.
For example, the distance function of a sphere is:
//list[kaiware_distance_function][Sphere distance function][cs]{
inline float sphere(float3 pos, float radius)
{
return length(pos) - radius;
}
//}
The sample coordinates go into pos, with the sphere's center taken to be the origin (0,0,0). radius is the sphere's radius.
length(pos) computes the distance from the origin to pos, and subtracting radius from it means that, naturally, the result is negative whenever that distance is less than the radius.
In other words, if you pass in a coordinate pos and get back a negative value, you can conclude that the coordinate is inside the sphere.
The advantage of distance functions is that a shape can be expressed with a formula of just a few lines, keeping the program small. You can find many other distance functions on Inigo Quilez's site.
@<href>{http://iquilezles.org/www/articles/distfunctions/distfunctions.htm,http://iquilezles.org/www/articles/distfunctions/distfunctions.htm}
====[/column]
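The sphere distance function from the column above behaves like this. This is a direct Python transcription, handy for experimenting outside the shader.

```python
from math import sqrt

def sphere(pos, radius):
    # Distance from pos to the surface of a sphere centered at the origin:
    # negative inside, zero on the surface, positive outside
    return sqrt(pos[0] ** 2 + pos[1] ** 2 + pos[2] ** 2) - radius

print(sphere((0.0, 0.0, 0.0), 1.0))  # -1.0 : the center is inside
print(sphere((1.0, 0.0, 0.0), 1.0))  #  0.0 : exactly on the surface
print(sphere((2.0, 0.0, 0.0), 1.0))  #  1.0 : outside
```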
//list[kaiware_sphere][Composite of distance functions of three spheres][cs]{
// Distance function of three spheres
for (int i = 0; i < 3; i++) {
float sp = -sphere(
pos - float3(0.5, 0.25 + 0.25 * i, 0.5),
0.1 + (sin(_Time.y * 8.0 + i * 23.365) * 0.5 + 0.5) * 0.025) + 0.5;
result = smoothMax(result, sp, 14);
}
//}
This time, the eight corners (vertices) of each grid cell are used as pos, and the distance from the sphere's center is treated directly as the density of the volume data.
As described later, the sign is inverted because polygons are created where the value is at or above the 0.5 threshold. The coordinates are also shifted slightly per sphere so that distances to three different spheres are obtained.
//list[kaiware_shader_smoothmax][The smoothMax function][cs]{
float smoothMax(float d1, float d2, float k)
{
float h = exp(k * d1) + exp(k * d2);
return log(h) / k;
}
//}
smoothMax is a function that blends the results of distance functions smoothly. It can be used to fuse the three spheres together like a metaball.
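A Python transcription of smoothMax shows what the log-sum-exp blend does: with well-separated inputs it returns essentially the larger value, and with equal inputs it overshoots by log(2)/k, which is what rounds off the seams between the spheres.

```python
from math import exp, log

def smooth_max(d1, d2, k):
    # Smooth (log-sum-exp) approximation of max(d1, d2);
    # larger k gives a result closer to the hard max
    return log(exp(k * d1) + exp(k * d2)) / k

# With well-separated inputs the result approaches the larger value
print(round(smooth_max(0.0, 1.0, 14), 4))  # 1.0
# With equal inputs the blend exceeds the max by log(2)/k
print(round(smooth_max(0.5, 0.5, 14), 4))  # 0.5495
```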
//list[kaiware_shader_flagindex][Threshold check][cs]{
// Check if the values of the eight corners of the grid exceed the threshold
for (i = 0; i < 8; i++) {
if (cubeValue[i] <= _Threashold) {
flagIndex |= (1 << i);
}
}
int edgeFlags = cubeEdgeFlags[flagIndex];
// Draw nothing if empty or completely filled
if ((edgeFlags == 0) || (edgeFlags == 255)) {
return;
}
//}
When the scalar value at a corner of the grid cell is at or below the threshold, the corresponding bit of flagIndex is set. That flagIndex is then used as an index into the cubeEdgeFlags array, and the edge information needed to generate polygons is stored in edgeFlags.
If all corners are below the threshold, or all are above it, the cell is entirely outside or entirely inside the surface, so no polygons are generated.
//list[kaiware_shader_offset][Calculation of polygon vertex coordinates][cs]{
float offset = 0.5;
float3 vertex;
for (i = 0; i < 12; i++) {
if ((edgeFlags & (1 << i)) != 0) {
// Get threshold offset between corners
offset = getOffset(
cubeValue[edgeConnection[i].x],
cubeValue[edgeConnection[i].y],
_Threashold
);
// Complement the coordinates of the vertices based on the offset
vertex = vertexOffset[edgeConnection[i].x]
+ offset * edgeDirection[i];
edgeVertices[i].x = pos.x + vertex.x * _Scale;
edgeVertices[i].y = pos.y + vertex.y * _Scale;
edgeVertices[i].z = pos.z + vertex.z * _Scale;
// Normal calculation (in order to resample, the vertex coordinates before scaling are required)
edgeNormals[i] = getNormal(
defpos.x + vertex.x,
defpos.y + vertex.y,
defpos.z + vertex.z
);
}
}
//}
This is where the vertex coordinates of the polygons are computed. By examining the bits of edgeFlags, vertices are placed on the intersected edges of the grid cell.
getOffset takes the scalar values at the two corners of an edge plus the threshold, and returns the fraction (offset) of the way from the first corner to the second at which the threshold is crossed. Shifting each vertex from the first corner toward the second by this offset is what makes the final surface smooth.
getNormal computes the normal by re-sampling the field and taking its gradient.
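getOffset and getNormal can be sketched in Python. get_offset follows Paul Bourke's reference implementation; the stand-in sample() field and the epsilon in get_normal are assumptions made for illustration.

```python
def get_offset(v1, v2, threshold):
    # Fraction along the edge from corner 1 to corner 2 at which the
    # field crosses the threshold (linear interpolation)
    delta = v2 - v1
    if delta == 0.0:
        return 0.5  # degenerate edge: fall back to the midpoint
    return (threshold - v1) / delta

def sample(x, y, z):
    # Stand-in scalar field: a sphere of radius 1 in density form
    # (higher values toward the center), analogous to the shader's Sample()
    return 1.0 - (x * x + y * y + z * z) ** 0.5

def get_normal(x, y, z, eps=1e-3):
    # Central-difference gradient of the field, negated so the normal
    # points out of the surface (toward decreasing density)
    dx = sample(x + eps, y, z) - sample(x - eps, y, z)
    dy = sample(x, y + eps, z) - sample(x, y - eps, z)
    dz = sample(x, y, z + eps) - sample(x, y, z - eps)
    length = (dx * dx + dy * dy + dz * dz) ** 0.5
    return (-dx / length, -dy / length, -dz / length)

print(get_offset(0.0, 1.0, 0.5))   # 0.5 : threshold halfway along the edge
print(get_offset(0.0, 1.0, 0.25))  # 0.25 : surface closer to corner 1

# On the +x side of the sphere the normal points along +x
nx, ny, nz = get_normal(1.0, 0.0, 0.0)
print(round(nx, 3), round(ny, 3), round(nz, 3))
```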
//list[kaiware_shader_make_polygon][Make polygons by connecting vertices][cs]{
// Create polygon by connecting vertices
int vindex = 0;
int findex = 0;
// Up to 5 triangles
for (i = 0; i < 5; i++) {
findex = flagIndex * 16 + 3 * i;
if (triangleConnectionTable[findex] < 0)
break;
// Make a triangle
for (j = 0; j < 3; j++) {
vindex = triangleConnectionTable[findex + j];
// Multiply the Transform matrix to convert to world coordinates
float4 ppos = mul(_Matrix, float4(edgeVertices[vindex], 1));
o.pos = UnityObjectToClipPos(ppos);
float3 norm = UnityObjectToWorldNormal(
normalize(edgeNormals[vindex])
);
o.normal = normalize(mul(_Matrix, float4(norm,0)));
outStream.Append(o); // Add vertices to strip
}
outStream.RestartStrip(); // Break and start the next primitive strip
}
//}
This is where the polygons are assembled from the vertex coordinates obtained above. The triangleConnectionTable array holds the indices of the vertices to connect. Each vertex is transformed to world coordinates by multiplying by the Transform matrix, then converted to clip coordinates with UnityObjectToClipPos().
The normals are likewise converted to world space with UnityObjectToWorldNormal(). These vertices and normals are used for lighting in the fragment shader that follows.
TriangleStream.Append() and RestartStrip() are functions specific to geometry shaders.
Append() adds vertex data to the current strip, and RestartStrip() ends it and begins a new primitive strip. Since this is a TriangleStream, think of it as appending up to three vertices per strip.
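The table-walking loop above can be sketched in Python. The 16-entries-per-case layout matches findex = flagIndex * 16 + 3 * i from the listing; the example row below is hypothetical, built just to exercise the walk.

```python
def triangles_for_case(triangle_connection_table, flag_index):
    # Walk one 16-entry row of the triangle table; unused slots hold -1
    tris = []
    for i in range(5):  # at most 5 triangles per cell
        findex = flag_index * 16 + 3 * i
        if triangle_connection_table[findex] < 0:
            break  # -1 terminates the row early
        tris.append(tuple(triangle_connection_table[findex:findex + 3]))
    return tris

# Hypothetical table: case 0 is empty, case 1 has one triangle on
# edges (0, 8, 3) followed by -1 padding
table = [-1] * 16 + [0, 8, 3] + [-1] * 13
print(triangles_for_case(table, 0))  # []
print(triangles_for_case(table, 1))  # [(0, 8, 3)]
```

Each returned tuple names the three cell edges whose interpolated vertices form one triangle; the geometry shader Append()s those three vertices and then restarts the strip.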
=== Entity fragment shader
To pick up Unity's lighting such as GI (global illumination), the lighting portion of the code generated from a Surface Shader is ported here.
//list[kaiware_shader_fragment_define][Fragment shader definition][cs]{
// Entity fragment shader
void frag_light(g2f_light IN,
out half4 outDiffuse : SV_Target0,
out half4 outSpecSmoothness : SV_Target1,
out half4 outNormal : SV_Target2,
out half4 outEmission : SV_Target3)
//}
There are four outputs (SV_Target) written to the G-Buffer.
//list[kaiware_shader_surface][Initialize SurfaceOutputStandard structure][cs]{
#ifdef UNITY_COMPILER_HLSL
SurfaceOutputStandard o = (SurfaceOutputStandard)0;
#else
SurfaceOutputStandard o;
#endif
o.Albedo = _DiffuseColor.rgb;
o.Emission = _EmissionColor * _EmissionIntensity;
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = 1.0;
o.Occlusion = 1.0;
o.Normal = normal;
//}
Set parameters such as color and gloss in the SurfaceOutputStandard structure that will be used later.
//list[kaiware_shader_light][GI related processing][cs]{
// Setup lighting environment
UnityGI gi;
UNITY_INITIALIZE_OUTPUT(UnityGI, gi);
gi.indirect.diffuse = 0;
gi.indirect.specular = 0;
gi.light.color = 0;
gi.light.dir = half3(0, 1, 0);
gi.light.ndotl = LambertTerm(o.Normal, gi.light.dir);
// Call GI (lightmaps/SH/reflections) lighting function
UnityGIInput giInput;
UNITY_INITIALIZE_OUTPUT(UnityGIInput, giInput);
giInput.light = gi.light;
giInput.worldPos = worldPos;
giInput.worldViewDir = worldViewDir;
giInput.atten = 1.0;
giInput.ambient = IN.sh;
giInput.probeHDR[0] = unity_SpecCube0_HDR;
giInput.probeHDR[1] = unity_SpecCube1_HDR;
#if UNITY_SPECCUBE_BLENDING || UNITY_SPECCUBE_BOX_PROJECTION
// .w holds lerp value for blending
giInput.boxMin[0] = unity_SpecCube0_BoxMin;
#endif
#if UNITY_SPECCUBE_BOX_PROJECTION
giInput.boxMax[0] = unity_SpecCube0_BoxMax;
giInput.probePosition[0] = unity_SpecCube0_ProbePosition;
giInput.boxMax[1] = unity_SpecCube1_BoxMax;
giInput.boxMin[1] = unity_SpecCube1_BoxMin;
giInput.probePosition[1] = unity_SpecCube1_ProbePosition;
#endif
LightingStandard_GI(o, giInput, gi);
//}
This is the GI-related processing. Initial values are set in UnityGIInput, and the GI result computed by LightingStandard_GI() is written into UnityGI.
//list[kaiware_shader_gi][Calculation of light reflection][cs]{
// call lighting function to output g-buffer
outEmission = LightingStandard_Deferred(o, worldViewDir, gi,
outDiffuse,
outSpecSmoothness,
outNormal);
outDiffuse.a = 1.0;
#ifndef UNITY_HDR_ON
outEmission.rgb = exp2(-outEmission.rgb);
#endif
//}
The various computed values are passed to LightingStandard_Deferred(), which computes the reflected light and writes it to the emission buffer. When HDR is off, the value is compressed with exp2 before being written.
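The non-HDR exp2 compression is invertible; a Python sketch of the round trip (the decode side is an assumption about how Unity reads the buffer back, shown only to illustrate that exp2(-x) loses no information):

```python
from math import log2

def encode_emission(x):
    # Non-HDR path: emission stored as exp2(-x), mapping [0, inf) into (0, 1]
    return 2.0 ** (-x)

def decode_emission(e):
    # Inverse mapping (assumed decode, for illustration)
    return -log2(e)

print(encode_emission(0.0))                    # 1.0
print(decode_emission(encode_emission(1.5)))   # 1.5
```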
=== Shadow geometry shader
It is almost the same as the entity geometry shader, so only the differences are explained.
//list[kaiware_shader_geometry_shadow][Shadow geometry shader][cs]{
int vindex = 0;
int findex = 0;
for (i = 0; i < 5; i++) {
findex = flagIndex * 16 + 3 * i;
if (triangleConnectionTable[findex] < 0)
break;
for (j = 0; j < 3; j++) {
vindex = triangleConnectionTable[findex + j];
float4 ppos = mul(_Matrix, float4(edgeVertices[vindex], 1));
float3 norm;
norm = UnityObjectToWorldNormal(normalize(edgeNormals[vindex]));
float4 lpos1 = mul(unity_WorldToObject, ppos);
o.pos = UnityClipSpaceShadowCasterPos(lpos1,
normalize(
mul(_Matrix,
float4(norm, 0)
)
)
);
o.pos = UnityApplyLinearShadowBias(o.pos);
o.hpos = o.pos;
outStream.Append(o);
}
outStream.RestartStrip();
}
//}
UnityClipSpaceShadowCasterPos() and UnityApplyLinearShadowBias() convert the vertex coordinates into shadow-projection coordinates.
=== Shadow fragment shader
//list[kaiware_shader_fragment_shadow][Shadow fragment shader][cs]{
// Shadow fragment shader
fixed4 frag_shadow(g2f_shadow i) : SV_Target
{
return i.hpos.z / i.hpos.w;
}
//}
It is almost too short to need explanation. In fact, even return 0; renders the shadow correctly, so Unity presumably handles the rest well internally.
== Running it
When you run it, you should see an image like this.
//image[marching_cubes_006][Swell][scale=0.25]{
//}
Also, various shapes can be created by combining distance functions.
//image[marching_cubes_007][Kaiwarei][scale=0.25]{
//}
== Summary
The distance function was used here for simplicity, but the marching cubes method should also be applicable elsewhere, for example writing volume data into a 3D texture, or visualizing various kinds of 3D data.@<br>{}
For games, you could probably build something like ASTRONEER@<fn>{astroneer}, where players dig freely into the terrain. @<br>{}
By all means, explore the many kinds of expression possible with the marching cubes method!
== References
* Polygonising a scalar field - http://paulbourke.net/geometry/polygonise/
* modeling with distance functions -
http://iquilezles.org/www/articles/distfunctions/distfunctions.htm
//footnote[astroneer][ASTRONEER http://store.steampowered.com/app/361420/ASTRONEER/?l=japanese]