OpenGL-Learning-3

Some advanced OpenGL features, i.e. the core techniques and workflows, mainly covering depth testing, stencil testing, and related topics.

Advanced OpenGL

depth testing

The color buffer is an OpenGL object.
Likewise, the depth buffer is also an object, with the same width and height as the color buffer, and it is created automatically by the system. After the fragment shader has run, the depth test is performed in screen space (it actually happens after the stencil test). The screen-space coordinates are determined directly by glViewport and can be read in the fragment shader through the built-in gl_FragCoord: its x and y components are the coordinates within the screen, and its z component is the fragment's actual depth value, which is the value that gets compared against the depth buffer.

  • Modern GPUs usually also allow the depth test to run before the fragment shader (early depth testing): if a fragment is known to be occluded and therefore invisible, it is discarded right away.
  • Don't write to the fragment's depth value yourself, otherwise early depth testing becomes impossible.
  1. The default comparison function is GL_LESS, meaning the fragment with the smaller depth value is kept; a smaller depth value simply means the fragment is closer to the viewer.
  2. Switching to GL_ALWAYS is effectively the same as not enabling the depth test at all (see the sketch after this list).
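
A minimal sketch of the typical setup, using only standard GL calls:

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);    // default: keep the fragment with the smaller (closer) depth value
// glDepthFunc(GL_ALWAYS); // would effectively disable the test
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the depth buffer every frame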

    What exactly is the depth value

    The values in the depth buffer range from 0 to 1. An object's z value (in view space, i.e. its distance from the observer) is compared against them. The two obviously cannot be compared directly, so a formula converts z into the 0-1 range:
    F_depth = (z - near) / (far - near)
    This is only a simplified illustration; the real transform is non-linear (roughly proportional to 1/z), so depth precision is much higher near the camera than far away.
    Question:
    void main()
    {
        FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
    }
    Here the last component of FragColor, the 1.0, is the A of RGBA, i.e. the alpha (opacity); it has nothing to do with depth testing.
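
    Visualizing gl_FragCoord.z directly makes almost the whole scene look white, because the stored depth is non-linear. A sketch of converting it back to a linear value for display (the near/far values are assumptions and must match the projection matrix):

    #version 330 core
    out vec4 FragColor;

    float near = 0.1;   // assumed near plane
    float far  = 100.0; // assumed far plane

    // convert the non-linear depth value back to a linear view-space depth
    float LinearizeDepth(float depth)
    {
        float z = depth * 2.0 - 1.0; // back to NDC
        return (2.0 * near * far) / (far + near - z * (far - near));
    }

    void main()
    {
        float depth = LinearizeDepth(gl_FragCoord.z) / far; // divide by far to map into [0, 1]
        FragColor = vec4(vec3(depth), 1.0);
    }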

    Z-fighting

    When two surfaces are too close together, the converted depth values don't have enough precision to tell which one is in front.
    Up close this only becomes noticeable when two surfaces are extremely near each other, but for objects far from the camera, z-fighting between two surfaces is much more likely.
    How can we avoid it?
  1. Manually offset the surfaces so they never overlap exactly.
  2. Push the near plane as far out as possible (see the sketch after this list).
  3. Use a higher-precision depth buffer, which obviously costs more memory.
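
    For example, a sketch of option 2 (the field of view and the SCR_WIDTH/SCR_HEIGHT names are taken from the later code; the point is the larger near value):

    // a near plane of 1.0 instead of 0.1 spreads depth precision more evenly
    // and reduces z-fighting on distant geometry
    glm::mat4 projection = glm::perspective(glm::radians(45.0f),
        (float)SCR_WIDTH / (float)SCR_HEIGHT, 1.0f, 100.0f);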

    Stencil testing

    The stencil test runs first, then the depth test;
    after the fragment shader has processed a fragment, the stencil test can discard some fragments; the stencil buffer is allowed to be modified directly while rendering;
    each stencil buffer value is 8 bits per fragment, so 256 different values can be set.
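
    A minimal sketch of the setup (standard GL calls):

    glEnable(GL_STENCIL_TEST);
    // clear the stencil buffer together with the other buffers each frame
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);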

    stencil functions

  • glStencilMask(0xFF); // each bit is written to the stencil buffer as is
    glStencilMask(0x00); // each bit ends up as 0 in the stencil buffer (disabling writes)

  • glStencilFunc(GLenum func, GLint ref, GLuint mask)
    The parameters are:
    func: the comparison to apply between the stored stencil buffer value and ref (the stored value is the fragment's current stencil value);
    ref: the reference value to compare against;
    mask: a bitmask that is ANDed with both the ref value and the stored stencil value before the comparison.

  • glStencilOp(GLenum sfail, GLenum dpfail, GLenum dppass)
    The parameters are the actions to take when:
    sfail: the stencil test fails;
    dpfail: the stencil test passes but the depth test fails;
    dppass: both tests pass;
    By default all three are GL_KEEP, which leaves the stencil value unchanged (see the sketch after this list).
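
    For example, a typical call (a sketch; the 1 and 0xFF are the values used in the outlining code below):

    // only fragments whose stored stencil value equals 1 pass the stencil test;
    // the 0xFF mask means every bit takes part in the comparison
    glStencilFunc(GL_EQUAL, 1, 0xFF);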

    Outline (object outlining) shader

    Whoa, so this is how the outline effect is implemented. I've been using this kind of shader for ages but never studied it carefully. The idea is quite clever:
  1. Set the stencil func to GL_ALWAYS before drawing the (to be outlined) objects, updating the stencil buffer with 1s wherever the objects’ fragments are rendered.
  2. Render the objects.
  3. Disable stencil writing and depth testing.
  4. Scale each of the objects by a small amount.
  5. Use a different fragment shader that outputs a single (border) color.
  6. Draw the objects again, but only if their fragments' stencil values are not equal to 1.
  7. Enable stencil writing and depth testing again.
glEnable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST); // the stencil test must be enabled for this to work
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // replace the stored value when both tests pass

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glStencilMask(0x00); // make sure we don't update the stencil buffer while drawing the floor
normalShader.use();
DrawFloor();

glStencilFunc(GL_ALWAYS, 1, 0xFF); // all fragments should update the stencil buffer
//change ref value to 1
glStencilMask(0xFF); // enable writing to the stencil buffer
DrawTwoContainers();

glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilMask(0x00); // disable writing to the stencil buffer
glDisable(GL_DEPTH_TEST);
shaderSingleColor.use();
DrawTwoScaledUpContainers();
glStencilMask(0xFF);
glEnable(GL_DEPTH_TEST);

See figure learn3-001.

Blending

How to make sense of blending? Think of stained glass and you get a rough idea of the goal: fully transparent and semi-transparent surfaces. The key is the A that follows RGB in the color data.

Partially transparent regions: Discarding Fragments

Neither fully transparent nor fully opaque.
For example, for a grass texture that has an alpha channel (figure 001), OpenGL first needs to load the image as RGBA, and the shader has to change accordingly:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D texture1;
void main()
{
    vec4 texColor = texture(texture1, TexCoords);
    if(texColor.a < 0.1)
        discard;
    FragColor = texColor;
}

Note that when rendering transparent textures you may see fringes along the edges. Setting the wrapping mode to GL_CLAMP_TO_EDGE may help.
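
For example, a sketch of the relevant texture parameters, set when creating the grass texture:

// clamp texture coordinates so the transparent border is not interpolated
// with a repeated texel from the opposite side of the image
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);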

The blend equation

The blend equation is shown in figure 002: C_result = C_source * F_source + C_destination * F_destination.
Destination refers to the color already stored in the color buffer (typically the farther object that was rendered earlier), and source refers to the color of the fragment currently being rendered (e.g. the nearer object's texture color); F_source and F_destination are the factors set by glBlendFunc.
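
A quick worked example with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA): say the source fragment is red with alpha 0.6 and the color buffer already holds opaque green. Then

C_source = (1.0, 0.0, 0.0, 0.6), F_source = 0.6
C_destination = (0.0, 1.0, 0.0, 1.0), F_destination = 1 - 0.6 = 0.4
C_result = (1.0, 0.0, 0.0, 0.6) * 0.6 + (0.0, 1.0, 0.0, 1.0) * 0.4 = (0.6, 0.4, 0.0, 0.76)

so the output is a reddish mix weighted by the source alpha.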

Rendering semi-transparent objects

First:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D texture1;
void main()
{
    FragColor = texture(texture1, TexCoords);
}

The result looks like figure 003.
This is caused by depth testing. Fully transparent fragments (which are discarded) are no problem, but semi-transparent ones introduce a rendering-order problem. This comes back to the question above: which color is the foreground and which is the background, and how does OpenGL actually handle it?
After the fragment shader has run and all the tests have passed, this blend equation is let loose on the fragment’s color output and with whatever is currently in the color buffer (previous fragment color stored before the current fragment).
That sentence makes it clear: in principle the farther geometry is rendered first, so its color is already in the color buffer; a nearer transparent object can then use its alpha value to blend with the color previously stored in the buffer.
So the rendering order has to be taken care of.

Order

  1. Draw all opaque objects first.
  2. Sort all the transparent objects.
  3. Render the transparent/semi-transparent objects in order from farthest to nearest (relative to the camera).

Throughout this, the key to handling transparency is (1) the texture's alpha values and (2) the rendering order; this simple approach has nothing special to do with depth testing, which stays enabled with its defaults. A sketch of the sorting step follows.
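
A common sketch of steps 2 and 3, assuming a hypothetical 'windows' vector holding the positions of the transparent objects and the camera object used in the later code (camera.Position is the camera's position member):

#include <map>
// sort by distance to the camera; std::map keeps its keys in ascending order
std::map<float, glm::vec3> sorted;
for (const glm::vec3& pos : windows)   // 'windows' is a hypothetical list of positions
{
    float distance = glm::length(camera.Position - pos);
    sorted[distance] = pos;
}
// draw from farthest to nearest by iterating the map in reverse
for (auto it = sorted.rbegin(); it != sorted.rend(); ++it)
{
    glm::mat4 model = glm::mat4(1.0f);
    model = glm::translate(model, it->second);
    shader.setMat4("model", model);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}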

Finally, rendering transparent objects is still a messy business; there are other techniques such as order independent transparency.

Face culling

Once enabled, OpenGL uses the winding order of the triangles to discard faces that point away from the viewer, so their fragments are never shaded. The main saving is fragment-shader work (for closed meshes potentially more than half of it); it does not reduce the number of draw calls, and it is not primarily about GPU or system memory.
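
A minimal sketch of enabling it (the cull mode and winding shown are the GL defaults):

glEnable(GL_CULL_FACE);  // enable face culling
glCullFace(GL_BACK);     // cull back-facing triangles (the default)
glFrontFace(GL_CCW);     // counter-clockwise winding counts as front-facing (the default)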

frame buffer

A note on memory

OpenGL is fairly vague about this area: the driver maintains its own mechanism for shuffling data between system memory and GPU memory, and it is not especially transparent; understanding it will probably come gradually. Here are two threads discussing OpenGL memory:
https://stackoverflow.com/questions/15138483/where-and-what-is-opengl-memory-used-by-vbos-etc
https://stackoverflow.com/questions/16854825/confusion-regarding-memory-management-in-opengl

Creating a framebuffer

The combination of a color buffer, depth buffer and stencil buffer is called a framebuffer, and it lives in memory (which memory exactly we'll set aside for now). The default framebuffer is created and configured for you when you create the GLFW window; creating our own framebuffer is mainly useful for post-processing.
The usual pattern:
create - bind - use - unbind
unsigned int fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
The framebuffer target can be read, draw (write), or both; normally the default (both) is fine. Before the framebuffer can be used, a few things still have to be true:

  1. We have to attach at least one buffer (color, depth or stencil buffer).
  2. There should be at least one color attachment.
  3. All attachments should be complete as well (reserved memory).
  4. Each buffer should have the same number of samples.
    Because the buffer we created is not the default one, rendering commands will not affect the visible output; this is called off-screen rendering. To render to the window again, bind the default framebuffer (0) to make it active:
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    And when finished, delete it:
    glDeleteFramebuffers(1, &fbo);
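
To verify the completeness requirements listed above, a small sketch of the check (the same call appears in the full example further down):

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    cout << "Framebuffer is not complete!" << endl;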

Now, to make the framebuffer actually useful, we need to add at least one attachment. An attachment is a memory location that can act as a buffer for the framebuffer; think of it as an image.
There are two options: textures or renderbuffer objects.

texture attachments

When attaching a texture to a framebuffer, all rendering commands will write to the texture as if it was a normal color/depth or stencil buffer. The advantage of using textures is that the result of all rendering operations will be stored as a texture image that we can then easily use in our shaders.

  1. create texture for framebuffer
    unsigned int texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 800, 600, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

The texture is the same size as the screen. This allocates the memory but does not fill it; the texture gets filled when we render to the framebuffer. Note that if you want to render your whole screen to a smaller or larger size, you need to call glViewport again (with the new dimensions) before rendering to the framebuffer.

  2. Attach the texture to the framebuffer
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
    The attachment type (with its matching texture format) can also be:
    GL_DEPTH_ATTACHMENT - GL_DEPTH_COMPONENT
    GL_STENCIL_ATTACHMENT - GL_STENCIL_INDEX
    or combined: GL_DEPTH_STENCIL_ATTACHMENT

Renderbuffer object attachments

Renderbuffer objects store their data in OpenGL's native render format, so writing to them and copying their data to other buffers is fast; operations such as swapping buffers are therefore well suited to renderbuffer objects.

  1. Create
    unsigned int rbo;
    glGenRenderbuffers(1, &rbo);
  2. Bind
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
  3. Allocate storage
    RBOs are write-only, so they are mostly used for depth and stencil attachments: we usually don't need to sample that data, we only need it for depth and stencil testing. Create a combined depth-and-stencil RBO:
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 800, 600);
  4. Attach the RBO to the framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo);

When should you use an RBO and when a texture? The simple rule: if you never need to read or sample from a particular buffer, use an RBO as the attachment; if you do need to sample from it (for example color or depth values), use a texture attachment instead.

Rendering to a texture - using the framebuffer

The rough workflow:
create a framebuffer, render the whole scene into a color texture attached to it, then draw that texture onto a simple quad. The rendering process itself is the same as usual.

// first pass
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // we're not using the stencil buffer now
glEnable(GL_DEPTH_TEST);
DrawScene();

// second pass
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to default
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

screenShader.use();
glBindVertexArray(quadVAO);
glDisable(GL_DEPTH_TEST);
glBindTexture(GL_TEXTURE_2D, textureColorbuffer);
glDrawArrays(GL_TRIANGLES, 0, 6);

The complete workflow and code, with commentary

/// Setup: shaders and vertex data
Shader shader("framebuffers.vs", "framebuffers.fs");
Shader screenShader("framebuffers_screen.vs","framebuffers_screen.fs");

float cubeVertices[] = {
// positions // texture Coords
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f,
0.5f, -0.5f, -0.5f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f,//...
};
float planeVertices[] = {
// positions // texture Coords
5.0f, -0.5f, 5.0f, 2.0f, 0.0f,
-5.0f, -0.5f, 5.0f, 0.0f, 0.0f,
-5.0f, -0.5f, -5.0f, 0.0f, 2.0f,

5.0f, -0.5f, 5.0f, 2.0f, 0.0f,
-5.0f, -0.5f, -5.0f, 0.0f, 2.0f,
5.0f, -0.5f, -5.0f, 2.0f, 2.0f
};
float quadVertices[] = { // vertex attributes for a quad that fills the entire screen in Normalized Device Coordinates.
// positions // texCoords
-1.0f, 1.0f, 0.0f, 1.0f,
-1.0f, -1.0f, 0.0f, 0.0f,
1.0f, -1.0f, 1.0f, 0.0f,

-1.0f, 1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 1.0f, 1.0f
};

///VAO VBOs
//cube VAO
unsigned int cubeVAO, cubeVBO;
glGenVertexArrays(1, &cubeVAO);
glGenBuffers(1, &cubeVBO);
glBindVertexArray(cubeVAO);
glBindBuffer(GL_ARRAY_BUFFER, cubeVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertices), &cubeVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
// plane VAO
unsigned int planeVAO, planeVBO;
glGenVertexArrays(1, &planeVAO);
glGenBuffers(1, &planeVBO);
glBindVertexArray(planeVAO);
glBindBuffer(GL_ARRAY_BUFFER, planeVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(planeVertices), &planeVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
// screen quad VAO
unsigned int quadVAO, quadVBO;
glGenVertexArrays(1, &quadVAO);
glGenBuffers(1, &quadVBO);
glBindVertexArray(quadVAO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), &quadVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));

/// Load textures
unsigned int cubeTexture = loadTexture(getPath("marble.jpg").c_str());
unsigned int floorTexture =loadTexture(getPath("metal.png").c_str());

/// Configure the shaders
shader.use();
shader.setInt("texture1", 0);
screenShader.use();
screenShader.setInt("screenTexture", 0);

/// Create the framebuffer object
//framebuffer configuration
//-------------------------
unsigned int framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
/// Create a texture image to serve as the framebuffer's color attachment
unsigned int textureColorbuffer;
glGenTextures(1, &textureColorbuffer);
glBindTexture(GL_TEXTURE_2D, textureColorbuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureColorbuffer, 0);
/// For depth and stencil an RBO is enough, attached to the framebuffer, since we never need to read it back
unsigned int rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCR_WIDTH, SCR_HEIGHT); // use a single renderbuffer object for both a depth AND stencil buffer.
/// Attach the depth/stencil RBO to the framebuffer before completing it
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo); // now actually attach it
// now that we actually created the framebuffer and added all attachments we want to check if it is actually complete now
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
cout << "Framebuffer is not complete!" << endl;
/// (memory is now allocated) unbind
glBindFramebuffer(GL_FRAMEBUFFER, 0);
///-----------------------------------------renderloop
/// Draw the scene into this framebuffer

/// 1. Bind our framebuffer as the active target and draw as usual
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glEnable(GL_DEPTH_TEST); // enable depth testing (is disabled for rendering screen-space quad)
// make sure we clear the framebuffer's content
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

shader.use();
glm::mat4 model = glm::mat4(1.0f);
glm::mat4 view = camera.GetViewMatrix();
glm::mat4 projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
shader.setMat4("view", view);
shader.setMat4("projection", projection);
// cubes
glBindVertexArray(cubeVAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, cubeTexture);
model = glm::translate(model, glm::vec3(-1.0f, 0.0f, -1.0f));
shader.setMat4("model", model);
glDrawArrays(GL_TRIANGLES, 0, 36);
model = glm::mat4(1.0f);
model = glm::translate(model, glm::vec3(2.0f, 0.0f, 0.0f));
shader.setMat4("model", model);
glDrawArrays(GL_TRIANGLES, 0, 36);
// floor
glBindVertexArray(planeVAO);
glBindTexture(GL_TEXTURE_2D, floorTexture);
shader.setMat4("model", glm::mat4(1.0f));
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);

/// 2. Bind back to the default framebuffer
//now bind back to default framebuffer and draw a quad plane with the attached framebuffer color texture
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDisable(GL_DEPTH_TEST); // disable depth test so screen-space quad isn't discarded due to depth test.
// clear all relevant buffers
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // set clear color to white (not really necessery actually, since we won't be able to see behind the quad anyways)
glClear(GL_COLOR_BUFFER_BIT);

/// 3. Draw a quad with the screen shader, using the framebuffer's color buffer as its texture
screenShader.use();
glBindVertexArray(quadVAO);
glBindTexture(GL_TEXTURE_2D, textureColorbuffer); // use the color attachment texture as the texture of the quad plane
glDrawArrays(GL_TRIANGLES, 0, 6);
/// Finally
// glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
glfwSwapBuffers(window);
glfwPollEvents();
///-----------------------------------------renderloop end
glDeleteVertexArrays(1, &cubeVAO);
glDeleteVertexArrays(1, &planeVAO);
glDeleteVertexArrays(1, &quadVAO);
glDeleteBuffers(1, &cubeVBO);
glDeleteBuffers(1, &planeVBO);
glDeleteBuffers(1, &quadVBO);

glfwTerminate();

/// The four shaders
// framebuffers.vs
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoords;
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main(){
TexCoords = aTexCoords;
gl_Position = projection * view * model * vec4(aPos, 1.0);}

// framebuffers.fs
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D texture1;
void main(){
FragColor = texture(texture1, TexCoords);}

// framebuffers_screen.vs
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoords;
out vec2 TexCoords;
void main(){
TexCoords = aTexCoords;
gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0); }

// framebuffers_screen.fs
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D screenTexture;
void main(){
vec3 col = texture(screenTexture, TexCoords).rgb;
FragColor = vec4(col, 1.0);}

post-processing

At this point it gets easy: since we now have the value of every pixel of the rendered image, we can simply apply image-processing techniques. There are far too many possible algorithms to list one by one: all sorts of filtering and edge-detection algorithms, and popular image-processing methods could even be combined for recognition tasks. Note that the processing mainly goes into the screen fragment shader (framebuffers_screen.fs).
A few common algorithms and effects:

  • Inversion
  • Grayscale / thresholding
  • Kernel effects, e.g. sharpening
  • Blur
    For example:
    // blur kernel
    float kernel[9] = float[](
        1.0 / 16, 2.0 / 16, 1.0 / 16,
        2.0 / 16, 4.0 / 16, 2.0 / 16,
        1.0 / 16, 2.0 / 16, 1.0 / 16
    );
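
    A sketch of how such a kernel can be applied in the screen fragment shader (based on framebuffers_screen.fs above; the 1.0/300.0 offset is just an arbitrary sampling distance):

    #version 330 core
    out vec4 FragColor;
    in vec2 TexCoords;
    uniform sampler2D screenTexture;

    const float offset = 1.0 / 300.0; // distance to the neighbouring samples (arbitrary)

    void main()
    {
        // the 8 neighbours plus the center texel
        vec2 offsets[9] = vec2[](
            vec2(-offset,  offset), vec2(0.0,  offset), vec2(offset,  offset),
            vec2(-offset,  0.0),    vec2(0.0,  0.0),    vec2(offset,  0.0),
            vec2(-offset, -offset), vec2(0.0, -offset), vec2(offset, -offset)
        );
        float kernel[9] = float[](
            1.0 / 16, 2.0 / 16, 1.0 / 16,
            2.0 / 16, 4.0 / 16, 2.0 / 16,
            1.0 / 16, 2.0 / 16, 1.0 / 16
        );
        vec3 col = vec3(0.0);
        for(int i = 0; i < 9; i++)
            col += kernel[i] * texture(screenTexture, TexCoords.st + offsets[i]).rgb;
        FragColor = vec4(col, 1.0);
    }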