OpenGL-Learning-1

This post covers some basic OpenGL topics.

Getting Started

Camera

movement speed

First, a quick note: rendering really just produces one image after another, but the interval between two rendered frames differs from machine to machine depending on the CPU's or GPU's computing power. So when interaction is handled per frame, for example per-frame movement in Unity, or keyboard and mouse polling inside the OpenGL render loop, the same code would run at different speeds on different hardware.
The solution is to use each frame's delta time to scale the interaction speed:

Camera camera(glm::vec3(0.0f, 0.0f, 3.0f));

// per-frame timing, updated at the top of the render loop
float currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;

void processInput(GLFWwindow *window)
{
    float cameraSpeed = 2.5f * deltaTime; // speed scaled by the frame's delta time
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        cameraPos += cameraSpeed * cameraFront;
}

How the interaction is wired up

In GLFW you simply register callback functions:

void processInput(GLFWwindow *window)
{
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
        glfwSetWindowShouldClose(window, true);
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        camera.ProcessKeyboard(FORWARD, deltaTime);
}

// glfw: whenever the window size changed (by OS or user resize) this callback function executes
// ---------------------------------------------------------------------------------------------
void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
    // make sure the viewport matches the new window dimensions; note that width and
    // height will be significantly larger than specified on retina displays.
    glViewport(0, 0, width, height);
}

// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    if (firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }
    float xoffset = xpos - lastX;
    float yoffset = lastY - ypos; // reversed since y-coordinates go from bottom to top
    lastX = xpos;
    lastY = ypos;
    camera.ProcessMouseMovement(xoffset, yoffset);
}

// glfw: whenever the mouse scroll wheel scrolls, this callback is called
// --------------------------------------------------------------------
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
    camera.ProcessMouseScroll(yoffset);
}
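
For these callbacks to actually run, they have to be registered with the window. A minimal sketch of the registration step, assuming the usual GLFW setup (window creation and GLAD loading omitted):

glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
glfwSetCursorPosCallback(window, mouse_callback);
glfwSetScrollCallback(window, scroll_callback);
// capture the cursor so that mouse movement drives the camera
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);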

The camera class implementation:

#ifndef CAMERA_H
#define CAMERA_H
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <vector>
// Defines several possible options for camera movement. Used as abstraction to stay away from window-system specific input methods
enum Camera_Movement {
FORWARD,
BACKWARD,
LEFT,
RIGHT
};
// Default camera values
const float YAW = -90.0f;
const float PITCH = 0.0f;
const float SPEED = 2.5f;
const float SENSITIVITY = 0.1f;
const float ZOOM = 45.0f;
// An abstract camera class that processes input and calculates the corresponding Euler Angles, Vectors and Matrices for use in OpenGL
class Camera
{
public:
// Camera Attributes
glm::vec3 Position;
glm::vec3 Front;
glm::vec3 Up;
glm::vec3 Right;
glm::vec3 WorldUp;
// Euler Angles
float Yaw;
float Pitch;
// Camera options
float MovementSpeed;
float MouseSensitivity;
float Zoom;

// Constructor with vectors
Camera(glm::vec3 position = glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f), float yaw = YAW, float pitch = PITCH) : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVITY), Zoom(ZOOM)
{
Position = position;
WorldUp = up;
Yaw = yaw;
Pitch = pitch;
updateCameraVectors();
}
// Constructor with scalar values
Camera(float posX, float posY, float posZ, float upX, float upY, float upZ, float yaw, float pitch) : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVITY), Zoom(ZOOM)
{
Position = glm::vec3(posX, posY, posZ);
WorldUp = glm::vec3(upX, upY, upZ);
Yaw = yaw;
Pitch = pitch;
updateCameraVectors();
}

// Returns the view matrix calculated using Euler Angles and the LookAt Matrix
glm::mat4 GetViewMatrix()
{return glm::lookAt(Position, Position + Front, Up);}

// Processes input received from any keyboard-like input system. Accepts input parameter in the form of camera defined ENUM (to abstract it from windowing systems)
void ProcessKeyboard(Camera_Movement direction, float deltaTime)
{
float velocity = MovementSpeed * deltaTime;
if (direction == FORWARD)
Position += Front * velocity;
}

// Processes input received from a mouse input system. Expects the offset value in both the x and y direction.
void ProcessMouseMovement(float xoffset, float yoffset, GLboolean constrainPitch = true)
{
xoffset *= MouseSensitivity;
yoffset *= MouseSensitivity;
Yaw += xoffset;
Pitch += yoffset;
// Make sure that when pitch is out of bounds, screen doesn't get flipped
if (constrainPitch)
{
if (Pitch > 89.0f)
Pitch = 89.0f;
if (Pitch < -89.0f)
Pitch = -89.0f;
}
// Update Front, Right and Up Vectors using the updated Euler angles
updateCameraVectors();
}
// Processes input received from a mouse scroll-wheel event. Only requires input on the vertical wheel-axis
void ProcessMouseScroll(float yoffset)
{
if (Zoom >= 1.0f && Zoom <= 45.0f)
Zoom -= yoffset;
if (Zoom <= 1.0f)
Zoom = 1.0f;
if (Zoom >= 45.0f)
Zoom = 45.0f;
}
private:
// Calculates the front vector from the Camera's (updated) Euler Angles
void updateCameraVectors()
{
// Calculate the new Front vector
glm::vec3 front;
front.x = cos(glm::radians(Yaw)) * cos(glm::radians(Pitch));
front.y = sin(glm::radians(Pitch));
front.z = sin(glm::radians(Yaw)) * cos(glm::radians(Pitch));
Front = glm::normalize(front);
// Also re-calculate the Right and Up vector
Right = glm::normalize(glm::cross(Front, WorldUp)); // Normalize the vectors, because their length gets closer to 0 the more you look up or down which results in slower movement.
Up = glm::normalize(glm::cross(Right, Front));
}
};
#endif

lighting

colors

An object's color really defines which colors it can reflect. The light source is also given its own color, which defines the illumination. Simply put, the final color of the object is the component-wise product of the two:

glm::vec3 lightColor(0.33f, 0.42f, 0.18f);
glm::vec3 toyColor(1.0f, 0.5f, 0.31f);
glm::vec3 result = lightColor * toyColor; // = (0.33f, 0.21f, 0.06f);

The shader doing this multiplication is effectively computing the light's contribution to the object's color.
An example:
two shader programs, lightingShader used to draw the lit object and lampShader used to draw the lamp;
two VAOs, one for the lamp and one for the object;
one VBO, which really only stores the vertex data of a single cube.
Figure 004: the VAO/VBO flow.

A quick review of how VBO, VAO, and EBO are used:
unsigned int VBO, VAO, EBO;
glGenVertexArrays(1, &VAO); // create the VAO object
glGenBuffers(1, &VBO);      // create the buffer object
glGenBuffers(1, &EBO);

glBindVertexArray(VAO);
// Bind the VAO: any subsequent VBO/EBO bindings and glVertexAttribPointer or
// glEnableVertexAttribArray calls are recorded in the currently bound VAO.

glBindBuffer(GL_ARRAY_BUFFER, VBO);
// bind the VBO to OpenGL's GL_ARRAY_BUFFER target
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// feed the data into the VBO, i.e. into GL_ARRAY_BUFFER

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// tell OpenGL how to interpret the data region of the currently bound VBO
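
For the lamp/object example above, the second VAO can simply reuse the same VBO. A minimal sketch (the name lightVAO is an assumption; the attribute layout matches the cube data already uploaded):

unsigned int lightVAO;
glGenVertexArrays(1, &lightVAO);
glBindVertexArray(lightVAO);
// the VBO already holds the cube's vertex data, so just bind it again;
// only the attribute configuration is recorded per VAO
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);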

basic lighting

Phong lighting = combine(ambient, diffuse, specular): ambient light, diffuse reflection, and specular (highlight/mirror) reflection.

https://blog.csdn.net/u014767384/article/details/81810512

ambient light

Add a small constant (light) color to the final color of the object's fragments, so that even when there is no direct light source in the scene, the object still appears to receive some scattered light.

fragment shader:

It needs an ambient strength parameter and the ambient light color:

void main()
{
float ambientStrength = 0.1;
vec3 ambient = ambientStrength * lightColor;

vec3 result = ambient * objectColor;
FragColor = vec4(result, 1.0);
}

diffuse color

It needs the vertex normal, the light source position, and the fragment position (in order to compute the directed light ray).

Diffuse lighting makes fragments that face the light direction more directly receive more brightness from the light source.

vec3 norm = normalize(Normal);
vec3 lightDir = normalize(lightPos - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * lightColor;

vec3 result = (ambient + diffuse) * objectColor;
FragColor = vec4(result, 1.0);

specular color

Specular reflection additionally takes a specular strength and the viewer's viewing direction into account. We won't expand on it in detail; the computation is simple, the main thing is to be clear about which parameters are passed in and how they are combined.
The final shader:

#version 330 core
out vec4 FragColor;

in vec3 Normal;
in vec3 FragPos;

uniform vec3 lightPos;
uniform vec3 viewPos;
uniform vec3 lightColor;
uniform vec3 objectColor;

void main()
{
// ambient
float ambientStrength = 0.1;
vec3 ambient = ambientStrength * lightColor;
// diffuse
vec3 norm = normalize(Normal);
vec3 lightDir = normalize(lightPos - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * lightColor;
// specular
float specularStrength = 0.5;
vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
vec3 specular = specularStrength * spec * lightColor;

vec3 result = (ambient + diffuse + specular) * objectColor;
FragColor = vec4(result, 1.0);
}

materials

This still renders with the shader built from the three lighting components of the previous section, except that now we account for the fact that, even under the same light source, different materials make an object look different; so we define a material through per-object properties.
What problem does this create? Each component is reflected off the surface at full intensity. So we refine it once more and give the light its own ambient/diffuse/specular properties as well:

#version 330 core
out vec4 FragColor;
struct Material {
vec3 ambient;
vec3 diffuse;
vec3 specular;
float shininess;
};
struct Light {
vec3 position;
vec3 ambient;
vec3 diffuse;
vec3 specular;
};

in vec3 FragPos;
in vec3 Normal;

uniform vec3 viewPos;
uniform Material material;
uniform Light light;

void main()
{
// ambient
vec3 ambient = light.ambient * material.ambient;

// diffuse
vec3 norm = normalize(Normal);
vec3 lightDir = normalize(light.position - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = light.diffuse * (diff * material.diffuse);

// specular
vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
vec3 specular = light.specular * (spec * material.specular);

vec3 result = ambient + diffuse + specular;
FragColor = vec4(result, 1.0);
}
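
On the C++ side these struct uniforms are set field by field. A minimal sketch, assuming the Shader helper class used earlier also provides setVec3/setFloat (the values are only illustrative):

lightingShader.use();
// material properties
lightingShader.setVec3("material.ambient", 1.0f, 0.5f, 0.31f);
lightingShader.setVec3("material.diffuse", 1.0f, 0.5f, 0.31f);
lightingShader.setVec3("material.specular", 0.5f, 0.5f, 0.5f);
lightingShader.setFloat("material.shininess", 32.0f);
// light properties
lightingShader.setVec3("light.position", lightPos);
lightingShader.setVec3("light.ambient", 0.2f, 0.2f, 0.2f);
lightingShader.setVec3("light.diffuse", 0.5f, 0.5f, 0.5f);
lightingShader.setVec3("light.specular", 1.0f, 1.0f, 1.0f);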

lighting maps

A diffuse map is simply the texture color map.
The shader stays largely the same; what we need to do is make sure the texture color is handed to the shader. The core call is glUniform1i(glGetUniformLocation(ourShader.ID, "texture1"), 0), or, using the class we defined earlier, ourShader.setInt("texture2", 1);
The shader now looks like this:

#version 330 core
out vec4 FragColor;
struct Material {
sampler2D diffuse;
vec3 specular;
float shininess;};
struct Light {
vec3 position;
vec3 ambient;
vec3 diffuse;
vec3 specular;};

in vec3 FragPos;
in vec3 Normal;
in vec2 TexCoords;
uniform vec3 viewPos;
uniform Material material;
uniform Light light;
void main()
{
// ambient
vec3 ambient = light.ambient * texture(material.diffuse, TexCoords).rgb;
// diffuse
vec3 norm = normalize(Normal);
vec3 lightDir = normalize(light.position - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = light.diffuse * diff * texture(material.diffuse, TexCoords).rgb;
// specular
vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
vec3 specular = light.specular * (spec * material.specular);

vec3 result = ambient + diffuse + specular;
FragColor = vec4(result, 1.0);
}

See figure 005.

Again, there is still a problem: the diffuse map is right, but the specular highlight is not; on one object, wood and metal cannot have the same specular strength. So we go one step further and add a specular map, and use its pixel values to distinguish areas with different specular response. It is passed to the shader in the same way:

unsigned int diffuseMap = loadTexture(FileSystem::getPath("resources/textures/container2.png").c_str());
unsigned int specularMap = loadTexture(FileSystem::getPath("resources/textures/container2_specular.png").c_str());
// shader configuration
// --------------------
lightingShader.use();
lightingShader.setInt("material.diffuse", 0);
lightingShader.setInt("material.specular", 1);
///////// everything above this divider is the C++ side; the fragment shader follows
#version 330 core
out vec4 FragColor;
struct Material {
sampler2D diffuse;
sampler2D specular;
float shininess;};

struct Light {
vec3 position;
vec3 ambient;
vec3 diffuse;
vec3 specular;};

in vec3 FragPos;
in vec3 Normal;
in vec2 TexCoords;
uniform vec3 viewPos;
uniform Material material;
uniform Light light;
void main()
{
// ambient
vec3 ambient = light.ambient * texture(material.diffuse, TexCoords).rgb;
// diffuse
vec3 norm = normalize(Normal);
vec3 lightDir = normalize(light.position - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = light.diffuse * diff * texture(material.diffuse, TexCoords).rgb;

// specular
vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
vec3 specular = light.specular * spec * texture(material.specular, TexCoords).rgb;

vec3 result = ambient + diffuse + specular;
FragColor = vec4(result, 1.0);
}
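
In the render loop the two maps then have to be bound to the texture units configured above; that binding step looks like this:

// bind the diffuse map to texture unit 0 and the specular map to unit 1
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseMap);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, specularMap);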

Light casters

Several kinds of light casters; once you understand how each one is defined, you also understand how to evaluate it from that data in the shader.

  • directional light
    one direction: vec3 direction
  • point light
    one position: vec3 position
    three parameters, constant, linear and quadratic, which describe how the light attenuates with distance:

    float distance    = length(light.position - FragPos);
    float attenuation = 1.0 / (light.constant + light.linear * distance + light.quadratic * (distance * distance));
  • spotlight
    three parameters: position, direction and a range angle (cutoff).
    For each fragment we calculate if the fragment is between the spotlight's cutoff directions (thus in its cone) and if so, we light the fragment accordingly.

  • flashlight
    just a spotlight, except it keeps following the viewer
  • soft edges
    simply define two cutoff angles: fully lit inside the inner one, dark outside the outer one, and interpolate between the two (see the sketch right after this list)
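
A minimal GLSL sketch of that soft-edge interpolation, assuming light.cutOff and light.outerCutOff store the cosines of the inner and outer cone angles:

// angle between the light-to-fragment direction and the spotlight direction
float theta     = dot(lightDir, normalize(-light.direction));
float epsilon   = light.cutOff - light.outerCutOff;
// 1.0 inside the inner cone, 0.0 outside the outer cone, smooth in between
float intensity = clamp((theta - light.outerCutOff) / epsilon, 0.0, 1.0);
diffuse  *= intensity;
specular *= intensity;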

Implementing the light types

Build a scene with 6 light sources.
On both the shader and the C++ side, tidy up the previously rather tangled code into a handful of reusable functions. Taken further, this could be wrapped into a library you call yourself, and growing it into a small engine of your own is not out of the question either.
Note that some of the inputs the fragment shader needs, the data tied to the vertices, are taken from the vertex shader:
// the vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoords;

out vec3 FragPos;
out vec3 Normal;
out vec2 TexCoords;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
FragPos = vec3(model * vec4(aPos, 1.0));
Normal = mat3(transpose(inverse(model))) * aNormal;
TexCoords = aTexCoords;

gl_Position = projection * view * vec4(FragPos, 1.0);
}

This is the fragment shader; note the several kinds of data it declares:

#version 330 core
out vec4 FragColor;
struct Material {
sampler2D diffuse;
sampler2D specular;
float shininess;
};

struct DirLight {
vec3 direction;

vec3 ambient;
vec3 diffuse;
vec3 specular;
};
//...
in vec3 FragPos;
in vec3 Normal;
in vec2 TexCoords;

uniform vec3 viewPos;
uniform DirLight dirLight;
uniform PointLight pointLights[NR_POINT_LIGHTS];
uniform SpotLight spotLight;
uniform Material material;

// function prototypes
vec3 CalcDirLight(DirLight light, vec3 normal, vec3 viewDir);
vec3 CalcPointLight(PointLight light, vec3 normal, vec3 fragPos, vec3 viewDir);
vec3 CalcSpotLight(SpotLight light, vec3 normal, vec3 fragPos, vec3 viewDir);
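
The bodies of these functions (and the PointLight/SpotLight structs) are omitted above; as a sketch, main() then combines them in the usual one-contribution-per-light pattern, roughly like this:

void main()
{
    vec3 norm = normalize(Normal);
    vec3 viewDir = normalize(viewPos - FragPos);
    // phase 1: directional light
    vec3 result = CalcDirLight(dirLight, norm, viewDir);
    // phase 2: point lights
    for (int i = 0; i < NR_POINT_LIGHTS; i++)
        result += CalcPointLight(pointLights[i], norm, FragPos, viewDir);
    // phase 3: spotlight
    result += CalcSpotLight(spotLight, norm, FragPos, viewDir);
    FragColor = vec4(result, 1.0);
}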

Summary

Everything we have covered so far is the Phong lighting model: ambient light, diffuse reflection, and specular highlights.
By now you can see fairly completely what is needed to compute lighting; once these concepts are pinned down, there are no real difficulties left.

Basic concepts

A few points

1 vertex shader
Inputs:
vertex positions, supplied by the application;
normals, supplied by the application, or possibly from a normal map (commonly used in Unity);
texture coordinates, supplied by the application;

The vertex shader's inputs (the vertices and so on) are in the model's local coordinate system.
Outputs: gl_Position (the clip-space position), plus out variables such as FragPos that become the fragment shader's inputs.

// from these it can compute: the fragment position, the normal, the texture coordinates

2 fragment shader
Inputs:
fragment position / normal / texture coordinates, passed in automatically from the vertex shader;
material / light structs, supplied directly by the application;
uniforms, supplied directly by the application, and not dynamically assignable;

Kinds of variables

1 uniform
A uniform variable is one the external application passes to the (vertex and fragment) shaders, so the application assigns it through the glUniform*() family of functions. Inside the (vertex and fragment) shader program, a uniform behaves like a constant in C (const): the shader program cannot modify it.
It is located with glGetUniformLocation.
In other words, it is a value the CPU-side program hands to the shaders (VS/FS); it behaves more like a constant, which the shaders can only read, not change.
If a uniform is declared identically in the VS and the FS, it is shared by both, like a common global variable.
Uniforms are typically used for matrices, materials, lighting parameters, color information, and so on, and are assigned via glUniform*:

uniform mat4 viewProjMatrix; // projection * view matrix
uniform mat4 viewMatrix;     // view matrix
uniform vec3 lightPosition;  // light source position

// texture samplers
uniform sampler2D texture1;
uniform sampler2D texture2;
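
On the application side, assigning such a uniform looks roughly like this (a minimal sketch; shaderProgram and lightPos are assumed to exist):

// look up the uniform's location in the linked program, then set it
int lightPosLoc = glGetUniformLocation(shaderProgram, "lightPosition");
glUseProgram(shaderProgram);
glUniform3f(lightPosLoc, lightPos.x, lightPos.y, lightPos.z);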

Sampler
A special kind of uniform, generally used for texture sampling.

2 attribute
An attribute variable can only be used in the vertex shader (it cannot be declared in the fragment shader, nor used there).
Attribute variables are generally used for per-vertex data such as vertex positions, normals, texture coordinates, and vertex colors.
In the application, glBindAttribLocation() is typically used to bind the location of each attribute variable, and glVertexAttribPointer() to supply each attribute variable with its data.

uniform mat4 u_matViewProjection;
attribute vec4 a_position;
attribute vec2 a_texCoord0;
varying vec2 v_texCoord;
void main(void)
{
gl_Position = u_matViewProjection * a_position;
v_texCoord = a_texCoord0;
}

3 varying

A varying variable is used to pass data between the vertex and fragment shaders. Typically the vertex shader writes the varying variable and the fragment shader then reads it, so the declaration of a varying must be identical in both shaders. The application cannot access this variable.
// Vertex shader
uniform mat4 u_matViewProjection;
attribute vec4 a_position;
attribute vec2 a_texCoord0;
varying vec2 v_texCoord; // Varying in vertex shader
void main(void)
{
gl_Position = u_matViewProjection * a_position;
v_texCoord = a_texCoord0;
}

// Fragment shader
precision mediump float;
varying vec2 v_texCoord; // Varying in fragment shader
uniform sampler2D s_baseMap;
uniform sampler2D s_lightMap;
void main()
{
vec4 baseColor;
vec4 lightColor;
baseColor = texture2D(s_baseMap, v_texCoord);
lightColor = texture2D(s_lightMap, v_texCoord);
gl_FragColor = baseColor * (lightColor + 0.25);
}