Official OGRE Shadow Mapping Documentation


Reposted from:

Shadow Mapping in Ogre
Hamilton Chong
Aug 2006


3.3 Projective versus Perspective Aliasing
The terms perspective and projective aliasing appeared in the Perspective Shadow Maps
[8] paper and have since been used extensively by those who work on improving shadow
heuristics. Often it is claimed that methods ameliorate perspective aliasing while projective
aliasing is either unavoidable or must be addressed via completely separate
means. However, the distinction between the two is somewhat artificial. Both result
from not allocating enough shadow map samples to regions that matter to the viewer.
As the Plane Optimal algorithm demonstrates, it is possible to completely remove projective
aliasing (as well as perspective aliasing) in certain scenes. In general, there
should be one combined measure of aliasing and algorithms must minimize this quantity.
See [2] for a unified notion of aliasing.
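The idea of a single combined measure can be made concrete with a toy calculation. The helper below is an illustrative sketch (the ratio formulation comes from the general aliasing literature, not from an equation in this document): compare the world-space extent covered by one shadow-map texel with the extent resolved by one screen pixel at the same surface point.

```cpp
#include <cassert>

// Illustrative only: one shadow-map texel covers worldPerShadowTexel
// units of surface; one screen pixel resolves worldPerScreenPixel units
// at the same point. A ratio above 1 means the shadow map under-samples
// that region. Whether the cause would be labeled "perspective" or
// "projective" aliasing, the deficit is the same quantity.
double aliasingRatio(double worldPerShadowTexel, double worldPerScreenPixel) {
    return worldPerShadowTexel / worldPerScreenPixel;
}
```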
4 Implementation
Ogre provides a powerful framework that allows us to do a lot of shadow map customization.
In Ogre, we turn on custom shadow mapping through the scene manager
(here, sceneMgr). It is recommended that this happen early as it may affect how certain
resources are loaded.

// Use Ogre's custom shadow mapping ability
sceneMgr->setShadowTexturePixelFormat(PF_FLOAT32_R);
sceneMgr->setShadowTechnique( SHADOWTYPE_TEXTURE_ADDITIVE );
sceneMgr->setShadowTextureCasterMaterial("CustomShadows/ShadowCaster");
sceneMgr->setShadowTextureReceiverMaterial("CustomShadows/ShadowReceiver");
sceneMgr->setShadowTextureSelfShadow(true);
sceneMgr->setShadowTextureSize(512);


 

The setShadowTechnique call is all that is required for Ogre's default shadow mapping.
In the code above, we have told Ogre to use the R channel of a floating point
texture to store depth values. This tends to be a very portable method (across graphics
cards and APIs). The sample uses Ogre's default of 512x512 shadow maps.
Self-shadowing is turned on, but be warned that this will only work properly if appropriate
depth biasing is also used. The example code will manually account for depth
biasing via the method described above in section 1.2. The shadow caster and shadow
receiver materials are defined in a materials script. They tell Ogre which shaders to use
when rendering shadow casters into the shadow map and rendering shadow receivers
during shadow determination.
The CustomShadows.material material script is given below:

// Shadow Caster __________________________________________________
vertex_program CustomShadows/ShadowCasterVP/Cg cg
{
source customshadowcastervp.cg
entry_point main
profiles arbvp1 vs_2_0
default_params
{
param_named_auto uModelViewProjection worldviewproj_matrix
}
}
fragment_program CustomShadows/ShadowCasterFP/Cg cg
{
source customshadowcasterfp.cg
entry_point main
profiles arbfp1 ps_2_0
default_params
{
param_named uDepthOffset float 1.0
param_named uSTexWidth float 512.0
param_named uSTexHeight float 512.0
param_named_auto uInvModelViewProjection inverse_worldviewproj_matrix
param_named_auto uProjection projection_matrix
}
}
vertex_program CustomShadows/ShadowCasterVP/GLSL glsl
{
source customshadowcastervp.vert
default_params
{
param_named_auto uModelViewProjection worldviewproj_matrix
}
}
fragment_program CustomShadows/ShadowCasterFP/GLSL glsl
{
source customshadowcasterfp.frag
default_params
{
param_named uDepthOffset float 1.0
param_named uSTexWidth float 512.0
param_named uSTexHeight float 512.0
param_named_auto uInvModelViewProjection inverse_worldviewproj_matrix
param_named_auto uProjection projection_matrix
}
}
vertex_program CustomShadows/ShadowCasterVP/HLSL hlsl
{
source customshadowcastervp.hlsl
entry_point main
target vs_2_0
default_params
{
param_named_auto uModelViewProjection worldviewproj_matrix
}
}
fragment_program CustomShadows/ShadowCasterFP/HLSL hlsl
{
source customshadowcasterfp.hlsl
entry_point main
target ps_2_0
default_params
{
param_named uDepthOffset float 1.0
param_named uSTexWidth float 512.0
param_named uSTexHeight float 512.0
param_named_auto uInvModelViewProjection inverse_worldviewproj_matrix
param_named_auto uProjection projection_matrix
}
}
material CustomShadows/ShadowCaster
{
technique glsl
{
// Z-write only pass
pass Z-write
{
vertex_program_ref CustomShadows/ShadowCasterVP/GLSL
{
}
fragment_program_ref CustomShadows/ShadowCasterFP/GLSL
{
}
}
}
technique hlsl
{
// Z-write only pass
pass Z-write
{
//Instead of using depth_bias, we'll be implementing it manually
vertex_program_ref CustomShadows/ShadowCasterVP/HLSL
{
}
fragment_program_ref CustomShadows/ShadowCasterFP/HLSL
{
}
}
}
technique cg
{
// Z-write only pass
pass Z-write
{
//Instead of using depth_bias, we'll be implementing it manually
vertex_program_ref CustomShadows/ShadowCasterVP/Cg
{
}
fragment_program_ref CustomShadows/ShadowCasterFP/Cg
{
}
}
}
}
// Shadow Receiver ________________________________________________
vertex_program CustomShadows/ShadowReceiverVP/Cg cg
{
source customshadowreceivervp.cg
entry_point main
profiles arbvp1 vs_2_0
default_params
{
param_named_auto uModelViewProjection worldviewproj_matrix
param_named_auto uLightPosition light_position_object_space 0
param_named_auto uModel world_matrix
param_named_auto uTextureViewProjection texture_viewproj_matrix
}
}
fragment_program CustomShadows/ShadowReceiverFP/Cg cg
{
source customshadowreceiverfp.cg
entry_point main
profiles arbfp1 ps_2_x
default_params
{
param_named uSTexWidth float 512.0
param_named uSTexHeight float 512.0
}
}
vertex_program CustomShadows/ShadowReceiverVP/GLSL glsl
{
source customshadowreceiver.vert
default_params
{
param_named_auto uModelViewProjection worldviewproj_matrix
param_named_auto uLightPosition light_position_object_space 0
param_named_auto uModel world_matrix
param_named_auto uTextureViewProjection texture_viewproj_matrix
}
}
fragment_program CustomShadows/ShadowReceiverFP/GLSL glsl
{
source customshadowreceiver.frag
default_params
{
param_named uSTexWidth float 512.0
param_named uSTexHeight float 512.0
}
}
vertex_program CustomShadows/ShadowReceiverVP/HLSL hlsl
{
source customshadowreceivervp.hlsl
entry_point main
target vs_2_0
default_params
{
param_named_auto uModelViewProjection worldviewproj_matrix
param_named_auto uLightPosition light_position_object_space 0
param_named_auto uModel world_matrix
param_named_auto uTextureViewProjection texture_viewproj_matrix
}
}
fragment_program CustomShadows/ShadowReceiverFP/HLSL hlsl
{
source customshadowreceiverfp.hlsl
entry_point main
target ps_3_0
default_params
{
param_named uSTexWidth float 512.0
param_named uSTexHeight float 512.0
}
}
material CustomShadows/ShadowReceiver
{
technique glsl
{
pass lighting
{
vertex_program_ref CustomShadows/ShadowReceiverVP/GLSL
{
}
fragment_program_ref CustomShadows/ShadowReceiverFP/GLSL
{
param_named uShadowMap int 0
}
texture_unit ShadowMap
{
tex_address_mode clamp
filtering none
}
}
}
technique hlsl
{
pass lighting
{
vertex_program_ref CustomShadows/ShadowReceiverVP/HLSL
{
}
fragment_program_ref CustomShadows/ShadowReceiverFP/HLSL
{
}
// we won't rely on hardware-specific filtering of z-tests
texture_unit ShadowMap
{
tex_address_mode clamp
filtering none
}
}
}
technique cg
{
pass lighting
{
vertex_program_ref CustomShadows/ShadowReceiverVP/Cg
{
}
fragment_program_ref CustomShadows/ShadowReceiverFP/Cg
{
}
// we won't rely on hardware-specific filtering of z-tests
texture_unit ShadowMap
{
tex_address_mode clamp
filtering none
}
}
}
}
Three techniques are presented, one for GLSL, one for HLSL, and one for Cg.
We'll present the GLSL code below. Note that while most of the shader files are direct
translations of each other, DirectX HLSL shaders must handle percentage closest
filtering slightly differently from OpenGL. OpenGL chooses the convention of having
integers index sample centers whereas DirectX chooses integers to index sample corners.
Also note the variable names in the shaders presented below are slightly different
from those presented earlier in this document. This is due in part to the awkwardness
of expressing subscripts in variable names and also in part because u3 is less evocative
of depth than z, etc. With minimal effort one can match the shader equations with
those presented earlier. The code is presented here mostly to demonstrate how things
fit together.
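The half-texel difference between the two conventions just described can be sketched on the CPU (an illustrative helper, not part of Ogre or the shaders):

```cpp
#include <cassert>
#include <cmath>

// OpenGL convention: integer texel coordinates index sample centers,
// so the center of texel i lies at normalized coordinate (i + 0.5)/size.
double texelCoordGL(double s, double texSize) {
    return s * texSize - 0.5;
}

// DirectX 9 convention: integer texel coordinates index sample corners.
double texelCoordDX(double s, double texSize) {
    return s * texSize;
}
```

For a 512-wide map, normalized coordinate 0.5/512 lands exactly on texel 0 under the OpenGL convention; under the DirectX convention the same lookup sits half a texel away, which is why the HLSL receiver must handle the lookup slightly differently, as noted above.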
//////////////////////////////////////////////////////////////////
//
// shadowcastervp.vert
//
// This is an example vertex shader for shadow caster objects.
//
//////////////////////////////////////////////////////////////////
// I N P U T V A R I A B L E S /////////////////////////////////
uniform mat4 uModelViewProjection; // modelview projection matrix
// O U T P U T V A R I A B L E S ///////////////////////////////
varying vec4 pPosition; // post projection position coordinates
varying vec4 pNormal; // normal in object space (to be interpolated)
varying vec4 pModelPos; // position in object space (to be interpolated)
// M A I N ///////////////////////////////////////////////////////
void main()
{
// Transform vertex position into post-projective (homogeneous screen) space.
gl_Position = uModelViewProjection * gl_Vertex;
pPosition = uModelViewProjection * gl_Vertex;
// copy over data to interpolate using perspective correct interpolation
pNormal = vec4(gl_Normal.x, gl_Normal.y, gl_Normal.z, 0.0);
pModelPos = gl_Vertex;
}
This is a pretty standard vertex shader.
/////////////////////////////////////////////////////////////////////////////////
//
// shadowcasterfp.frag
//
// This is an example fragment shader for shadow caster objects.
//
/////////////////////////////////////////////////////////////////////////////////
// I N P U T V A R I A B L E S ////////////////////////////////////////////////
// uniform constants
uniform float uDepthOffset; // offset amount (constant in eye space)
uniform float uSTexWidth; // shadow map texture width
uniform float uSTexHeight; // shadow map texture height
uniform mat4 uInvModelViewProjection;// inverse model-view-projection matrix
uniform mat4 uProjection; // projection matrix
// per fragment inputs
varying vec4 pPosition; // position of fragment (in homogeneous coordinates)
varying vec4 pNormal; // un-normalized normal in object space
varying vec4 pModelPos; // coordinates of model in object space at this point
// M A I N //////////////////////////////////////////////////////////////////////
void main(void)
{
// compute the "normalized device coordinates" (no viewport applied yet)
vec4 postProj = pPosition / pPosition.w;
// get the normalized normal of the geometry seen at this point
vec4 normal = normalize(pNormal);
// -- Computing Depth Bias Quantities -----------------------------
// We want to compute the "depth slope" of the polygon.
// This is the change in z value that accompanies a change in x or y on screen
// such that the coordinates stay on the triangle.
// The depth slope, dzlen below, is a measure of the uncertainty in our z value
// Roughly, these equations come from re-arrangement of the product rule:
// d(uq) = d(u)q + u d(q) --> d(u) = 1/q * (d(uq) - u d(q))
vec4 duqdx = uInvModelViewProjection * vec4(1.0/uSTexWidth,0.0,0.0,0.0);
vec4 dudx = pPosition.w * (duqdx - (pModelPos * duqdx.w));
vec4 duqdy = uInvModelViewProjection * vec4(0.0,1.0/uSTexHeight,0.0,0.0);
vec4 dudy = pPosition.w * (duqdy - (pModelPos * duqdy.w));
vec4 duqdz = uInvModelViewProjection * vec4(0.0,0.0,1.0,0.0);
vec4 dudz = pPosition.w * (duqdz - (pModelPos * duqdz.w));
// The next relations come from the requirement dot(normal, displacement) = 0
float denom = 1.0 / dot(normal.xyz, dudz.xyz);
vec2 dz = - vec2( dot(normal.xyz, dudx.xyz) * denom ,
dot(normal.xyz, dudy.xyz) * denom );
float dzlen = max(abs(dz.x), abs(dz.y));
// We now compute the change in z that would signify a push in the z direction
// by 1 unit in eye space. Note that eye space z is related in a nonlinear way to
// screen space z, so this is not just a constant.
// ddepth below is how much screen space z at this point would change for that push.
// NOTE: computation of ddepth likely differs from OpenGL's glPolygonOffset "unit"
// computation, which is allowed to be vendor specific.
vec4 dpwdz = uProjection * vec4(0.0, 0.0, 1.0, 0.0);
vec4 dpdz = (dpwdz - (postProj * dpwdz.w)) / pPosition.w;
float ddepth = abs(dpdz.z);
// -- End depth bias helper section --------------------------------
// We now compute the depth of the fragment. This is the actual depth value plus
// our depth bias. The depth bias depends on how uncertain we are about the z value
// plus some constant push in the z direction. The exact coefficients to use are
// up to you, but at least it should be somewhat intuitive now what the tradeoffs are.
float depthval = postProj.z + (0.5 * dzlen) + (uDepthOffset * ddepth);
depthval = (0.5 * depthval) + 0.5; // put into [0,1] range instead of [-1,1]
gl_FragColor = vec4(depthval, depthval, depthval, 0.0);
}
This shader computes the two depth bias pieces described in section 1.2. These are
used to offset the stored depth value. This is where the notation differs from above, but
the translation is quite straightforward.
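The final combination at the end of the caster shader reduces to a small scalar formula. The helper below is an illustrative CPU-side restatement of those last two lines (the function and parameter names are ours):

```cpp
#include <cassert>

// Combine the slope-based uncertainty term (0.5 * dzlen) with the
// constant eye-space push (uDepthOffset * ddepth), then remap the
// biased NDC depth from [-1,1] into the [0,1] storage range.
double biasedStoredDepth(double ndcZ, double dzlen,
                         double ddepth, double depthOffset) {
    double depthval = ndcZ + 0.5 * dzlen + depthOffset * ddepth;
    return 0.5 * depthval + 0.5;
}
```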
//////////////////////////////////////////////////////////////////
//
// shadowreceiver.vert
//
//////////////////////////////////////////////////////////////////
// I N P U T V A R I A B L E S /////////////////////////////////
uniform mat4 uModelViewProjection; // modelview projection matrix
uniform mat4 uModel; // model matrix
uniform mat4 uTextureViewProjection; // shadow map's view projection matrix
uniform vec4 uLightPosition; // light position in object space
// O U T P U T V A R I A B L E S ///////////////////////////////
varying vec4 pShadowCoord; // vertex position in shadow map coordinates
varying float pDiffuse; // diffuse shading value
// M A I N ///////////////////////////////////////////////////////
void main()
{
// compute diffuse shading
vec3 lightDirection = normalize(uLightPosition.xyz - gl_Vertex.xyz);
pDiffuse = dot(gl_Normal.xyz, lightDirection);
// compute shadow map lookup coordinates
pShadowCoord = uTextureViewProjection * (uModel * gl_Vertex);
// compute vertex's homogeneous screen-space coordinates
// Use the following line instead if other passes use shaders:
//gl_Position = uModelViewProjection * gl_Vertex;
gl_Position = ftransform(); // matches the fixed function pipeline exactly
}
This is a pretty standard vertex shader as well. The ftransform() function guarantees
the output matches the fixed function pipeline. If the objects you render use shaders
instead of fixed function, then you should do so here as well.
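The receiver's per-vertex diffuse term is just the dot product of the normal with the normalized direction toward the light, both in object space. A scalar sketch (the Vec3 type and helper are ours, for illustration):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Mirrors pDiffuse = dot(gl_Normal, normalize(uLightPosition - gl_Vertex)).
double diffuseTerm(const Vec3& n, const Vec3& lightPos, const Vec3& vertPos) {
    Vec3 l{ lightPos.x - vertPos.x, lightPos.y - vertPos.y, lightPos.z - vertPos.z };
    double len = std::sqrt(l.x * l.x + l.y * l.y + l.z * l.z);
    return (n.x * l.x + n.y * l.y + n.z * l.z) / len;
}
```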
/////////////////////////////////////////////////////////////////////////////////
//
// shadowreceiver.frag
//
/////////////////////////////////////////////////////////////////////////////////
// I N P U T V A R I A B L E S ////////////////////////////////////////////////
// uniform constants
uniform sampler2D uShadowMap;
uniform float uSTexWidth;
uniform float uSTexHeight;
// per fragment inputs
varying vec4 pShadowCoord; // vertex position in shadow map coordinates
varying float pDiffuse; // diffuse shading value
// M A I N //////////////////////////////////////////////////////////////////////
void main(void)
{
// compute the shadow coordinates for texture lookup
// NOTE: texture_viewproj_matrix maps z into [0,1] range, not [-1,1], so
// have to make sure shadow caster stores depth values with same convention.
vec4 scoord = pShadowCoord / pShadowCoord.w;
// -- "Percentage Closest Filtering" -----------------------------------------
// One could use scoord.xy to look up the shadow map for depth testing, but
// we'll be implementing a simple "percentage closest filtering" algorithm instead.
// This mimics the behavior of turning on bilinear filtering on NVIDIA hardware
// when also performing shadow comparisons. This causes bilinear filtering of
// depth tests. Note that this is NOT the same as bilinear filtering the depth
// values and then doing the depth comparison. The two operations are not
// commutative. PCF is explicitly about filtering the test values since
// testing filtered z values is often meaningless.
// Real percentage closest filtering should sample from the entire footprint
// on the shadow map, not just seek the closest four sample points. Such
// an improvement is for future work.
// NOTE: Assuming OpenGL convention for texture lookups with integers in centers.
// DX convention is to have integers mark sample corners
vec2 tcoord;
tcoord.x = (scoord.x * uSTexWidth) - 0.5;
tcoord.y = (scoord.y * uSTexHeight) - 0.5;
float x0 = floor(tcoord.x);
float x1 = ceil(tcoord.x);
float fracx = fract(tcoord.x);
float y0 = floor(tcoord.y);
float y1 = ceil(tcoord.y);
float fracy = fract(tcoord.y);
// sample coordinates in [0,1]^2 domain
vec2 t00, t01, t10, t11;
float invWidth = 1.0 / uSTexWidth;
float invHeight = 1.0 / uSTexHeight;
t00 = vec2((x0+0.5) * invWidth, (y0+0.5) * invHeight);
t10 = vec2((x1+0.5) * invWidth, (y0+0.5) * invHeight);
t01 = vec2((x0+0.5) * invWidth, (y1+0.5) * invHeight);
t11 = vec2((x1+0.5) * invWidth, (y1+0.5) * invHeight);
// grab the samples
float z00 = texture2D(uShadowMap, t00).x;
float viz00 = (z00 <= scoord.z) ? 0.0 : 1.0;
float z01 = texture2D(uShadowMap, t01).x;
float viz01 = (z01 <= scoord.z) ? 0.0 : 1.0;
float z10 = texture2D(uShadowMap, t10).x;
float viz10 = (z10 <= scoord.z) ? 0.0 : 1.0;
float z11 = texture2D(uShadowMap, t11).x;
float viz11 = (z11 <= scoord.z) ? 0.0 : 1.0;
// determine that all geometry outside the shadow test frustum is lit
viz00 = ((abs(t00.x - 0.5) <= 0.5) && (abs(t00.y - 0.5) <= 0.5)) ? viz00 : 1.0;
viz01 = ((abs(t01.x - 0.5) <= 0.5) && (abs(t01.y - 0.5) <= 0.5)) ? viz01 : 1.0;
viz10 = ((abs(t10.x - 0.5) <= 0.5) && (abs(t10.y - 0.5) <= 0.5)) ? viz10 : 1.0;
viz11 = ((abs(t11.x - 0.5) <= 0.5) && (abs(t11.y - 0.5) <= 0.5)) ? viz11 : 1.0;
// bilinear filter test results
float v0 = (1.0 - fracx) * viz00 + fracx * viz10;
float v1 = (1.0 - fracx) * viz01 + fracx * viz11;
float visibility = (1.0 - fracy) * v0 + fracy * v1;
// ------------------------------------------------------------------------------
// Non-PCF code (comment out above section and uncomment the following three lines)
//float zvalue = texture2D(uShadowMap, scoord.xy).x;
//float visibility = (zvalue <= scoord.z) ? 0.0 : 1.0;
//visibility = ((abs(scoord.x - 0.5) <= 0.5) && (abs(scoord.y - 0.5) <= 0.5))
// ? visibility : 1.0;
// ------------------------------------------------------------------------------
visibility *= pDiffuse;
gl_FragColor = vec4(visibility, visibility, visibility, 0.0);
}


 
This file implements percentage closest filtering. To use unfiltered shadow mapping,
comment out the PCF block as noted and uncomment the non-PCF block. Note that
after doing this, the uSTexWidth and uSTexHeight variables are likely to be optimized
away, so you should comment out those variables in the materials script as well.
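Stripped of the texture lookups, the PCF combination is small enough to check on the CPU. The sketch below (an illustrative helper, not part of the sample) depth-tests each of the four nearest samples first and only then bilinearly blends the binary results: it filters the tests, not the depths.

```cpp
#include <cassert>

// Depth-test the four nearest shadow-map samples, then bilinearly blend
// the binary results. Blending the depths first and testing once would
// give a different (and generally meaningless) answer.
double pcfVisibility(double z00, double z10, double z01, double z11,
                     double fragDepth, double fracx, double fracy) {
    auto vis = [fragDepth](double z) { return (z <= fragDepth) ? 0.0 : 1.0; };
    double v0 = (1.0 - fracx) * vis(z00) + fracx * vis(z10);
    double v1 = (1.0 - fracx) * vis(z01) + fracx * vis(z11);
    return (1.0 - fracy) * v0 + fracy * v1;
}
```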
The following shows how to activate plane optimal shadow mapping given some
pointer to a MovablePlane and a pointer to a light.

PlaneOptimalShadowCameraSetup *planeOptShadowCamera =
    new PlaneOptimalShadowCameraSetup(movablePlane);
Entity *movablePlaneEntity = sceneMgr->createEntity("movablePlane", "plane.mesh");
SceneNode *movablePlaneNode =
    sceneMgr->getRootSceneNode()->createChildSceneNode("MovablePlaneNode");
movablePlaneNode->attachObject(movablePlaneEntity);
SharedPtr<ShadowCameraSetup> planeOptPtr(planeOptShadowCamera);
light->setCustomShadowCameraSetup(planeOptPtr);


 

References
[1] Hamilton Y. Chong and Steven J. Gortler. A lixel for every pixel. In Proceedings of the Eurographics Symposium on Rendering. Eurographics Association, 2004.
[2] Hamilton Y. Chong and Steven J. Gortler. Scene optimized shadow maps. Harvard Technical Report TR-11-06, 2006.
[3] William Donnelly and Andrew Lauritzen. Variance shadow maps. In SI3D '06: Proceedings of the 2006 symposium on Interactive 3D graphics and games, pages 161–165, New York, NY, USA, 2006. ACM Press.
[4] Randima Fernando, Sebastian Fernandez, Kavita Bala, and Donald P. Greenberg. Adaptive shadow maps. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 387–390, New York, NY, USA, 2001. ACM Press.
[5] Tom Lokovic and Eric Veach. Deep shadow maps. In SIGGRAPH '00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, New York, NY, USA, 2000. ACM Press.
[6] Tobias Martin and Tiow-Seng Tan. Anti-aliasing and continuity with trapezoidal shadow maps. In Proceedings of the Eurographics Symposium on Rendering, pages 153–160. Eurographics Association, 2004.
[7] William T. Reeves, David H. Salesin, and Robert L. Cook. Rendering antialiased shadows with depth maps. In SIGGRAPH '87: Proceedings of the 14th annual conference on Computer graphics and interactive techniques, pages 283–291, New York, NY, USA, 1987. ACM Press.
[8] Marc Stamminger and George Drettakis. Perspective shadow maps. In SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pages 557–562, New York, NY, USA, 2002. ACM Press.
[9] Lance Williams. Casting curved shadows on curved surfaces. In SIGGRAPH '78: Proceedings of the 5th annual conference on Computer graphics and interactive techniques, pages 270–274, New York, NY, USA, 1978. ACM Press.
