猿代码 — Research / AI Models / High-Performance Computing

Jacobi iteration cannot yet do the correction step on the same grid; it may also be that my code is wrong.

1) Original Jacobi iteration; it converges nicely.
init ok!
it= 100, real error=1.666565, matrix error=0.010373
it= 200, real error=1.339508, matrix error=0.004651
it= 300, real error=1.136802, matrix error=0.002839
it= 400, real error=0.993261, matrix error=0.001976
it= 500, real error=0.881123, matrix error=0.001483
it= 600, real error=0.788313, matrix error=0.001170
it= 700, real error=0.706898, matrix error=0.000962
it= 800, real error=0.633495, matrix error=0.000815
it= 900, real error=0.566826, matrix error=0.000705
it=1000, real error=0.506387, matrix error=0.000619
it=1100, real error=0.451818, matrix error=0.000546
it=1200, real error=0.402616, matrix error=0.000484
it=1300, real error=0.358500, matrix error=0.000429
it=1400, real error=0.319111, matrix error=0.000381
it=1500, real error=0.283842, matrix error=0.000338
it=1600, real error=0.252400, matrix error=0.000300
it=1700, real error=0.224579, matrix error=0.000266
it=1800, real error=0.199754, matrix error=0.000236
it=1900, real error=0.177632, matrix error=0.000210
it=2000, real error=0.157939, matrix error=0.000186
it=2100, real error=0.140438, matrix error=0.000165
it=2200, real error=0.124890, matrix error=0.000147
it=2300, real error=0.111116, matrix error=0.000131
it=2400, real error=0.098856, matrix error=0.000116
it=2500, real error=0.087946, matrix error=0.000103
it=2600, real error=0.078238, matrix error=0.000092
it=2700, real error=0.069601, matrix error=0.000082
it=2800, real error=0.061918, matrix error=0.000072
it=2900, real error=0.055083, matrix error=0.000064
it=3000, real error=0.049002, matrix error=0.000057
it=3100, real error=0.043594, matrix error=0.000051
it=3200, real error=0.038782, matrix error=0.000045
it=3300, real error=0.034502, matrix error=0.000040
it=3400, real error=0.030694, matrix error=0.000036
it=3500, real error=0.027307, matrix error=0.000032
it=3600, real error=0.024294, matrix error=0.000028
it=3700, real error=0.021614, matrix error=0.000025
it=3800, real error=0.019229, matrix error=0.000022
it=3900, real error=0.017107, matrix error=0.000020
it=4000, real error=0.015220, matrix error=0.000018
it=4100, real error=0.013541, matrix error=0.000016
it=4200, real error=0.012047, matrix error=0.000014
it=4300, real error=0.010718, matrix error=0.000013
it=4400, real error=0.009536, matrix error=0.000011
it=4500, real error=0.008484, matrix error=0.000010
it=4600, real error=0.007548, matrix error=0.000009
it=4700, real error=0.006716, matrix error=0.000008
it=4800, real error=0.005975, matrix error=0.000007
it=4900, real error=0.005316, matrix error=0.000006
it=5000, real error=0.004730, matrix error=0.000006
Time: 137.088s   global max error = 0.004730

2)
After adding what I believed to be the correction step, the run became slower and diverged.
init ok!
it= 100, real error=1.596056, matrix error=1.235165
it= 200, real error=1.207422, matrix error=1.235229
it= 300, real error=0.948430, matrix error=1.235239
it= 400, real error=0.752270, matrix error=1.235242
it= 500, real error=0.672308, matrix error=1.235243
it= 600, real error=0.672074, matrix error=1.235243
it= 700, real error=0.671922, matrix error=1.235244
it= 800, real error=0.671817, matrix error=1.235244
it= 900, real error=0.671741, matrix error=1.235244
it=1000, real error=0.671684, matrix error=1.235244
it=1100, real error=0.671639, matrix error=1.235244
it=1200, real error=0.671604, matrix error=1.235244
it=1300, real error=0.671575, matrix error=1.235244
it=1400, real error=0.671552, matrix error=1.235244
it=1500, real error=0.671532, matrix error=1.235244
it=1600, real error=0.671515, matrix error=1.235244
it=1700, real error=0.671501, matrix error=1.235244
it=1800, real error=0.671489, matrix error=1.235244
it=1900, real error=0.671479, matrix error=1.235244
it=2000, real error=0.671470, matrix error=1.235244
it=2100, real error=0.671463, matrix error=1.235244
it=2200, real error=0.671456, matrix error=1.235244
it=2300, real error=0.671450, matrix error=1.235244
it=2400, real error=0.671445, matrix error=1.235244
it=2500, real error=0.671441, matrix error=1.235244
it=2600, real error=0.682293, matrix error=1.235244
it=2700, real error=0.695402, matrix error=1.235244
it=2800, real error=0.707053, matrix error=1.235244
it=2900, real error=0.717408, matrix error=1.235244
it=3000, real error=0.726621, matrix error=1.235244
it=3100, real error=0.734912, matrix error=1.235244
it=3200, real error=0.742282, matrix error=1.235244
it=3300, real error=0.748835, matrix error=1.235244
it=3400, real error=0.754662, matrix error=1.235244
it=3500, real error=0.759842, matrix error=1.235244
it=3600, real error=0.764449, matrix error=1.235244
it=3700, real error=0.768545, matrix error=1.235244
it=3800, real error=0.772188, matrix error=1.235244
it=3900, real error=0.775427, matrix error=1.235244
it=4000, real error=0.778308, matrix error=1.235244
it=4100, real error=0.780869, matrix error=1.235244
it=4200, real error=0.783148, matrix error=1.235244
it=4300, real error=0.785174, matrix error=1.235244
it=4400, real error=0.786976, matrix error=1.235244
it=4500, real error=0.788578, matrix error=1.235244
it=4600, real error=0.790004, matrix error=1.235244
it=4700, real error=0.791272, matrix error=1.235244
it=4800, real error=0.792399, matrix error=1.235244
it=4900, real error=0.793402, matrix error=1.235244
it=5000, real error=0.794294, matrix error=1.235244
Time: 403.488s   global max error = 1.580396

for(i=1; i<=N-2; i++)
    for(j=1; j<=N-2; j++)
        for(k=1; k<=N-2; k++)
        {
            R1[i][j][k] = A[i][j][k] - B[i][j][k];
        }

3)
for(i=1; i<=N-2; i++)
    for(j=1; j<=N-2; j++)
        for(k=1; k<=N-2; k++)
        {
            //R1[i][j][k] = A[i][j][k] - B[i][j][k];
            R1[i][j][k] = (A[i-1][j][k] + A[i+1][j][k] + A[i][j-1][k] + A[i][j+1][k]
                         + A[i][j][k-1] + A[i][j][k+1] - 6*A[i][j][k] - F[i][j][k]*dxyz*dxyz);
        }

init ok!
it= 100, real error=1.116e+102, matrix error=2.225e+102
it= 200, real error=2.910e+212, matrix error=5.808e+212
it= 300, real error=nan, matrix error=nan
it= 400, real error=nan, matrix error=nan
It goes straight to NaN.

4)
for(i=1; i<=N-2; i++)
    for(j=1; j<=N-2; j++)
        for(k=1; k<=N-2; k++)
        {
            //R1[i][j][k] = A[i][j][k] - B[i][j][k];
            R1[i][j][k] = (A[i-1][j][k] + A[i+1][j][k] + A[i][j-1][k] + A[i][j+1][k]
                         + A[i][j][k-1] + A[i][j][k+1] - 6*A[i][j][k]) / (dxyz*dxyz) - F[i][j][k];
        }

It diverges even faster.

init ok!
it= 100, real error=nan, matrix error=nan
it= 200, real error=nan, matrix error=nan
it= 300, real error=nan, matrix error=nan
it= 400, real error=nan, matrix error=nan

5)
A proper multigrid method is probably still needed: coarsening can eliminate the low-frequency error components.

6)
Because multigrid effectively improves the efficiency of numerical computation and uses grids on different levels to eliminate the error, it is widely applied to fluid-dynamics problems, in particular the numerical solution of the Navier-Stokes equations. Many commercial packages now adopt a multigrid algorithm, for example Fluent, COMSOL, and STAR-CCM+.

Fluent contains two multigrid families: Algebraic Multigrid (AMG) and Full-Approximation Storage (FAS) multigrid. AMG is used by the implicit solver and FAS multigrid by the explicit solver; in practice the scalar equations are solved with AMG. FAS multigrid is also called geometric multigrid: it has to construct and store the coarse grids in order to obtain the "coarse" systems, whereas AMG builds the "coarse" systems without any geometry and without rediscretizing on a coarse grid.

7)

Multigrid is an iterative algorithm. Its basic idea is to solve the same problem on grids of different resolutions: the fine grid is responsible for eliminating the high-frequency error components, while the low-frequency components are eliminated on the coarse grids. Its three key ingredients are fine-grid smoothing, coarse-grid correction, and nested iteration. Smoothing sweeps on the fine grid remove the high-frequency error; the low-frequency error is then corrected on coarser grids; and nested iteration connects all levels through restriction and prolongation operators so that they jointly solve one problem. The organic combination of fine-grid smoothing and coarse-grid correction is what constitutes multigrid.

This explanation is well written!



Posted by the author on 2024-2-22 03:25