Some problems I found in Vitis AI 1.4.1 and Vitis AI 2.x with quantization and compilation step #997
Hi @vaan2010 , Can the floating-point and quantized models and code be provided? We need these files to analyze the problem. Regards
Hi @zhenzhen-AMD, The following attachment contains my model files: Floating point model: yolov4-tiny-float.h5 And I put the quantization code and my float model at the Xilinx Vitis AI 1.4 Lab location: Looking forward to your reply and solution. BR,
Hi @zhenzhen-AMD, Is there any progress on these problems with the quantization and compilation steps in Vitis AI 1.4.1 and Vitis AI 2.x?
Hi @zhenzhen-AMD , Thanks.
Hi @vaan2010 , Sorry for the late reply, I have been busy developing recently. This will be dealt with later. Best Regards,
Hi @vaan2010 , A new docker will be provided later. Please use the new docker. Thank you very much. Regards,
Hi @zhenzhen-AMD, Thanks for your reply! Looking forward to your reply! BR,
Hi @vaan2010 , Leaky ReLU has been supported since vai-1.4. With the version updates, we improved the support for it. It is recommended to use the latest release Vitis AI 3.0 docker to run your model. Get the latest docker reference documentation: If you still have problems using the latest 3.0 docker, please give us timely feedback, thank you.
Hi @zhenzhen-AMD,
That means using Leaky ReLU will cut the xmodel graph into multiple subgraphs, just like in this issue:
Hi @vaan2010 , Here I use the following code to delete the fix op between conv2d and leaky-relu manually. For detailed information about why the quantization team inserts the fix between conv2d and leaky-relu, @zhenzhen-AMD please provide more help. Many thanks.

import xir

g = xir.Graph.deserialize("quantized-yolov4-tiny.xmodel")
ops = g.toposort()

# Collect the fix ops whose (first) consumer is a leaky-relu op
fix_ops = [op for op in ops
           if op.get_type() == "fix"
           and op.get_fanout_num() >= 1
           and op.get_fanout_ops()[0].get_type() == "leaky-relu"]

# Re-wire each leaky-relu to read directly from the op that feeds the fix
for op in fix_ops:
    succ = op.get_fanout_ops()[0]
    succ.replace_input_ops(op, op.get_input_ops()["input"][0])

# Remove the now-dangling fix ops and save the modified graph
for op in fix_ops:
    g.remove_op(op)
g.serialize("quantized-yolov4-tiny_modify.xmodel")
Hi, I found quantization and compilation problems in Vitis AI 1.4.1 and Vitis AI 2.x respectively.
First, the Vitis AI 1.4.1 (or 1.4) part.
The environment and training-model sources I use are the following:
Note again that, because the reference Yolov4-tiny implementation contains the tf.split operator, which Vitis AI does not yet support, it has to be replaced with 1x1 convolutions before the model can be converted to an xmodel (a sketch of this replacement is shown below).
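A minimal Keras-style sketch of the kind of replacement I mean (illustrative only, not the exact code I trained with; the function name and the choice to map to half the channels are assumptions):

import tensorflow as tf
from tensorflow.keras import layers

def route_group_as_conv(x):
    # Original CSP route, not supported by the Vitis AI compiler:
    #   x = tf.split(x, num_or_size_splits=2, axis=-1)[1]
    # Hypothetical replacement: a trainable 1x1 conv mapping to half the channels,
    # so the model must be (re)trained with this layer in place.
    half_channels = x.shape[-1] // 2
    return layers.Conv2D(half_channels, kernel_size=1, use_bias=False)(x)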
After training the model, I use the following code to quantize it:
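(The exact script is not reproduced here. For reference, a minimal sketch of the Vitis AI TensorFlow2 post-training quantization flow; the input size, the dummy calibration data, and the output file name are placeholder assumptions:)

import numpy as np
import tensorflow as tf
from tensorflow_model_optimization.quantization.keras import vitis_quantize

# Load the float Keras model (file name matches the floating-point model shared above).
float_model = tf.keras.models.load_model("yolov4-tiny-float.h5", compile=False)

# Placeholder calibration batch: use a few hundred real preprocessed images instead.
calib_images = np.zeros((100, 416, 416, 3), dtype=np.float32)

quantizer = vitis_quantize.VitisQuantizer(float_model)
quantized_model = quantizer.quantize_model(calib_dataset=calib_images)
quantized_model.save("quantized-yolov4-tiny.h5")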
After quantizing the model, I use the following command to compile it:
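(The exact command is not shown here. A sketch of invoking the vai_c_tensorflow2 compiler, wrapped in a Python subprocess call; the quantized-model name, the KV260 arch.json path inside the Vitis AI docker, and the output names are assumptions:)

import subprocess

# Assumed paths: quantized model from the previous step, the KV260 DPU arch file
# shipped inside the Vitis AI docker, and an arbitrary output directory / net name.
subprocess.run([
    "vai_c_tensorflow2",
    "--model", "quantized-yolov4-tiny.h5",
    "--arch", "/opt/vitis_ai/compiler/arch/DPUCZDX8G/KV260/arch.json",
    "--output_dir", "compiled_model",
    "--net_name", "yolov4_tiny",
], check=True)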
The compilation result is as follows:
The compiled model in Netron is as follows:
It can be seen that Vitis AI 1.4.1 does not seem to support the Leaky ReLU operation, so the DPU computation is divided into many subgraphs during compilation, and the compiled xmodel cannot run on the KV260; the following error is displayed:
But I have previously compiled Leaky ReLU successfully and had the DPU form a single graph. The previous model looks like this:
You can see that Leaky ReLU is successfully fused into Conv2d and is supported by the DPU.
That conversion also used Vitis AI 1.4.1, so why does it fail now? Does the code in my quantization step have anything to do with it?
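To see how the compiler partitioned a given xmodel, the subgraphs and their assigned devices can be listed with xir; a small sketch (the file name is a placeholder):

import xir

# Load a compiled xmodel and list its child subgraphs with their target device.
graph = xir.Graph.deserialize("compiled_model/yolov4_tiny.xmodel")
root = graph.get_root_subgraph()
for sg in root.toposort_child_subgraph():
    device = sg.get_attr("device") if sg.has_attr("device") else "unknown"
    print(f"{sg.get_name():50s} device={device}")

# A model that maps cleanly to the DPU shows essentially one DPU subgraph,
# plus CPU subgraphs for input/output handling.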
So I tried using Vitis AI 2.x for quantization and compilation.
Unfortunately, Vitis AI 2.x has its own problems.
I referred to the solution from this GitHub:
But during the Vitis AI 2.x quantization process, the following error message keeps appearing:
I checked the overall model architecture and did not find the shape [14, 14, 256], and I also verified that there is nothing wrong with the Concat operation, so I think Vitis AI 2.x has a quantization bug for Concat.
Conclusion:
In the end I replaced Leaky ReLU with ReLU and retrained the model. Here is the picture of the retrained model after conversion to an xmodel:
It can run successfully on the KV260, but compared with Leaky ReLU, ReLU is obviously much slower in execution speed. Is this another bug in the DPU conversion process?
Attached below are the xmodel files I compiled in Vitis AI 1.4.x, one that runs successfully on the KV260 and one that fails:
Success: yolov4-tiny_success.xmodel
Fail: yolov4-tiny_fail.xmodel
I hope someone can give me some suggestions and solutions, thanks!
BR,
Norris