<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>KagariのBlog</title>
<link href="https://kagari306.github.io/atom.xml" rel="self"/>
<link href="https://kagari306.github.io/"/>
<updated>2025-01-17T07:17:45.003Z</updated>
<id>https://kagari306.github.io/</id>
<author>
<name>Kagari 306</name>
</author>
<generator uri="https://hexo.io/">Hexo</generator>
<entry>
<title>AttributeError: module 'tensorflow' has no attribute 'compat'</title>
<link href="https://kagari306.github.io/2025/01/17/AttributeError-module-tensorflow-has-no-attribute-compat/"/>
<id>https://kagari306.github.io/2025/01/17/AttributeError-module-tensorflow-has-no-attribute-compat/</id>
<published>2025-01-17T07:09:56.000Z</published>
<updated>2025-01-17T07:17:45.003Z</updated>
<content type="html"><![CDATA[<h1 id="AttributeError-module-tensorflow-has-no-attribute-compat">AttributeError: module 'tensorflow' has no attribute 'compat'</h1><h2 id="problem">Problem</h2><p>While running some code, the following error appeared:</p><pre><code class="python">**balabala**
AttributeError: module 'tensorflow' has no attribute 'compat'</code></pre><span id="more"></span><h2 id="solution">Solution</h2><h3 id="from-the-tensorflow-github-issue">From <a href="https://github.com/tensorflow/tensorflow/issues/40422">a tensorflow GitHub issue</a></h3><p>Quoting the issue:</p><blockquote><p>ISSUE FIXED for me</p><p>Indeed the command “conda install tensorflow-gpu==2.1.0” installed version 2.2.0 of tensorflow-estimator.<br>After “conda install tensorflow-estimator==2.1.0” everything works fine</p></blockquote>
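<p>In other words, the estimator package got ahead of the main package. A quick check before pinning it back (a sketch; the 2.1.0 version below assumes TensorFlow 2.1.0 is the installed release, as in the issue):</p><pre><code class="bash"># See which tensorflow / tensorflow-estimator versions the active conda env has
conda list | grep -Ei "tensorflow"

# Pin tensorflow-estimator to the same version as tensorflow itself
conda install tensorflow-estimator==2.1.0</code></pre>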
]]></content>
<summary type="html"><h1 id="AttributeError-module-tensorflow-has-no-attribute-compat">AttributeError: module 'tensorflow' has no attribute 'compat'</h1><h2 id="problem">Problem</h2><p>While running some code, the following error appeared:</p>
<pre><code class="python">**balabala**
AttributeError: module 'tensorflow' has no attribute 'compat'</code></pre></summary>
<category term="problem" scheme="https://kagari306.github.io/categories/problem/"/>
<category term="problem" scheme="https://kagari306.github.io/tags/problem/"/>
<category term="python" scheme="https://kagari306.github.io/tags/python/"/>
<category term="conda" scheme="https://kagari306.github.io/tags/conda/"/>
<category term="tensorflow" scheme="https://kagari306.github.io/tags/tensorflow/"/>
</entry>
<entry>
<title>wsl: Detected a localhost proxy configuration, but it is not mirrored into WSL. WSL in NAT mode does not support localhost proxies.</title>
<link href="https://kagari306.github.io/2025/01/14/wsl-%E6%A3%80%E6%B5%8B%E5%88%B0-localhost-%E4%BB%A3%E7%90%86%E9%85%8D%E7%BD%AE%EF%BC%8C%E4%BD%86%E6%9C%AA%E9%95%9C%E5%83%8F%E5%88%B0-WSL%E3%80%82NAT-%E6%A8%A1%E5%BC%8F%E4%B8%8B%E7%9A%84-WSL-%E4%B8%8D%E6%94%AF%E6%8C%81-localhost-%E4%BB%A3%E7%90%86%E3%80%82/"/>
<id>https://kagari306.github.io/2025/01/14/wsl-%E6%A3%80%E6%B5%8B%E5%88%B0-localhost-%E4%BB%A3%E7%90%86%E9%85%8D%E7%BD%AE%EF%BC%8C%E4%BD%86%E6%9C%AA%E9%95%9C%E5%83%8F%E5%88%B0-WSL%E3%80%82NAT-%E6%A8%A1%E5%BC%8F%E4%B8%8B%E7%9A%84-WSL-%E4%B8%8D%E6%94%AF%E6%8C%81-localhost-%E4%BB%A3%E7%90%86%E3%80%82/</id>
<published>2025-01-13T16:09:43.000Z</published>
<updated>2025-01-17T07:10:54.712Z</updated>
<content type="html"><![CDATA[<h1 id="wsl-localhost-proxy-not-mirrored">wsl: Detected a localhost proxy configuration, but it is not mirrored into WSL. WSL in NAT mode does not support localhost proxies.</h1><p>Right after setting up WSL, the following prompt appeared on startup (the original message, from a Chinese-locale Windows):</p><pre><code class="plaintext">wsl: 检测到 localhost 代理配置,但未镜像到 WSL。
NAT 模式下的 WSL 不支持 localhost 代理。</code></pre><span id="more"></span><p>A quick Google search turned up the following fix.</p><h2 id="solution">Solution</h2><p>On the Windows side, create a <code>.wslconfig</code> file under <code>C:\Users\<your_username></code> and put the following in it:</p><pre><code class="plaintext">[experimental]
autoMemoryReclaim=gradual
networkingMode=mirrored
dnsTunneling=true
firewall=true
autoProxy=true</code></pre><p>Then shut WSL down with <code>wsl --shutdown</code> and start it again; the warning is gone.</p>
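<p>For completeness, a minimal restart sequence run from a Windows terminal (the distro name below is an assumption; list yours first):</p><pre><code class="bash"># Stop all WSL instances so the new .wslconfig is read on next start
wsl --shutdown

# List installed distros and their state, then start yours again
wsl -l -v
wsl -d Ubuntu</code></pre>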
]]></content>
<summary type="html"><h1 id="wsl-localhost-proxy-not-mirrored">wsl: Detected a localhost proxy configuration, but it is not mirrored into WSL. WSL in NAT mode does not support localhost proxies.</h1><p>Right after setting up WSL, the following prompt appeared on startup (the original message, from a Chinese-locale Windows):</p>
<pre><code class="plaintext">wsl: 检测到 localhost 代理配置,但未镜像到 WSL。
NAT 模式下的 WSL 不支持 localhost 代理。</code></pre></summary>
<category term="problem" scheme="https://kagari306.github.io/categories/problem/"/>
<category term="windows" scheme="https://kagari306.github.io/tags/windows/"/>
<category term="os" scheme="https://kagari306.github.io/tags/os/"/>
<category term="wsl" scheme="https://kagari306.github.io/tags/wsl/"/>
<category term="代理" scheme="https://kagari306.github.io/tags/%E4%BB%A3%E7%90%86/"/>
</entry>
<entry>
<title>AutoDL server sniping script</title>
<link href="https://kagari306.github.io/2025/01/14/autodl%E6%9C%8D%E5%8A%A1%E5%99%A8%E6%8A%A2%E8%B4%AD%E8%84%9A%E6%9C%AC/"/>
<id>https://kagari306.github.io/2025/01/14/autodl%E6%9C%8D%E5%8A%A1%E5%99%A8%E6%8A%A2%E8%B4%AD%E8%84%9A%E6%9C%AC/</id>
<published>2025-01-13T16:02:30.000Z</published>
<updated>2025-01-13T16:07:56.809Z</updated>
<content type="html"><![CDATA[<p>AutoDL GPUs have been hard to get hold of lately, so I wrote a script that grabs one automatically.<br>There is nothing clever about it: it simulates clicks, checks whether the price falls inside a given range, clicks "buy" if it does, and moves on otherwise.<br>Copy the code into the browser console (F12) and press Enter to start auto-purchasing.</p><span id="more"></span><pre><code class="javascript">var money_range = [0.5, 0.66]; // Price range (just edit the numbers; this one targets the 1080 and TITAN Xp)
var region_range = ['重庆A区', '北京B区', '北京A区 H20', '佛山区', '内蒙A区']; // Regions to skip (this list is excluded from the search)

var congrats = false;
var node_list = document.querySelector(".list-filter .filter-item .el-radio-group").children;
var start = function start(i) {
  if (congrats) return;
  // Skip excluded regions, wrapping back to the first tab at the end of the list
  if (region_range.indexOf(node_list[i].textContent.trim()) !== -1) {
    if (i >= node_list.length - 1) {
      start(0);
    } else {
      start(++i);
    }
    return;
  }
  var node = node_list[i];
  node.click();
  var timer = setTimeout(function () {
    var _document$querySelect;
    // Read the displayed price; fall back to -1 if the element is missing
    var num = ((_document$querySelect = document.querySelector(".pay-wrap .pay-right .price .sum").children[1].getElementsByClassName("num")[0]) === null || _document$querySelect === void 0 ? void 0 : _document$querySelect.textContent) || -1;
    if (Number(num) >= money_range[0] && Number(num) <= money_range[1]) {
      document.querySelector(".operation .el-button--primary").click();
      congrats = true;
      console.log("恭喜!抢到了");
      return;
    }
    console.log(node.textContent + "暂时没有,下一个");
    if (i >= node_list.length - 1) {
      start(0);
    } else {
      start(++i);
    }
    clearTimeout(timer);
  }, 1000);
};
start(0);</code></pre>]]></content>
<summary type="html"><p>AutoDL GPUs have been hard to get hold of lately, so I wrote a script that grabs one automatically.<br>There is nothing clever about it: it simulates clicks, checks whether the price falls inside a given range, clicks "buy" if it does, and moves on otherwise.<br>Copy the code into the browser console (F12) and press Enter to start auto-purchasing.</p></summary>
<category term="script" scheme="https://kagari306.github.io/categories/script/"/>
<category term="code" scheme="https://kagari306.github.io/tags/code/"/>
<category term="javascript" scheme="https://kagari306.github.io/tags/javascript/"/>
<category term="autodl" scheme="https://kagari306.github.io/tags/autodl/"/>
<category term="服务器" scheme="https://kagari306.github.io/tags/%E6%9C%8D%E5%8A%A1%E5%99%A8/"/>
</entry>
<entry>
<title>error: ‘AT_CHECK’ was not declared in this scope</title>
<link href="https://kagari306.github.io/2024/12/31/error-%E2%80%98AT-CHECK%E2%80%99-was-not-declared-in-this-scope/"/>
<id>https://kagari306.github.io/2024/12/31/error-%E2%80%98AT-CHECK%E2%80%99-was-not-declared-in-this-scope/</id>
<published>2024-12-30T17:13:23.000Z</published>
<updated>2025-01-17T07:11:15.752Z</updated>
<content type="html"><![CDATA[<p>Today, while building an old C++ project with CUDA dependencies in order to reproduce it, compilation failed with <code>error: ‘AT_CHECK’ was not declared in this scope</code>.</p><span id="more"></span><pre><code class="bash">...(irrelevant output)
/root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_cuda.cpp:4:23: error: ‘AT_CHECK’ was not declared in this scope
 #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
 ^
/root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_cuda.cpp:9:3: note: in expansion of macro ‘CHECK_CUDA’
 CHECK_CUDA(dets);
 ^~~~~~~~~~
/root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_cuda.cpp:4:23: note: suggested alternative: ‘DCHECK’
 #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
 ^
/root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_cuda.cpp:9:3: note: in expansion of macro ‘CHECK_CUDA’
 CHECK_CUDA(dets);
 ^~~~~~~~~~
[2/2] /usr/local/cuda/bin/nvcc -DWITH_CUDA -I/root/miniconda3/lib/python3.8/site-packages/torch/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/TH -I/root/miniconda3/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/root/miniconda3/include/python3.8 -c -c /root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_kernel.cu -o /root/autodl-tmp/OrientedRepPoints_DOTA/build/temp.linux-x86_64-3.8/mmdet/ops/nms/src/nms_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=nms_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_kernel.cu: In function ‘at::Tensor nms_cuda(at::Tensor, float)’:
/root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_kernel.cu:77:62: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
 AT_ASSERTM(boxes.type().is_cuda(), "boxes must be a CUDA tensor");
 ^
/root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:277:1: note: declared here
 DeprecatedTypeProperties & type() const {
 ^ ~~
/root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_kernel.cu:86:50: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
 scalar_t* boxes_dev = boxes_sorted.data<scalar_t>();
 ^
/root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
 T * data() const {
 ^ ~~
/root/autodl-tmp/OrientedRepPoints_DOTA/mmdet/ops/nms/src/nms_kernel.cu:117:46: warning: ‘T* at::Tensor::data() const [with T = long int]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
 int64_t* keep_out = keep.data<int64_t>();
 ^
/root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
 T * data() const {
 ^ ~~
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1516, in _run_ninja_build
    subprocess.run(
  File "/root/miniconda3/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "setup.py", line 194, in <module>
    setup(
  File "/root/miniconda3/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/root/miniconda3/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/install.py", line 67, in run
    self.do_egg_install()
  File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/install.py", line 109, in do_egg_install
    self.run_command('bdist_egg')
  File "/root/miniconda3/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 164, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command
    self.run_command(cmdname)
  File "/root/miniconda3/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/root/miniconda3/lib/python3.8/distutils/command/install_lib.py", line 107, in build
    self.run_command('build_ext')
  File "/root/miniconda3/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/root/miniconda3/lib/python3.8/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 653, in build_extensions
    build_ext.build_extensions(self)
  File "/root/miniconda3/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/root/miniconda3/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
    _build_ext.build_extension(self, ext)
  File "/root/miniconda3/lib/python3.8/site-packages/Cython/Distutils/build_ext.py", line 135, in build_extension
    super(build_ext, self).build_extension(ext)
  File "/root/miniconda3/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
    objects = self.compiler.compile(sources,
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 473, in unix_wrap_ninja_compile
    _write_ninja_file_and_compile_objects(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
    _run_ninja_build(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1538, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension</code></pre><p>Start from the error itself: <code>error: ‘AT_CHECK’ was not declared in this scope</code>. By inspection, replacing <code>AT_CHECK</code> with <code>TORCH_CHECK</code> should make it go away.</p><h2 id="solution">Solution</h2><p>Which gives the following bash one-liner:</p><pre><code class="bash">find . -type f -name "*.cpp" -exec sed -i 's/AT_CHECK/TORCH_CHECK/g' {} \;</code></pre><p>Run it and rebuild:</p><pre><code class="bash">python setup.py build_ext --inplace</code></pre><p>The build now passes.</p><h2 id="root-cause">Root cause</h2><p>The error comes from deprecated, now-incompatible macros and APIs in the project's PyTorch/CUDA extension code; in plain terms, the code has gone stale.</p><p>In modern <code>PyTorch</code> releases, <code>AT_CHECK</code> has been replaced by <code>TORCH_CHECK</code>, and some tensor methods such as <code>.data<T>()</code> have been deprecated.<br>So replacing every <code>AT_CHECK</code> with <code>TORCH_CHECK</code> is enough to fix this particular error.</p>
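<p>If a project also uses <code>AT_CHECK</code> in headers or <code>.cu</code> files, a slightly broader sweep may be needed. A minimal sketch (the set of file extensions covered here is an assumption; check the grep hits before rewriting anything):</p><pre><code class="bash"># List every remaining occurrence first
grep -rn "AT_CHECK" --include="*.cpp" --include="*.cu" --include="*.h" --include="*.cuh" .

# Replace in C++ sources, CUDA sources and headers in one pass
find . -type f \( -name "*.cpp" -o -name "*.cu" -o -name "*.h" -o -name "*.cuh" \) \
  -exec sed -i 's/AT_CHECK/TORCH_CHECK/g' {} \;</code></pre>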
]]></content>
<summary type="html"><p>Today, while building an old C++ project with CUDA dependencies in order to reproduce it, compilation failed with <code>error: ‘AT_CHECK’ was not declared in this scope</code>.</p></summary>
<category term="problem" scheme="https://kagari306.github.io/categories/problem/"/>
<category term="problem" scheme="https://kagari306.github.io/tags/problem/"/>
<category term="python" scheme="https://kagari306.github.io/tags/python/"/>
<category term="cuda" scheme="https://kagari306.github.io/tags/cuda/"/>
<category term="cpp" scheme="https://kagari306.github.io/tags/cpp/"/>
<category term="code" scheme="https://kagari306.github.io/tags/code/"/>
</entry>
<entry>
<title>The detected CUDA version (x.x) mismatches the version that was used to compile PyTorch (x.x).</title>
<link href="https://kagari306.github.io/2024/12/28/The-detected-CUDA-version-1x-x-mismatches-the-version-that-was-used-to-compile-PyTorch-1x-x/"/>
<id>https://kagari306.github.io/2024/12/28/The-detected-CUDA-version-1x-x-mismatches-the-version-that-was-used-to-compile-PyTorch-1x-x/</id>
<published>2024-12-27T16:30:53.000Z</published>
<updated>2025-01-17T07:10:56.912Z</updated>
<content type="html"><![CDATA[<p>Today, while installing a library, I hit this error:</p><pre><code class="plaintext">$ pip install -e .
(some irrelevant output)
The detected CUDA version (10.2) mismatches the version that was used to compile
PyTorch (11.7). Please make sure to use the same CUDA versions.
(more irrelevant output)</code></pre><p>But my environment was set up with conda, and both cudatoolkit and cudatoolkit-dev 11.7 were already installed.</p><span id="more"></span><p>Since my <code>pytorch</code> had been reinstalled with the pip command from <a href="https://pytorch.org/get-started/previous-versions/">Previous PyTorch Versions</a> for the matching CUDA build, the CUDA version PyTorch was compiled against is also 11.7:</p><pre><code class="bash">$ python -c "import torch; print(torch.__version__)"
1.13.1+cu117</code></pre><p>So the problem is that pip's build step ignores the CUDA toolkit inside the current conda environment and picks up some other CUDA installation (10.2) from elsewhere on the system.</p><p>A quick Google search turned up the following fix:</p><pre><code class="bash">pip install --no-build-isolation -e .</code></pre><p>It installed successfully.</p>
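<p>A quick way to compare the two CUDA versions involved (a sketch; paths and versions will differ per machine):</p><pre><code class="bash"># The CUDA version PyTorch was compiled with
python -c "import torch; print(torch.version.cuda)"

# The CUDA toolkit that the build step actually detects
which nvcc && nvcc --version</code></pre>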
]]></content>
<summary type="html"><p>Today, while installing a library, I hit this error:</p>
<pre><code class="plaintext">$ pip install -e .
(some irrelevant output)
The detected CUDA version (10.2) mismatches the version that was used to compile
PyTorch (11.7). Please make sure to use the same CUDA versions.
(more irrelevant output)</code></pre>
<p>But my environment was set up with conda, and both cudatoolkit and cudatoolkit-dev 11.7 were already installed.</p></summary>
<category term="problem" scheme="https://kagari306.github.io/categories/problem/"/>
<category term="python" scheme="https://kagari306.github.io/tags/python/"/>
<category term="pytorch" scheme="https://kagari306.github.io/tags/pytorch/"/>
<category term="conda" scheme="https://kagari306.github.io/tags/conda/"/>
<category term="cuda" scheme="https://kagari306.github.io/tags/cuda/"/>
</entry>
<entry>
<title>PowerShell: running scripts is disabled on this system</title>
<link href="https://kagari306.github.io/2024/12/28/PowerShell%EF%BC%9A%E5%9B%A0%E4%B8%BA%E5%9C%A8%E6%AD%A4%E7%B3%BB%E7%BB%9F%E4%B8%8A%E7%A6%81%E6%AD%A2%E8%BF%90%E8%A1%8C%E8%84%9A%E6%9C%AC/"/>
<id>https://kagari306.github.io/2024/12/28/PowerShell%EF%BC%9A%E5%9B%A0%E4%B8%BA%E5%9C%A8%E6%AD%A4%E7%B3%BB%E7%BB%9F%E4%B8%8A%E7%A6%81%E6%AD%A2%E8%BF%90%E8%A1%8C%E8%84%9A%E6%9C%AC/</id>
<published>2024-12-27T16:04:14.000Z</published>
<updated>2025-01-17T07:11:06.472Z</updated>
<content type="html"><![CDATA[<h2 id="error-details">Error details</h2><pre><code class="plaintext">PlaceHolder (to be filled in later)</code></pre><span id="more"></span><p>This error means the execution policy is most likely <code>Restricted</code>, which is the default.</p><p>The <code>Restricted</code> execution policy does not allow any scripts to run.<br>The <code>AllSigned</code> and <code>RemoteSigned</code> execution policies prevent Windows PowerShell from running scripts that do not carry a digital signature.</p><p>Run <code>get-executionpolicy</code> to check the current policy:</p><pre><code class="powershell">PS C:\WINDOWS\system32> get-executionpolicy
Restricted</code></pre><h2 id="solution">Solution</h2><p>Open PowerShell as Administrator, run <code>set-executionpolicy remotesigned</code>, and answer <code>y</code> at the confirmation prompt (output shown here from a Chinese-locale system):</p><pre><code class="powershell">PS C:\WINDOWS\system32> set-executionpolicy remotesigned

执行策略更改
执行策略可帮助你防止执行不信任的脚本。更改执行策略可能会产生安全风险,如 https:/go.microsoft.com/fwlink/?LinkID=135170
中的 about_Execution_Policies 帮助主题所述。是否要更改执行策略?
[Y] 是(Y) [A] 全是(A) [N] 否(N) [L] 全否(L) [S] 暂停(S) [?] 帮助 (默认值为“N”): y
PS C:\WINDOWS\system32> get-executionpolicy
RemoteSigned</code></pre>]]></content>
<summary type="html"><h2 id="error-details">Error details</h2><pre><code class="plaintext">PlaceHolder (to be filled in later)</code></pre></summary>
<category term="problem" scheme="https://kagari306.github.io/categories/problem/"/>
<category term="powershell" scheme="https://kagari306.github.io/tags/powershell/"/>
<category term="windows" scheme="https://kagari306.github.io/tags/windows/"/>
<category term="problem" scheme="https://kagari306.github.io/tags/problem/"/>
<category term="os" scheme="https://kagari306.github.io/tags/os/"/>
</entry>
</feed>