arxiv.org.rss.10.xml - sfeed_tests - sfeed tests and RSS and Atom files
git clone git://git.codemadness.org/sfeed_tests
---
arxiv.org.rss.10.xml (863940B)
---
<?xml version="1.0" encoding="UTF-8"?>

<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns="http://purl.org/rss/1.0/"
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:syn="http://purl.org/rss/1.0/modules/syndication/"
xmlns:admin="http://webns.net/mvcb/"
>

<channel rdf:about="http://fr.arxiv.org/">
<title>cs updates on arXiv.org</title>
<link>http://fr.arxiv.org/</link>
<description rdf:parseType="Literal">Computer Science (cs) updates on the arXiv.org e-print archive</description>
<dc:language>en-us</dc:language>
<dc:date>2020-10-29T20:30:00-05:00</dc:date>
<dc:publisher>www-admin@arxiv.org</dc:publisher>
<dc:subject>Computer Science</dc:subject>
<syn:updateBase>1901-01-01T00:00+00:00</syn:updateBase>
<syn:updateFrequency>1</syn:updateFrequency>
<syn:updatePeriod>daily</syn:updatePeriod>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15120" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15138" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15149" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15153" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15155" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15156" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15157" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15158" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15162" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15166" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15169" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15171" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15174" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15187" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15193" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15195" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15196" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15201" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15203" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15206" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15209" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15210" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15211" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15217" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15218" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15221" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15222" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15225" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15229" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15232" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15233" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15234" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15236" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15237" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15239" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15240" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15245" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15250" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15251" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15255" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15258" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15260" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15261" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15266" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15268" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15269" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15271" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15272" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15274" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15275" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15277" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15280" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15283" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15288" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15289" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15296" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15297" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15300" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15302" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15303" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15306" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15311" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15313" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15314" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15315" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15316" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15317" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15320" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15322" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15327" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15329" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15335" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15336" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15338" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15343" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15344" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15346" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15347" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15350" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15352" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15353" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15354" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15356" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15358" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15360" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15363" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15364" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15365" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15366" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15371" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15372" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15376" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15377" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15378" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15379" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15382" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15388" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15389" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15390" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15391" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15392" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15393" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15394" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15396" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15399" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15404" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15411" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15413" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15415" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15417" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15421" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15423" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15425" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15426" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15427" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15434" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15435" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15436" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15437" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15438" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15440" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15441" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15444" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15446" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15453" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15454" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15455" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15456" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15457" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15458" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15461" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15464" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15466" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15469" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15470" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15476" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15479" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15482" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15487" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15490" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15491" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15492" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15502" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15504" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15506" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15507" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15508" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15509" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15510" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15511" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15521" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15524" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15525" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15526" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15527" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15528" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15530" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15531" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15533" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15534" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15535" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15538" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15541" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15545" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15549" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15550" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15551" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15552" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15556" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15559" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15560" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15561" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15562" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15571" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15572" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15577" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15578" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15579" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15581" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15582" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15583" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15584" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15585" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15586" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15588" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15590" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15594" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15596" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15597" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15598" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15599" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15600" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15601" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15602" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15603" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15604" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15605" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15606" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15607" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15614" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15618" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15620" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15622" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15623" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15638" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15639" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15643" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15647" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15651" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15653" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15654" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15658" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15662" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15665" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15668" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15669" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15670" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15671" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15672" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15673" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15674" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15675" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15676" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15678" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15679" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15680" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15682" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15683" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15684" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15687" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15689" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15690" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15692" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15694" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15697" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15698" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15703" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15711" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15716" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15718" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15727" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15728" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15729" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15738" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15740" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15745" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15750" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15755" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15760" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15761" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15764" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15768" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15770" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15772" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15773" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15775" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15776" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15778" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15784" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15785" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15786" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15792" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15793" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15794" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15801" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15803" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15805" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15809" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15811" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15814" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15819" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15820" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15821" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15823" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15824" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15831" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15832" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1602.05829" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1605.09124" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1608.03533" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1712.06431" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1801.07485" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1810.00635" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1901.07849" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1902.06626" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1906.01786" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1906.05586" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1906.06642" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1906.06836" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1907.02237" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1907.06226" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1907.06630" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1907.08813" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1908.01146" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1908.06634" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1909.05176" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1909.09318" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1909.12473" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1910.04267" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1910.08845" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1910.13067" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1911.02711" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1911.03849" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1911.03875" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1911.04209" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1911.09565" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1912.00187" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1912.02290" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1912.05320" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1912.05699" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1912.08026" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/1912.10321" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2001.10477" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2002.04025" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2002.06195" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2002.08247" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2002.12165" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2003.01367" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2003.02960" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2003.03824" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2003.03977" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2003.06475" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2003.08196" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2003.09946" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2004.00499" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2004.03096" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2004.04685" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2004.11362" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2004.12130" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2004.13363" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2004.14632" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.00858" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.01192" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.02683" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.03482" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.09635" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.10963" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.12451" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.12889" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.13969" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.14435" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2005.14441" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.02080" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.03267" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.03829" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.03992" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.06459" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.06648" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.06677" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.07214" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.07225" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.08205" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.09859" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.10085" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.10498" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.12681" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.13258" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2006.14950" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.00124" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.00772" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.00796" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.01293" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.02261" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.02835" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.06267" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.06271" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.07632" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.09483" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.10497" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.11078" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.12153" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.12159" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2007.13404" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2008.00226" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2008.02464" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2008.02834" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2008.04717" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2008.09293" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2008.11370" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2008.12775" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2008.13567" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.00110" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.00142" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.01194" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.03133" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.05524" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.07165" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.07203" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.07253" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.08276" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.11329" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.12729" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2009.12829" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.00182" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.02480" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.02510" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.02519" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.04831" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.05446" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.05768" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.06351" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.07485" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.08182" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.08321" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.08841" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.09843" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.10436" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.10695" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.10742" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.10759" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.11150" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.11175" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.11505" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.11775" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.11925" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.12191" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.12234" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.12674" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.12899" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.12931" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.13119" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.13178" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.13273" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.13285" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.13956" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14367" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14501" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14571" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14584" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14771" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14919" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15003" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15032" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.15058" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14544" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14734" />
<rdf:li rdf:resource="http://fr.arxiv.org/abs/2010.14746" />
</rdf:Seq>
</items>
<image rdf:resource="http://fr.arxiv.org/icons/sfx.gif" />
</channel>
<image rdf:about="http://fr.arxiv.org/icons/sfx.gif">
<title>arXiv.org</title>
<url>http://fr.arxiv.org/icons/sfx.gif</url>
<link>http://fr.arxiv.org/</link>
</image>
<item rdf:about="http://fr.arxiv.org/abs/2010.15120">
<title>Raw Audio for Depression Detection Can Be More Robust Against Gender Imbalance than Mel-Spectrogram Features. (arXiv:2010.15120v1 [cs.SD])</title>
<link>http://fr.arxiv.org/abs/2010.15120</link>
<description rdf:parseType="Literal">&lt;p&gt;Depression is a large-scale mental health problem and a challenging area for
machine learning researchers in terms of the detection of depression. Datasets
such as the Distress Analysis Interview Corpus - Wizard of Oz have been created
to aid research in this area. However, on top of the challenges inherent in
accurately detecting depression, biases in datasets may result in skewed
classification performance. In this paper we examine gender bias in the
DAIC-WOZ dataset using audio-based deep neural networks. We show that gender
biases in DAIC-WOZ can lead to an overreporting of performance, which has been
overlooked in the past due to the same gender biases being present in the test
set. By using raw audio and different concepts from Fair Machine Learning, such
as data re-distribution, we can mitigate against the harmful effects of bias.
&lt;/p&gt;
</description>
<dc:creator> &lt;a href="http://fr.arxiv.org/find/cs/1/au:+Bailey_A/0/1/0/all/0/1"&gt;Andrew Bailey&lt;/a&gt;, &lt;a href="http://fr.arxiv.org/find/cs/1/au:+Plumbley_M/0/1/0/all/0/1"&gt;Mark D. Plumbley&lt;/a&gt;</dc:creator>
</item>
490 <item rdf:about="http://fr.arxiv.org/abs/2010.15138">
491 <title>papaya2: 2D Irreducible Minkowski Tensor computation. (arXiv:2010.15138v1 [cs.GR])</title>
492 <link>http://fr.arxiv.org/abs/2010.15138</link>
493 <description rdf:parseType="Literal"><p>A common challenge in scientific and technical domains is the quantitative
494 description of geometries and shapes, e.g. in the analysis of microscope
495 imagery or astronomical observation data. Frequently, it is desirable to go
496 beyond scalar shape metrics such as porosity and surface to volume ratios
497 because the samples are anisotropic or because direction-dependent quantities
498 such as conductances or elasticity are of interest. Minkowski Tensors are a
499 systematic family of versatile and robust higher-order shape descriptors that
500 allow for shape characterization of arbitrary order and promise a path to
501 systematic structure-function relationships for direction-dependent properties.
502 Papaya2 is software to calculate 2D higher-order shape metrics with a library
503 interface, support for Irreducible Minkowski Tensors and interpolated marching
504 squares. Extensions to Matlab, JavaScript and Python are provided as well.
505 While the tensor of inertia is computed by many tools, we are not aware of
506 other open-source software which provides higher-rank shape characterization in
507 2D.
508 </p>
509 </description>
510 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Schaller_F/0/1/0/all/0/1">Fabian M. Schaller</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Wagner_J/0/1/0/all/0/1">Jenny Wagner</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kapfer_S/0/1/0/all/0/1">Sebastian C. Kapfer</a></dc:creator>
511 </item>
512 <item rdf:about="http://fr.arxiv.org/abs/2010.15149">
513 <title>DeSMOG: Detecting Stance in Media On Global Warming. (arXiv:2010.15149v1 [cs.CL])</title>
514 <link>http://fr.arxiv.org/abs/2010.15149</link>
515 <description rdf:parseType="Literal"><p>Citing opinions is a powerful yet understudied strategy in argumentation. For
516 example, an environmental activist might say, "Leading scientists agree that
517 global warming is a serious concern," framing a clause which affirms their own
518 stance ("that global warming is serious") as an opinion endorsed ("[scientists]
519 agree") by a reputable source ("leading"). In contrast, a global warming denier
520 might frame the same clause as the opinion of an untrustworthy source with a
521 predicate connoting doubt: "Mistaken scientists claim [...]." Our work studies
522 opinion-framing in the global warming (GW) debate, an increasingly partisan
523 issue that has received little attention in NLP. We introduce DeSMOG, a dataset
524 of stance-labeled GW sentences, and train a BERT classifier to study novel
525 aspects of argumentation in how different sides of a debate represent their own
526 and each other's opinions. From 56K news articles, we find that similar
527 linguistic devices for self-affirming and opponent-doubting discourse are used
528 across GW-accepting and GW-skeptical media, though GW-skeptical media show more
529 opponent-doubt. We also find that authors often characterize sources as
530 hypocritical, by ascribing opinions expressing the author's own view to source
531 entities known to publicly endorse the opposing view. We release our stance
532 dataset, model, and lexicons of framing devices for future work on
533 opinion-framing and the automatic detection of GW stance.
534 </p>
535 </description>
536 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Luo_Y/0/1/0/all/0/1">Yiwei Luo</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Card_D/0/1/0/all/0/1">Dallas Card</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Jurafsky_D/0/1/0/all/0/1">Dan Jurafsky</a></dc:creator>
537 </item>
538 <item rdf:about="http://fr.arxiv.org/abs/2010.15153">
539 <title>On the Optimality and Convergence Properties of the Learning Model Predictive Controller. (arXiv:2010.15153v1 [math.OC])</title>
540 <link>http://fr.arxiv.org/abs/2010.15153</link>
541 <description rdf:parseType="Literal"><p>In this technical note we analyse the performance improvement and optimality
542 properties of the Learning Model Predictive Control (LMPC) strategy for linear
543 deterministic systems. The LMPC framework is a policy iteration scheme where
544 closed-loop trajectories are used to update the control policy for the next
545 execution of the control task. We show that, when a Linear Independence
546 Constraint Qualification (LICQ) condition holds, the LMPC scheme guarantees
547 strict iterative performance improvement and optimality, meaning that the
548 closed-loop cost evaluated over the entire task converges asymptotically to the
549 optimal cost of the infinite-horizon control problem. Compared to previous
550 works, this sufficient LICQ condition is easy to check, holds for a larger
551 class of systems, and can be used to adaptively select the prediction
552 horizon of the controller, as demonstrated by a numerical example.
553 </p>
554 </description>
555 <dc:creator> <a href="http://fr.arxiv.org/find/math/1/au:+Rosolia_U/0/1/0/all/0/1">Ugo Rosolia</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Lian_Y/0/1/0/all/0/1">Yingzhao Lian</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Maddalena_E/0/1/0/all/0/1">Emilio T. Maddalena</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Ferrari_Trecate_G/0/1/0/all/0/1">Giancarlo Ferrari-Trecate</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Jones_C/0/1/0/all/0/1">Colin N. Jones</a></dc:creator>
556 </item>
557 <item rdf:about="http://fr.arxiv.org/abs/2010.15155">
558 <title>Kernel Aggregated Fast Multipole Method: Efficient summation of Laplace and Stokes kernel functions. (arXiv:2010.15155v1 [math.NA])</title>
559 <link>http://fr.arxiv.org/abs/2010.15155</link>
560 <description rdf:parseType="Literal"><p>Many different simulation methods for Stokes flow problems involve a common
561 computationally intense task---the summation of a kernel function over $O(N^2)$
562 pairs of points. One popular technique is the Kernel Independent Fast Multipole
563 Method (KIFMM), which constructs a spatial adaptive octree and places a small
564 number of equivalent multipole and local points around each octree box, and
565 completes the kernel sum with $O(N)$ performance. However, the KIFMM cannot be
566 used directly with nonlinear kernels, can be inefficient for complicated linear
567 kernels, and in general is difficult to implement compared to less-efficient
568 alternatives such as Ewald-type methods. Here we present the Kernel Aggregated
569 Fast Multipole Method (KAFMM), which overcomes these drawbacks by allowing
570 different kernel functions to be used for specific stages of octree traversal.
571 In many cases a simpler linear kernel suffices during the most extensive stage
572 of octree traversal, even for nonlinear kernel summation problems. The KAFMM
573 thereby improves computational efficiency in general and also allows efficient
574 evaluation of some nonlinear kernel functions such as the regularized
575 Stokeslet. We have implemented our method as an open-source software library
576 STKFMM with support for Laplace kernels, the Stokeslet, regularized Stokeslet,
577 Rotne-Prager-Yamakawa (RPY) tensor, and the Stokes double-layer and traction
578 operators. Open and periodic boundary conditions are supported for all kernels,
579 and the no-slip wall boundary condition is supported for the Stokeslet and RPY
580 tensor. The package is designed to be ready-to-use as well as being readily
581 extensible to additional kernels. Massive parallelism is supported with mixed
582 OpenMP and MPI.
583 </p>
584 </description>
585 <dc:creator> <a href="http://fr.arxiv.org/find/math/1/au:+Yan_W/0/1/0/all/0/1">Wen Yan</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Blackwell_R/0/1/0/all/0/1">Robert Blackwell</a></dc:creator>
586 </item>
587 <item rdf:about="http://fr.arxiv.org/abs/2010.15156">
588 <title>Diagnostic data integration using deep neural networks for real-time plasma analysis. (arXiv:2010.15156v1 [physics.comp-ph])</title>
589 <link>http://fr.arxiv.org/abs/2010.15156</link>
590 <description rdf:parseType="Literal"><p>Recent advances in acquisition equipment are providing experiments with
591 growing numbers of precise yet affordable sensors. At the same time, improved
592 computational power, coming from new hardware resources (GPU, FPGA, ACAP), has
593 been made available at relatively low costs. This led us to explore the
594 possibility of completely renewing the chain of acquisition for a fusion
595 experiment, where many high-rate sources of data, coming from different
596 diagnostics, can be combined in a wide framework of algorithms. If on one hand
597 adding new data sources with different diagnostics enriches our knowledge about
598 physical aspects, on the other hand the dimensions of the overall model grow,
599 making relations among variables more and more opaque. A new approach for the
600 integration of such heterogeneous diagnostics, based on composition of deep
601 variational autoencoders, could ease this problem, acting as a
602 structural sparse regularizer. This has been applied to RFX-mod experiment
603 data, integrating the soft X-ray linear images of plasma temperature with the
604 magnetic state.
605 </p>
606 <p>However, to ensure real-time signal analysis, these algorithmic techniques
607 must be adapted to run on well-suited hardware. In particular, it is shown
608 that, by quantizing the neuron transfer functions, such models can be
609 modified to create embedded firmware. This firmware, approximating the deep
610 inference model with a set of simple operations, fits well with the simple
611 logic units that are abundant in FPGAs. This is the key factor that permits
612 the use of affordable hardware with complex deep neural topologies and their
613 operation in real-time.
614 </p>
615 </description>
616 <dc:creator> <a href="http://fr.arxiv.org/find/physics/1/au:+Garola_A/0/1/0/all/0/1">A. Rigoni Garola</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Cavazzana_R/0/1/0/all/0/1">R. Cavazzana</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Gobbin_M/0/1/0/all/0/1">M. Gobbin</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Delogu_R/0/1/0/all/0/1">R.S. Delogu</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Manduchi_G/0/1/0/all/0/1">G. Manduchi</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Taliercio_C/0/1/0/all/0/1">C. Taliercio</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Luchetta_A/0/1/0/all/0/1">A. Luchetta</a></dc:creator>
617 </item>
618 <item rdf:about="http://fr.arxiv.org/abs/2010.15157">
619 <title>Panoster: End-to-end Panoptic Segmentation of LiDAR Point Clouds. (arXiv:2010.15157v1 [cs.CV])</title>
620 <link>http://fr.arxiv.org/abs/2010.15157</link>
621 <description rdf:parseType="Literal"><p>Panoptic segmentation has recently unified semantic and instance
622 segmentation, previously addressed separately, thus taking a step further
623 towards creating more comprehensive and efficient perception systems. In this
624 paper, we present Panoster, a novel proposal-free panoptic segmentation method
625 for point clouds. Unlike previous approaches relying on several steps to group
626 pixels or points into objects, Panoster proposes a simplified framework
627 incorporating a learning-based clustering solution to identify instances. At
628 inference time, this acts as a class-agnostic semantic segmentation, allowing
629 Panoster to be fast, while outperforming prior methods in terms of accuracy.
630 Additionally, we showcase how our approach can be flexibly and effectively
631 applied on diverse existing semantic architectures to deliver panoptic
632 predictions.
633 </p>
634 </description>
635 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Gasperini_S/0/1/0/all/0/1">Stefano Gasperini</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Mahani_M/0/1/0/all/0/1">Mohammad-Ali Nikouei Mahani</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Marcos_Ramiro_A/0/1/0/all/0/1">Alvaro Marcos-Ramiro</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Navab_N/0/1/0/all/0/1">Nassir Navab</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Tombari_F/0/1/0/all/0/1">Federico Tombari</a></dc:creator>
636 </item>
637 <item rdf:about="http://fr.arxiv.org/abs/2010.15158">
638 <title>CNN Profiler on Polar Coordinate Images for Tropical Cyclone Structure Analysis. (arXiv:2010.15158v1 [cs.CV])</title>
639 <link>http://fr.arxiv.org/abs/2010.15158</link>
640 <description rdf:parseType="Literal"><p>Convolutional neural networks (CNN) have achieved great success in analyzing
641 tropical cyclones (TC) with satellite images in several tasks, such as TC
642 intensity estimation. In contrast, TC structure, which is conventionally
643 described by a few parameters estimated subjectively by meteorology
644 specialists, is still hard to profile objectively and routinely. This study
645 applies CNN on satellite images to create the entire TC structure profiles,
646 covering all the structural parameters. By utilizing the meteorological domain
647 knowledge to construct TC wind profiles based on historical structure
648 parameters, we provide valuable labels for training in our newly released
649 benchmark dataset. With such a dataset, we hope to attract more attention to
650 this crucial issue among data scientists. Meanwhile, a baseline is established
651 with a specialized convolutional model operating on polar-coordinates. We
652 discovered that it is more feasible and physically reasonable to extract
653 structural information on polar-coordinates, instead of Cartesian coordinates,
654 owing to a TC's rotational and spiral nature. Experimental results on the
655 released benchmark dataset verified the robustness of the proposed model and
656 demonstrated the potential for applying deep learning techniques for this
657 barely developed yet important topic.
658 </p>
659 </description>
660 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Chen_B/0/1/0/all/0/1">Boyo Chen</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Chen_B/0/1/0/all/0/1">Buo-Fu Chen</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Hsiao_C/0/1/0/all/0/1">Chun-Min Hsiao</a></dc:creator>
661 </item>
662 <item rdf:about="http://fr.arxiv.org/abs/2010.15162">
663 <title>Sizeless: Predicting the optimal size of serverless functions. (arXiv:2010.15162v1 [cs.DC])</title>
664 <link>http://fr.arxiv.org/abs/2010.15162</link>
665 <description rdf:parseType="Literal"><p>Serverless functions are a cloud computing paradigm that reduces operational
666 overheads for developers, because the cloud provider takes care of resource
667 management tasks such as resource provisioning, deployment, and auto-scaling.
668 The only resource management task that developers are still in charge of is
669 resource sizing, that is, selecting how many resources are allocated to each
670 worker instance. However, due to the challenging nature of resource sizing,
671 developers often neglect it despite its significant cost and performance
672 benefits. Existing approaches aiming to automate serverless function resource
673 sizing require dedicated performance tests, which are time-consuming to
674 implement and maintain.
675 </p>
676 <p>In this paper, we introduce Sizeless -- an approach to predict the optimal
677 resource size of a serverless function using monitoring data from a single
678 resource size. As our approach requires only production monitoring data,
679 developers no longer need to implement and maintain representative performance
680 tests. Furthermore, it enables cloud providers, which cannot engage in testing
681 the performance of user functions, to implement resource sizing on a platform
682 level and automate the last resource management task associated with serverless
683 functions. In our evaluation, Sizeless was able to predict the execution time
684 of the serverless functions of a realistic serverless application with a
685 median prediction accuracy of 93.1%. Using Sizeless to optimize the memory size
686 of this application results in a speedup of 16.7% while simultaneously
687 decreasing costs by 2.5%.
688 </p>
689 </description>
690 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Eismann_S/0/1/0/all/0/1">Simon Eismann</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Bui_L/0/1/0/all/0/1">Long Bui</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Grohmann_J/0/1/0/all/0/1">Johannes Grohmann</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Abad_C/0/1/0/all/0/1">Cristina L. Abad</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Herbst_N/0/1/0/all/0/1">Nikolas Herbst</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kounev_S/0/1/0/all/0/1">Samuel Kounev</a></dc:creator>
691 </item>
692 <item rdf:about="http://fr.arxiv.org/abs/2010.15166">
693 <title>Polymer Informatics with Multi-Task Learning. (arXiv:2010.15166v1 [cond-mat.mtrl-sci])</title>
694 <link>http://fr.arxiv.org/abs/2010.15166</link>
695 <description rdf:parseType="Literal"><p>Modern data-driven tools are transforming application-specific polymer
696 development cycles. Surrogate models that can be trained to predict the
697 properties of new polymers are becoming commonplace. Nevertheless, these models
698 do not utilize the full breadth of the knowledge available in datasets, which
699 are oftentimes sparse; inherent correlations between different property
700 datasets are disregarded. Here, we demonstrate the potency of multi-task
701 learning approaches that exploit such inherent correlations effectively,
702 particularly when some property dataset sizes are small. Data pertaining to 36
703 different properties of over $13,000$ polymers (corresponding to over $23,000$
704 data points) are coalesced and supplied to deep-learning multi-task
705 architectures. Compared to conventional single-task learning models (that are
706 trained on individual property datasets independently), the multi-task approach
707 is accurate, efficient, scalable, and amenable to transfer learning as more
708 data on the same or different properties become available. Moreover, these
709 models are interpretable. Chemical rules that explain how certain features
710 control trends in specific property values, emerge from the present work,
711 paving the way for the rational design of application-specific polymers meeting
712 desired property or performance objectives.
713 </p>
714 </description>
715 <dc:creator> <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Kunneth_C/0/1/0/all/0/1">Christopher K&#xfc;nneth</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Rajan_A/0/1/0/all/0/1">Arunkumar Chitteth Rajan</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Tran_H/0/1/0/all/0/1">Huan Tran</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Chen_L/0/1/0/all/0/1">Lihua Chen</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Kim_C/0/1/0/all/0/1">Chiho Kim</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Ramprasad_R/0/1/0/all/0/1">Rampi Ramprasad</a></dc:creator>
716 </item>
717 <item rdf:about="http://fr.arxiv.org/abs/2010.15169">
718 <title>Semi-Grant-Free NOMA: Ergodic Rates Analysis with Random Deployed Users. (arXiv:2010.15169v1 [cs.IT])</title>
719 <link>http://fr.arxiv.org/abs/2010.15169</link>
720 <description rdf:parseType="Literal"><p>Semi-grant-free (Semi-GF) non-orthogonal multiple access (NOMA) enables
721 grant-free (GF) and grant-based (GB) users to share the same resource blocks,
722 thereby balancing the connectivity and stability of communications. This letter
723 analyzes ergodic rates of Semi-GF NOMA systems. First, it exploits a
724 Semi-GF protocol, denoted as the dynamic protocol, for selecting GF users into the
725 occupied GB channels via the GB user's instantaneous received power. Under this
726 protocol, the closed-form analytical and approximated expressions for ergodic
727 rates are derived. The numerical results illustrate that the GF user (weak NOMA
728 user) has a performance upper limit, while the ergodic rate of the GB user
729 (strong NOMA user) increases linearly versus the transmit signal-to-noise
730 ratio.
731 </p>
732 </description>
733 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Zhang_C/0/1/0/all/0/1">Chao Zhang</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Liu_Y/0/1/0/all/0/1">Yuanwei Liu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Yi_W/0/1/0/all/0/1">Wenqiang Yi</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Qin_Z/0/1/0/all/0/1">Zhijin Qin</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Ding_Z/0/1/0/all/0/1">Zhiguo Ding</a></dc:creator>
734 </item>
735 <item rdf:about="http://fr.arxiv.org/abs/2010.15171">
736 <title>Slicing a single wireless collision channel among throughput- and timeliness-sensitive services. (arXiv:2010.15171v1 [cs.IT])</title>
737 <link>http://fr.arxiv.org/abs/2010.15171</link>
738 <description rdf:parseType="Literal"><p>The fifth generation (5G) wireless system has a platform-driven approach,
739 aiming to support heterogeneous connections with very diverse requirements. The
740 shared wireless resources should be sliced in a way that each user perceives
741 that its requirement has been met. Heterogeneity challenges the traditional
742 notion of resource efficiency, as the resource usage has to cater for, e.g., rate
743 maximization for one user and the timeliness requirement for another. This
744 paper treats a model for radio access network (RAN) uplink, where a
745 throughput-demanding broadband user shares wireless resources with an
746 intermittently active user that wants to optimize the timeliness, expressed in
747 terms of latency-reliability or Age of Information (AoI). We evaluate the
748 trade-offs between throughput and timeliness for Orthogonal Multiple Access
749 (OMA) as well as Non-Orthogonal Multiple Access (NOMA) with successive
750 interference cancellation (SIC). We observe that NOMA with SIC, in a
751 conservative scenario with destructive collisions, is just slightly inferior to
752 that of OMA, which indicates that it may offer significant benefits in
753 practical deployments where the capture effect is frequently encountered. On
754 the other hand, finding the optimal configuration of NOMA with SIC depends on
755 the activity pattern of the intermittent user, to which OMA is insensitive.
756 </p>
757 </description>
758 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Leyva_Mayorga_I/0/1/0/all/0/1">Israel Leyva-Mayorga</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Chiariotti_F/0/1/0/all/0/1">Federico Chiariotti</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Stefanovic_C/0/1/0/all/0/1">&#x10c;edomir Stefanovi&#x107;</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kalor_A/0/1/0/all/0/1">Anders E. Kal&#xf8;r</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Popovski_P/0/1/0/all/0/1">Petar Popovski</a></dc:creator>
759 </item>
760 <item rdf:about="http://fr.arxiv.org/abs/2010.15174">
761 <title>Improving Perceptual Quality by Phone-Fortified Perceptual Loss for Speech Enhancement. (arXiv:2010.15174v1 [cs.SD])</title>
762 <link>http://fr.arxiv.org/abs/2010.15174</link>
763 <description rdf:parseType="Literal"><p>Speech enhancement (SE) aims to improve speech quality and intelligibility,
764 which are both related to a smooth transition in speech segments that may carry
765 linguistic information, e.g. phones and syllables. In this study, we took
766 phonetic characteristics into account in the SE training process. Hence, we
767 designed a phone-fortified perceptual (PFP) loss, and the training of our SE
768 model was guided by this loss. In the PFP loss, phonetic characteristics are
769 extracted by wav2vec, an unsupervised learning model based on the contrastive
770 predictive coding (CPC) criterion. Different from previous deep-feature-based
771 approaches, the proposed approach explicitly uses the phonetic information in
772 the deep feature extraction process to guide the SE model training. To test the
773 proposed approach, we first confirmed that the wav2vec representations carried
774 clear phonetic information using a t-distributed stochastic neighbor embedding
775 (t-SNE) analysis. Next, we observed that the proposed PFP loss was more
776 strongly correlated with the perceptual evaluation metrics than point-wise and
777 signal-level losses, thus achieving higher scores for standardized quality and
778 intelligibility evaluation metrics in the Voice Bank--DEMAND dataset.
779 </p>
780 </description>
781 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Hsieh_T/0/1/0/all/0/1">Tsun-An Hsieh</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Yu_C/0/1/0/all/0/1">Cheng Yu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Fu_S/0/1/0/all/0/1">Szu-Wei Fu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lu_X/0/1/0/all/0/1">Xugang Lu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Tsao_Y/0/1/0/all/0/1">Yu Tsao</a></dc:creator>
782 </item>
783 <item rdf:about="http://fr.arxiv.org/abs/2010.15187">
784 <title>A Study on Efficiency in Continual Learning Inspired by Human Learning. (arXiv:2010.15187v1 [cs.LG])</title>
785 <link>http://fr.arxiv.org/abs/2010.15187</link>
786 <description rdf:parseType="Literal"><p>Humans are efficient continual learning systems; we continually learn new
787 skills from birth with finite cells and resources. Our learning is highly
788 optimized both in terms of capacity and time while not suffering from
789 catastrophic forgetting. In this work we study the efficiency of continual
790 learning systems, taking inspiration from human learning. In particular,
791 inspired by the mechanisms of sleep, we evaluate popular pruning-based
792 continual learning algorithms, using PackNet as a case study. First, we
793 identify that weight freezing, which is used in continual learning without
794 biological justification, can result in over $2\times$ as many weights being
795 used for a given level of performance. Secondly, we note the similarity in
796 human day and night time behaviors to the training and pruning phases
797 respectively of PackNet. We study a setting where the pruning phase is given a
798 time budget, and identify connections between iterative pruning and multiple
799 sleep cycles in humans. We show there exists an optimal choice of iterations
800 vs. epochs given different tasks.
801 </p>
802 </description>
803 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Ball_P/0/1/0/all/0/1">Philip J. Ball</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Li_Y/0/1/0/all/0/1">Yingzhen Li</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lamb_A/0/1/0/all/0/1">Angus Lamb</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Zhang_C/0/1/0/all/0/1">Cheng Zhang</a></dc:creator>
804 </item>
805 <item rdf:about="http://fr.arxiv.org/abs/2010.15193">
806 <title>Explicit stabilized multirate method for stiff stochastic differential equations. (arXiv:2010.15193v1 [math.NA])</title>
807 <link>http://fr.arxiv.org/abs/2010.15193</link>
808 <description rdf:parseType="Literal"><p>Stabilized explicit methods are particularly efficient for large systems of
809 stiff stochastic differential equations (SDEs) due to their extended stability
810 domain. However, they lose their efficiency when severe stiffness is induced
811 by very few "fast" degrees of freedom, as the stiff and nonstiff terms are
812 evaluated concurrently. Therefore, inspired by [A. Abdulle, M. J. Grote, and G.
813 Rosilho de Souza, Preprint (2020), <a href="/abs/2006.00744">arXiv:2006.00744</a>] we introduce a stochastic
814 modified equation whose stiffness depends solely on the "slow" terms. By
815 integrating this modified equation with a stabilized explicit scheme we devise
816 a multirate method which overcomes the bottleneck caused by a few severely
817 stiff terms and recovers the efficiency of stabilized schemes for large systems
818 of nonlinear SDEs. The scheme is not based on any scale separation assumption
819 of the SDE and therefore it is employable for problems stemming from the
820 spatial discretization of stochastic parabolic partial differential equations
821 on locally refined grids. The multirate scheme has strong order 1/2, weak order
822 1 and its stability is proved on a model problem. Numerical experiments confirm
823 the efficiency and accuracy of the scheme.
824 </p>
825 </description>
826 <dc:creator> <a href="http://fr.arxiv.org/find/math/1/au:+Abdulle_A/0/1/0/all/0/1">Assyr Abdulle</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Souza_G/0/1/0/all/0/1">Giacomo Rosilho de Souza</a></dc:creator>
827 </item>
828 <item rdf:about="http://fr.arxiv.org/abs/2010.15195">
829 <title>Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in First-person Simulated 3D Environments. (arXiv:2010.15195v1 [cs.LG])</title>
830 <link>http://fr.arxiv.org/abs/2010.15195</link>
831 <description rdf:parseType="Literal"><p>First-person object-interaction tasks in high-fidelity, 3D, simulated
832 environments such as the AI2Thor virtual home-environment pose significant
833 sample-efficiency challenges for reinforcement learning (RL) agents learning
834 from sparse task rewards. To alleviate these challenges, prior work has
835 provided extensive supervision via a combination of reward-shaping,
836 ground-truth object-information, and expert demonstrations. In this work, we
837 show that one can learn object-interaction tasks from scratch without
838 supervision by learning an attentive object-model as an auxiliary task during
839 task learning with an object-centric relational RL agent. Our key insight is
840 that learning an object-model that incorporates object-attention into forward
841 prediction provides a dense learning signal for unsupervised representation
842 learning of both objects and their relationships. This, in turn, enables faster
843 policy learning for an object-centric relational RL agent. We demonstrate our
844 agent by introducing a set of challenging object-interaction tasks in the
845 AI2Thor environment where learning with our attentive object-model is key to
846 strong performance. Specifically, we compare our agent and relational RL agents
847 with alternative auxiliary tasks to a relational RL agent equipped with
848 ground-truth object-information, and show that learning with our object-model
849 best closes the performance gap in terms of both learning speed and maximum
850 success rate. Additionally, we find that incorporating object-attention into an
851 object-model's forward predictions is key to learning representations which
852 capture object-category and object-state.
853 </p>
854 </description>
855 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Carvalho_W/0/1/0/all/0/1">Wilka Carvalho</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Liang_A/0/1/0/all/0/1">Anthony Liang</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lee_K/0/1/0/all/0/1">Kimin Lee</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Sohn_S/0/1/0/all/0/1">Sungryull Sohn</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lee_H/0/1/0/all/0/1">Honglak Lee</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lewis_R/0/1/0/all/0/1">Richard L. Lewis</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Singh_S/0/1/0/all/0/1">Satinder Singh</a></dc:creator>
856 </item>
857 <item rdf:about="http://fr.arxiv.org/abs/2010.15196">
858 <title>A fast and scalable computational framework for large-scale and high-dimensional Bayesian optimal experimental design. (arXiv:2010.15196v1 [math.NA])</title>
859 <link>http://fr.arxiv.org/abs/2010.15196</link>
860 <description rdf:parseType="Literal"><p>We develop a fast and scalable computational framework to solve large-scale
861 and high-dimensional Bayesian optimal experimental design problems. In
862 particular, we consider the problem of optimal observation sensor placement for
863 Bayesian inference of high-dimensional parameters governed by partial
864 differential equations (PDEs), which is formulated as an optimization problem
865 that seeks to maximize an expected information gain (EIG). Such optimization
866 problems are particularly challenging due to the curse of dimensionality for
867 high-dimensional parameters and the expensive solution of large-scale PDEs. To
868 address these challenges, we exploit two essential properties of such problems:
869 the low-rank structure of the Jacobian of the parameter-to-observable map to
870 extract the intrinsically low-dimensional data-informed subspace, and the high
871 correlation of the approximate EIGs obtained by a series of approximations to reduce the
872 number of PDE solves. We propose an efficient offline-online decomposition for
873 the optimization problem: an offline stage of computing all the quantities that
874 require a limited number of PDE solves independent of parameter and data
875 dimensions, and an online stage of optimizing sensor placement that does not
876 require any PDE solve. For the online optimization, we propose a swapping
877 greedy algorithm that first constructs an initial set of sensors using leverage
878 scores and then swap the chosen sensors with other candidates until certain
879 convergence criteria are met. We demonstrate the efficiency and scalability of
880 the proposed computational framework by a linear inverse problem of inferring
881 the initial condition for an advection-diffusion equation, and a nonlinear
882 inverse problem of inferring the diffusion coefficient of a log-normal
883 diffusion equation, with both the parameter and data dimensions ranging from a
884 few tens to a few thousands.
885 </p>
886 </description>
887 <dc:creator> <a href="http://fr.arxiv.org/find/math/1/au:+Wu_K/0/1/0/all/0/1">Keyi Wu</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Chen_P/0/1/0/all/0/1">Peng Chen</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Ghattas_O/0/1/0/all/0/1">Omar Ghattas</a></dc:creator>
888 </item>
889 <item rdf:about="http://fr.arxiv.org/abs/2010.15201">
890 <title>Forecasting Hamiltonian dynamics without canonical coordinates. (arXiv:2010.15201v1 [cs.LG])</title>
891 <link>http://fr.arxiv.org/abs/2010.15201</link>
892 <description rdf:parseType="Literal"><p>Conventional neural networks are universal function approximators, but
893 because they are unaware of underlying symmetries or physical laws, they may
894 need impractically many training data to approximate nonlinear dynamics.
895 Recently introduced Hamiltonian neural networks can efficiently learn and
896 forecast dynamical systems that conserve energy, but they require special
897 inputs called canonical coordinates, which may be hard to infer from data. Here
898 we significantly expand the scope of such networks by demonstrating a simple
899 way to train them with any set of generalised coordinates, including easily
900 observable ones.
901 </p>
902 </description>
903 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Choudhary_A/0/1/0/all/0/1">Anshul Choudhary</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lindner_J/0/1/0/all/0/1">John F. Lindner</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Holliday_E/0/1/0/all/0/1">Elliott G. Holliday</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Miller_S/0/1/0/all/0/1">Scott T. Miller</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Sinha_S/0/1/0/all/0/1">Sudeshna Sinha</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Ditto_W/0/1/0/all/0/1">William L. Ditto</a></dc:creator>
904 </item>
905 <item rdf:about="http://fr.arxiv.org/abs/2010.15203">
906 <title>Micromobility in Smart Cities: A Closer Look at Shared Dockless E-Scooters via Big Social Data. (arXiv:2010.15203v1 [cs.SI])</title>
907 <link>http://fr.arxiv.org/abs/2010.15203</link>
908 <description rdf:parseType="Literal"><p>Micromobility is shaping first- and last-mile travel in urban areas.
909 Recently, shared dockless electric scooters (e-scooters) have emerged as a
910 daily alternative to driving for short-distance commuters in large cities due
911 to their affordability, easy accessibility via an app, and zero emissions.
912 Meanwhile, e-scooters come with challenges in city management, such as traffic
913 rules, public safety, parking regulations, and liability issues. In this paper,
914 we collected and investigated 5.8 million scooter-tagged tweets and 144,197
915 images, generated by 2.7 million users from October 2018 to March 2020, to take
916 a closer look at shared e-scooters via crowdsourcing data analytics. We
917 profiled e-scooter usages from spatial-temporal perspectives, explored
918 different business roles (i.e., riders, gig workers, and ridesharing
919 companies), examined operation patterns (e.g., injury types, and parking
920 behaviors), and conducted sentiment analysis. To the best of our knowledge, this paper
921 is the first large-scale systematic study on shared e-scooters using big social
922 data.
923 </p>
924 </description>
925 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Feng_Y/0/1/0/all/0/1">Yunhe Feng</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Zhong_D/0/1/0/all/0/1">Dong Zhong</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Sun_P/0/1/0/all/0/1">Peng Sun</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Zheng_W/0/1/0/all/0/1">Weijian Zheng</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Cao_Q/0/1/0/all/0/1">Qinglei Cao</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Luo_X/0/1/0/all/0/1">Xi Luo</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lu_Z/0/1/0/all/0/1">Zheng Lu</a></dc:creator>
926 </item>
927 <item rdf:about="http://fr.arxiv.org/abs/2010.15206">
928 <title>Rosella: A Self-Driving Distributed Scheduler for Heterogeneous Clusters. (arXiv:2010.15206v1 [cs.DC])</title>
929 <link>http://fr.arxiv.org/abs/2010.15206</link>
930 <description rdf:parseType="Literal"><p>Large-scale interactive web services and advanced AI applications make
931 sophisticated decisions in real-time, based on executing a massive amount of
932 computation tasks on thousands of servers. Task schedulers, which often operate
933 in heterogeneous and volatile environments, require high throughput, i.e.,
934 scheduling millions of tasks per second, and low latency, i.e., incurring
935 minimal scheduling delays for millisecond-level tasks. Scheduling is further
936 complicated by other users' workloads in a shared system, other background
937 activities, and the diverse hardware configurations inside datacenters.
938 </p>
939 <p>We present Rosella, a new self-driving, distributed approach for task
940 scheduling in heterogeneous clusters. Our system automatically learns the
941 compute environment and adjusts its scheduling policy in real time. The solution
942 provides high throughput and low latency simultaneously, because it runs in
943 parallel on multiple machines with minimum coordination and only performs
944 simple operations for each scheduling decision. Our learning module monitors
945 total system load, and uses the information to dynamically determine the optimal
946 estimation strategy for the backends' compute power. Our scheduling policy
947 generalizes power-of-two-choice algorithms to handle heterogeneous workers,
948 reducing the max queue length of $O(\log n)$ obtained by prior algorithms to
949 $O(\log \log n)$. We implement a Rosella prototype and evaluate it with a
950 variety of workloads. Experimental results show that Rosella significantly
951 reduces task response times, and adapts to environment changes quickly.
952 </p>
953 </description>
954 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Wu_Q/0/1/0/all/0/1">Qiong Wu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Manandhar_S/0/1/0/all/0/1">Sunil Manandhar</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Liu_Z/0/1/0/all/0/1">Zhenming Liu</a></dc:creator>
955 </item>
956 <item rdf:about="http://fr.arxiv.org/abs/2010.15209">
957 <title>Ground Roll Suppression using Convolutional Neural Networks. (arXiv:2010.15209v1 [eess.IV])</title>
958 <link>http://fr.arxiv.org/abs/2010.15209</link>
959 <description rdf:parseType="Literal"><p>Seismic data processing plays a major role in seismic exploration as it
960 conditions much of the seismic interpretation performance. In this context,
961 generating reliable post-stack seismic data depends also on disposing of an
962 efficient pre-stack noise attenuation tool. Here we tackle ground roll noise,
963 one of the most challenging and common noises observed in pre-stack seismic
964 data. Since ground roll is characterized by relatively low frequencies and high
965 amplitudes, most commonly used approaches for its suppression are based on
966 frequency-amplitude filters for ground roll characteristic bands. However, when
967 signal and noise share the same frequency ranges, these methods usually also
968 suppress signal or leave residual noise. In this paper we take advantage of
969 the highly non-linear features of convolutional neural networks, and propose to
970 use different architectures to detect ground roll in shot gathers and
971 ultimately to suppress it using conditional generative adversarial networks.
972 Additionally, we propose metrics to evaluate ground roll suppression, and
973 report strong results compared to expert filtering. Finally, we discuss
974 generalization of trained models for similar and different geologies to better
975 understand the feasibility of our proposal in real applications.
976 </p>
977 </description>
978 <dc:creator> <a href="http://fr.arxiv.org/find/eess/1/au:+Oliveira_D/0/1/0/all/0/1">Dario Augusto Borges Oliveira</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Semin_D/0/1/0/all/0/1">Daniil Semin</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Zaytsev_S/0/1/0/all/0/1">Semen Zaytsev</a></dc:creator>
979 </item>
980 <item rdf:about="http://fr.arxiv.org/abs/2010.15210">
981 <title>On Linearizability and the Termination of Randomized Algorithms. (arXiv:2010.15210v1 [cs.DC])</title>
982 <link>http://fr.arxiv.org/abs/2010.15210</link>
983 <description rdf:parseType="Literal"><p>We study the question of whether the "termination with probability 1"
984 property of a randomized algorithm is preserved when one replaces the atomic
985 registers that the algorithm uses with linearizable (implementations of)
986 registers. We show that in general this is not so: roughly speaking, every
987 randomized algorithm A has a corresponding algorithm A' that solves the same
988 problem if the registers that it uses are atomic or strongly-linearizable, but
989 does not terminate if these registers are replaced with "merely" linearizable
990 ones. Together with a previous result shown in [15], this implies that one
991 cannot use the well-known ABD implementation of registers in message-passing
992 systems to automatically transform any randomized algorithm that works in
993 shared-memory systems into a randomized algorithm that works in message-passing
994 systems: with a strong adversary the resulting algorithm may not terminate.
995 </p>
996 </description>
997 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Hadzilacos_V/0/1/0/all/0/1">Vassos Hadzilacos</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Hu_X/0/1/0/all/0/1">Xing Hu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Toueg_S/0/1/0/all/0/1">Sam Toueg</a></dc:creator>
998 </item>
999 <item rdf:about="http://fr.arxiv.org/abs/2010.15211">
1000 <title>Safety-Aware Cascade Controller Tuning Using Constrained Bayesian Optimization. (arXiv:2010.15211v1 [eess.SY])</title>
1001 <link>http://fr.arxiv.org/abs/2010.15211</link>
1002 <description rdf:parseType="Literal"><p>This paper presents an automated, model-free, data-driven method for the safe
1003 tuning of PID cascade controller gains based on Bayesian optimization. The
1004 optimization objective is composed of data-driven performance metrics and
1005 modeled using Gaussian processes. We further introduce a data-driven constraint
1006 that captures the stability requirements from system data. Numerical evaluation
1007 shows that the proposed approach outperforms relay feedback autotuning and
1008 quickly converges to the global optimum, thanks to a tailored stopping
1009 criterion. We demonstrate the performance of the method in simulations and
1010 experiments on a linear axis drive of a grinding machine. For experimental
1011 implementation, in addition to the introduced safety constraint, we integrate a
1012 method for automatic detection of the critical gains and extend the
1013 optimization objective with a penalty depending on the proximity of the current
1014 candidate points to the critical gains. The resulting automated tuning method
1015 optimizes system performance while ensuring stability and standardization.
1016 </p>
1017 </description>
1018 <dc:creator> <a href="http://fr.arxiv.org/find/eess/1/au:+Konig_C/0/1/0/all/0/1">Christopher K&#xf6;nig</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Khosravi_M/0/1/0/all/0/1">Mohammad Khosravi</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Maier_M/0/1/0/all/0/1">Markus Maier</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Smith_R/0/1/0/all/0/1">Roy S. Smith</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Rupenyan_A/0/1/0/all/0/1">Alisa Rupenyan</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Lygeros_J/0/1/0/all/0/1">John Lygeros</a></dc:creator>
1019 </item>
1020 <item rdf:about="http://fr.arxiv.org/abs/2010.15217">
1021 <title>Away from Trolley Problems and Toward Risk Management. (arXiv:2010.15217v1 [cs.CY])</title>
1022 <link>http://fr.arxiv.org/abs/2010.15217</link>
1023 <description rdf:parseType="Literal"><p>As automated vehicles receive more attention from the media, there has been
1024 an equivalent increase in the coverage of the ethical choices a vehicle may be
1025 forced to make in certain crash situations with no clear safe outcome. Much of
1026 this coverage has focused on a philosophical thought experiment known as the
1027 "trolley problem," and substituting an automated vehicle for the trolley and
1028 the car's software for the bystander. While this is a stark and straightforward
1029 example of ethical decision making for an automated vehicle, it risks
1030 marginalizing the entire field if it is to become the only ethical problem in
1031 the public's mind. In this chapter, I discuss the shortcomings of the trolley
1032 problem, and introduce more nuanced examples that involve crash risk and
1033 uncertainty. Risk management is introduced as an alternative approach, and its
1034 ethical dimensions are discussed.
1035 </p>
1036 </description>
1037 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Goodall_N/0/1/0/all/0/1">Noah J. Goodall</a></dc:creator>
1038 </item>
1039 <item rdf:about="http://fr.arxiv.org/abs/2010.15218">
1040 <title>StencilFlow: Mapping Large Stencil Programs to Distributed Spatial Computing Systems. (arXiv:2010.15218v1 [cs.DC])</title>
1041 <link>http://fr.arxiv.org/abs/2010.15218</link>
1042 <description rdf:parseType="Literal"><p>Spatial computing devices have been shown to significantly accelerate stencil
1043 computations, but have so far relied on unrolling the iterative dimension of a
1044 single stencil operation to increase temporal locality. This work considers the
1045 general case of mapping directed acyclic graphs of heterogeneous stencil
1046 computations to spatial computing systems, assuming large input programs
1047 without an iterative component. StencilFlow maximizes temporal locality and
1048 ensures deadlock freedom in this setting, providing end-to-end analysis and
1049 mapping from a high-level program description to distributed hardware. We
1050 evaluate the generated architectures on an FPGA testbed, demonstrating the
1051 highest single-device and multi-device performance recorded for stencil
1052 programs on FPGAs to date, then leverage the framework to study a complex
1053 stencil program from a production weather simulation application. Our work
1054 enables productively targeting distributed spatial computing systems with large
1055 stencil programs, and offers insight into architecture characteristics required
1056 for their efficient execution in practice.
1057 </p>
1058 </description>
1059 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Licht_J/0/1/0/all/0/1">Johannes de Fine Licht</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kuster_A/0/1/0/all/0/1">Andreas Kuster</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Matteis_T/0/1/0/all/0/1">Tiziano De Matteis</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Ben_Nun_T/0/1/0/all/0/1">Tal Ben-Nun</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Hofer_D/0/1/0/all/0/1">Dominic Hofer</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Hoefler_T/0/1/0/all/0/1">Torsten Hoefler</a></dc:creator>
1060 </item>
1061 <item rdf:about="http://fr.arxiv.org/abs/2010.15221">
1062 <title>Geometric Sampling of Networks. (arXiv:2010.15221v1 [math.DG])</title>
1063 <link>http://fr.arxiv.org/abs/2010.15221</link>
1064 <description rdf:parseType="Literal"><p>Motivated by the methods and results of manifold sampling based on Ricci
1065 curvature, we propose a similar approach for networks. To this end we appeal
1066 to three types of discrete curvature, namely the graph Forman-, full
1067 Forman- and Haantjes-Ricci curvatures for edge-based and node-based sampling.
1068 We present the results of experiments on real life networks, as well as for
1069 square grids arising in Image Processing. Moreover, we consider fitting Ricci
1070 flows and we employ them for the detection of networks' backbone. We also
1071 develop embedding kernels related to the Forman-Ricci curvatures and employ
1072 them for the detection of the coarse structure of networks, as well as for
1073 network visualization with applications to SVM. The relation between the Ricci
1074 curvature of the original manifold and that of a Ricci curvature driven
1075 discretization is also studied.
1076 </p>
1077 </description>
1078 <dc:creator> <a href="http://fr.arxiv.org/find/math/1/au:+Barkanass_V/0/1/0/all/0/1">Vladislav Barkanass</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Jost_J/0/1/0/all/0/1">J&#xfc;rgen Jost</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Saucan_E/0/1/0/all/0/1">Emil Saucan</a></dc:creator>
1079 </item>
1080 <item rdf:about="http://fr.arxiv.org/abs/2010.15222">
1081 <title>Exploring complex networks with the ICON R package. (arXiv:2010.15222v1 [cs.SI])</title>
1082 <link>http://fr.arxiv.org/abs/2010.15222</link>
1083 <description rdf:parseType="Literal"><p>We introduce ICON, an R package that contains 1075 complex network datasets
1084 in a standard edgelist format. All provided datasets have associated citations
1085 and have been indexed by the Colorado Index of Complex Networks - also referred
1086 to as ICON. In addition to supplying a large and diverse corpus of useful
1087 real-world networks, ICON also implements an S3 generic to work with the
1088 network and ggnetwork R packages for network analysis and visualization,
1089 respectively. Sample code in this report also demonstrates how ICON can be used
1090 in conjunction with the igraph package. Currently, the Comprehensive R Archive
1091 Network hosts ICON v0.4.0. We hope that ICON will serve as a standard corpus
1092 for complex network research and prevent redundant work that would be otherwise
1093 necessary by individual research groups. The open source code for ICON and for
1094 this reproducible report can be found at https://github.com/rrrlw/ICON.
1095 </p>
1096 </description>
1097 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Wadhwa_R/0/1/0/all/0/1">Raoul R. Wadhwa</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Scott_J/0/1/0/all/0/1">Jacob G. Scott</a></dc:creator>
1098 </item>
1099 <item rdf:about="http://fr.arxiv.org/abs/2010.15225">
1100 <title>A Visuospatial Dataset for Naturalistic Verb Learning. (arXiv:2010.15225v1 [cs.CL])</title>
1101 <link>http://fr.arxiv.org/abs/2010.15225</link>
1102 <description rdf:parseType="Literal"><p>We introduce a new dataset for training and evaluating grounded language
1103 models. Our data is collected within a virtual reality environment and is
1104 designed to emulate the quality of language data to which a pre-verbal child is
1105 likely to have access: That is, naturalistic, spontaneous speech paired with
1106 richly grounded visuospatial context. We use the collected data to compare
1107 several distributional semantics models for verb learning. We evaluate neural
1108 models based on 2D (pixel) features as well as feature-engineered models based
1109 on 3D (symbolic, spatial) features, and show that neither modeling approach
1110 achieves satisfactory performance. Our results are consistent with evidence
1111 from child language acquisition that emphasizes the difficulty of learning
1112 verbs from naive distributional data. We discuss avenues for future work on
1113 cognitively-inspired grounded language learning, and release our corpus with
1114 the intent of facilitating research on the topic.
1115 </p>
1116 </description>
1117 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Ebert_D/0/1/0/all/0/1">Dylan Ebert</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Pavlick_E/0/1/0/all/0/1">Ellie Pavlick</a></dc:creator>
1118 </item>
1119 <item rdf:about="http://fr.arxiv.org/abs/2010.15229">
1120 <title>Speech-Based Emotion Recognition using Neural Networks and Information Visualization. (arXiv:2010.15229v1 [cs.HC])</title>
1121 <link>http://fr.arxiv.org/abs/2010.15229</link>
1122 <description rdf:parseType="Literal"><p>Emotion recognition is commonly employed for health assessment. However, the
1123 typical metric for evaluation in therapy is based on patient-doctor appraisal.
1124 This process can fall into the issue of subjectivity, while also requiring
1125 healthcare professionals to deal with copious amounts of information. Thus,
1126 machine learning algorithms can be a useful tool for the classification of
1127 emotions. While several models have been developed in this domain, there is a
1128 lack of user-friendly representations of the emotion classification systems for
1129 therapy. We propose a tool which enables users to take speech samples and
1130 identify a range of emotions (happy, sad, angry, surprised, neutral, calm,
1131 disgust, and fear) from audio elements through a machine learning model. The
1132 dashboard is designed based on local therapists' needs for intuitive
1133 representations of speech data in order to gain insights and informative
1134 analyses of their sessions with their patients.
1135 </p>
1136 </description>
1137 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Almahmoud_J/0/1/0/all/0/1">Jumana Almahmoud</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kikkeri_K/0/1/0/all/0/1">Kruthika Kikkeri</a></dc:creator>
1138 </item>
1139 <item rdf:about="http://fr.arxiv.org/abs/2010.15232">
1140 <title>Construction Payment Automation Using Blockchain-Enabled Smart Contracts and Reality Capture Technologies. (arXiv:2010.15232v1 [cs.CR])</title>
1141 <link>http://fr.arxiv.org/abs/2010.15232</link>
1142 <description rdf:parseType="Literal"><p>This paper presents a smart contract-based solution for autonomous
1143 administration of construction progress payments. It bridges the gap between
1144 payments (cash flow) and the progress assessments at job sites (product flow)
1145 enabled by reality capture technologies and building information modeling
1146 (BIM). The approach eliminates the reliance on the centralized and heavily
1147 intermediated mechanisms of existing payment applications. The construction
1148 progress is stored in a distributed manner using content addressable file
1149 sharing; it is broadcasted to a smart contract which automates the on-chain
1150 payment settlements and the transfer of lien rights. The method was
1151 successfully used for processing payments to 7 subcontractors in two commercial
1152 construction projects where progress monitoring was performed using a
1153 camera-equipped unmanned aerial vehicle (UAV) and an unmanned ground vehicle
1154 (UGV) equipped with a laser scanner. The results show promise for the method's
1155 potential for increasing the frequency, granularity, and transparency of
1156 payments. The paper concludes with a discussion of implications for project
1157 management, introducing a new model of project as a singleton state machine.
1158 </p>
1159 </description>
1160 <dc:creator> <a href="http://fr.arxiv.org/find/cs/1/au:+Hamledari_H/0/1/0/all/0/1">Hesam Hamledari</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Fischer_M/0/1/0/all/0/1">Martin Fischer</a></dc:creator>
1161 </item>
1162 <item rdf:about="http://fr.arxiv.org/abs/2010.15233">
1163 <title>Accurate Prostate Cancer Detection and Segmentation on Biparametric MRI using Non-local Mask R-CNN with Histopathological Ground Truth. (arXiv:2010.15233v1 [eess.IV])</title>
1164 <link>http://fr.arxiv.org/abs/2010.15233</link>
1165 <description rdf:parseType="Literal"><p>Purpose: We aimed to develop deep machine learning (DL) models to improve the
1166 detection and segmentation of intraprostatic lesions (IL) on bp-MRI by using
1167 whole amount prostatectomy specimen-based delineations. We also aimed to
1168 investigate whether transfer learning and self-training would improve results
1169 with a small amount of labelled data.
1170 </p>
1171 <p>Methods: 158 patients had suspicious lesions delineated on MRI based on
1172 bp-MRI, 64 patients had ILs delineated on MRI based on whole mount
1173 prostatectomy specimen sections, 40 patients were unlabelled. A non-local Mask
1174 R-CNN was proposed to improve the segmentation accuracy. Transfer learning was
1175 investigated by fine-tuning a model trained using MRI-based delineations with
1176 prostatectomy-based delineations. Two label selection strategies were
1177 investigated in self-training. The performance of models was evaluated by 3D
1178 detection rate, Dice similarity coefficient (DSC), 95th percentile Hausdorff
1179 distance (95 HD, mm) and true positive ratio (TPR).
1180 </p>
1181 <p>Results: With prostatectomy-based delineations, the non-local Mask R-CNN with
1182 fine-tuning and self-training significantly improved all evaluation metrics.
1183 For the model with the highest detection rate and DSC, 80.5% (33/41) of lesions
1184 in all Gleason Grade Groups (GGG) were detected with DSC of 0.548[0.165], 95 HD
1185 of 5.72[3.17] and TPR of 0.613[0.193]. Among them, 94.7% (18/19) of lesions
1186 with GGG &gt; 2 were detected with DSC of 0.604[0.135], 95 HD of 6.26[3.44] and
1187 TPR of 0.580[0.190].
1188 </p>
1189 <p>Conclusion: DL models can achieve high prostate cancer detection and
1190 segmentation accuracy on bp-MRI based on annotations from histologic images. To
1191 further improve the performance, more data with annotations of both MRI and
1192 whole mount prostatectomy specimens are required.
1193 </p>
1194 </description>