arxiv.org.rss.20.xml - sfeed_tests - sfeed tests and RSS and Atom files
HTML git clone git://git.codemadness.org/sfeed_tests
---
arxiv.org.rss.20.xml (832069B)
---
1 <?xml version="1.0" encoding="UTF-8"?>
2
3 <rss version="2.0"
4 xmlns:content="http://purl.org/rss/1.0/modules/content/"
5 xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/"
6 xmlns:dc="http://purl.org/dc/elements/1.1/"
7 xmlns:syn="http://purl.org/rss/1.0/modules/syndication/"
8 xmlns:admin="http://webns.net/mvcb/"
9 >
10
11 <channel>
12 <title>cs updates on arXiv.org</title>
13 <link>http://fr.arxiv.org/</link>
14 <description>Computer Science (cs) updates on the arXiv.org e-print archive</description>
15 <language>en-us</language>
16 <pubDate>Fri, 30 Oct 2020 00:30:00 GMT</pubDate>
17 <lastBuildDate>Thu, 29 Oct 2020 20:30:00 -0500</lastBuildDate>
18 <managingEditor>www-admin@arxiv.org</managingEditor>
19
20 <image>
21 <title>arXiv.org</title>
22 <url>http://fr.arxiv.org/icons/sfx.gif</url>
23 <link>http://fr.arxiv.org/</link>
24 </image>
25 <item>
26 <title>Raw Audio for Depression Detection Can Be More Robust Against Gender Imbalance than Mel-Spectrogram Features. (arXiv:2010.15120v1 [cs.SD])</title>
27 <link>http://fr.arxiv.org/abs/2010.15120</link>
28 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Bailey_A/0/1/0/all/0/1">Andrew Bailey</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Plumbley_M/0/1/0/all/0/1">Mark D. Plumbley</a></p>
29
30 <p>Depression is a large-scale mental health problem, and its detection is a
31 challenging area for machine learning researchers. Datasets
32 such as the Distress Analysis Interview Corpus - Wizard of Oz have been created
33 to aid research in this area. However, on top of the challenges inherent in
34 accurately detecting depression, biases in datasets may result in skewed
35 classification performance. In this paper we examine gender bias in the
36 DAIC-WOZ dataset using audio-based deep neural networks. We show that gender
37 biases in DAIC-WOZ can lead to an overreporting of performance, which has been
38 overlooked in the past due to the same gender biases being present in the test
39 set. By using raw audio and different concepts from Fair Machine Learning, such
40 as data re-distribution, we can mitigate the harmful effects of bias.
41 </p>
42 </description>
43 <guid isPermaLink="false">oai:arXiv.org:2010.15120</guid>
44 </item>
45 <item>
46 <title>papaya2: 2D Irreducible Minkowski Tensor computation. (arXiv:2010.15138v1 [cs.GR])</title>
47 <link>http://fr.arxiv.org/abs/2010.15138</link>
48 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Schaller_F/0/1/0/all/0/1">Fabian M. Schaller</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Wagner_J/0/1/0/all/0/1">Jenny Wagner</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kapfer_S/0/1/0/all/0/1">Sebastian C. Kapfer</a></p>
49
50 <p>A common challenge in scientific and technical domains is the quantitative
51 description of geometries and shapes, e.g. in the analysis of microscope
52 imagery or astronomical observation data. Frequently, it is desirable to go
53 beyond scalar shape metrics such as porosity and surface-to-volume ratios
54 because the samples are anisotropic or because direction-dependent quantities
55 such as conductances or elasticity are of interest. Minkowski Tensors are a
56 systematic family of versatile and robust higher-order shape descriptors that
57 allow for shape characterization of arbitrary order and promise a path to
58 systematic structure-function relationships for direction-dependent properties.
59 Papaya2 is software to calculate 2D higher-order shape metrics with a library
60 interface, support for Irreducible Minkowski Tensors and interpolated marching
61 squares. Extensions to Matlab, JavaScript and Python are provided as well.
62 While the tensor of inertia is computed by many tools, we are not aware of
63 other open-source software which provides higher-rank shape characterization in
64 2D.
65 </p>
66 </description>
67 <guid isPermaLink="false">oai:arXiv.org:2010.15138</guid>
68 </item>
69 <item>
70 <title>DeSMOG: Detecting Stance in Media On Global Warming. (arXiv:2010.15149v1 [cs.CL])</title>
71 <link>http://fr.arxiv.org/abs/2010.15149</link>
72 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Luo_Y/0/1/0/all/0/1">Yiwei Luo</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Card_D/0/1/0/all/0/1">Dallas Card</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Jurafsky_D/0/1/0/all/0/1">Dan Jurafsky</a></p>
73
74 <p>Citing opinions is a powerful yet understudied strategy in argumentation. For
75 example, an environmental activist might say, "Leading scientists agree that
76 global warming is a serious concern," framing a clause which affirms their own
77 stance ("that global warming is serious") as an opinion endorsed ("[scientists]
78 agree") by a reputable source ("leading"). In contrast, a global warming denier
79 might frame the same clause as the opinion of an untrustworthy source with a
80 predicate connoting doubt: "Mistaken scientists claim [...]." Our work studies
81 opinion-framing in the global warming (GW) debate, an increasingly partisan
82 issue that has received little attention in NLP. We introduce DeSMOG, a dataset
83 of stance-labeled GW sentences, and train a BERT classifier to study novel
84 aspects of argumentation in how different sides of a debate represent their own
85 and each other's opinions. From 56K news articles, we find that similar
86 linguistic devices for self-affirming and opponent-doubting discourse are used
87 across GW-accepting and skeptic media, though GW-skeptical media shows more
88 opponent-doubt. We also find that authors often characterize sources as
89 hypocritical, by ascribing opinions expressing the author's own view to source
90 entities known to publicly endorse the opposing view. We release our stance
91 dataset, model, and lexicons of framing devices for future work on
92 opinion-framing and the automatic detection of GW stance.
93 </p>
94 </description>
95 <guid isPermaLink="false">oai:arXiv.org:2010.15149</guid>
96 </item>
97 <item>
98 <title>On the Optimality and Convergence Properties of the Learning Model Predictive Controller. (arXiv:2010.15153v1 [math.OC])</title>
99 <link>http://fr.arxiv.org/abs/2010.15153</link>
100 <description><p>Authors: <a href="http://fr.arxiv.org/find/math/1/au:+Rosolia_U/0/1/0/all/0/1">Ugo Rosolia</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Lian_Y/0/1/0/all/0/1">Yingzhao Lian</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Maddalena_E/0/1/0/all/0/1">Emilio T. Maddalena</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Ferrari_Trecate_G/0/1/0/all/0/1">Giancarlo Ferrari-Trecate</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Jones_C/0/1/0/all/0/1">Colin N. Jones</a></p>
101
102 <p>In this technical note we analyse the performance improvement and optimality
103 properties of the Learning Model Predictive Control (LMPC) strategy for linear
104 deterministic systems. The LMPC framework is a policy iteration scheme where
105 closed-loop trajectories are used to update the control policy for the next
106 execution of the control task. We show that, when a Linear Independence
107 Constraint Qualification (LICQ) condition holds, the LMPC scheme guarantees
108 strict iterative performance improvement and optimality, meaning that the
109 closed-loop cost evaluated over the entire task converges asymptotically to the
110 optimal cost of the infinite-horizon control problem. Compared to previous
111 works, this sufficient LICQ condition can be easily checked, holds for a
112 larger class of systems, and can be used to adaptively select the prediction
113 horizon of the controller, as demonstrated by a numerical example.
114 </p>
115 </description>
116 <guid isPermaLink="false">oai:arXiv.org:2010.15153</guid>
117 </item>
118 <item>
119 <title>Kernel Aggregated Fast Multipole Method: Efficient summation of Laplace and Stokes kernel functions. (arXiv:2010.15155v1 [math.NA])</title>
120 <link>http://fr.arxiv.org/abs/2010.15155</link>
121 <description><p>Authors: <a href="http://fr.arxiv.org/find/math/1/au:+Yan_W/0/1/0/all/0/1">Wen Yan</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Blackwell_R/0/1/0/all/0/1">Robert Blackwell</a></p>
122
123 <p>Many different simulation methods for Stokes flow problems involve a common
124 computationally intensive task---the summation of a kernel function over $O(N^2)$
125 pairs of points. One popular technique is the Kernel Independent Fast Multipole
126 Method (KIFMM), which constructs a spatial adaptive octree and places a small
127 number of equivalent multipole and local points around each octree box, and
128 completes the kernel sum with $O(N)$ performance. However, the KIFMM cannot be
129 used directly with nonlinear kernels, can be inefficient for complicated linear
130 kernels, and in general is difficult to implement compared to less-efficient
131 alternatives such as Ewald-type methods. Here we present the Kernel Aggregated
132 Fast Multipole Method (KAFMM), which overcomes these drawbacks by allowing
133 different kernel functions to be used for specific stages of octree traversal.
134 In many cases a simpler linear kernel suffices during the most extensive stage
135 of octree traversal, even for nonlinear kernel summation problems. The KAFMM
136 thereby improves computational efficiency in general and also allows efficient
137 evaluation of some nonlinear kernel functions such as the regularized
138 Stokeslet. We have implemented our method as an open-source software library
139 STKFMM with support for Laplace kernels, the Stokeslet, regularized Stokeslet,
140 Rotne-Prager-Yamakawa (RPY) tensor, and the Stokes double-layer and traction
141 operators. Open and periodic boundary conditions are supported for all kernels,
142 and the no-slip wall boundary condition is supported for the Stokeslet and RPY
143 tensor. The package is designed to be ready-to-use as well as being readily
144 extensible to additional kernels. Massive parallelism is supported with mixed
145 OpenMP and MPI.
146 </p>
147 </description>
148 <guid isPermaLink="false">oai:arXiv.org:2010.15155</guid>
149 </item>
150 <item>
151 <title>Diagnostic data integration using deep neural networks for real-time plasma analysis. (arXiv:2010.15156v1 [physics.comp-ph])</title>
152 <link>http://fr.arxiv.org/abs/2010.15156</link>
153 <description><p>Authors: <a href="http://fr.arxiv.org/find/physics/1/au:+Garola_A/0/1/0/all/0/1">A. Rigoni Garola</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Cavazzana_R/0/1/0/all/0/1">R. Cavazzana</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Gobbin_M/0/1/0/all/0/1">M. Gobbin</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Delogu_R/0/1/0/all/0/1">R.S. Delogu</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Manduchi_G/0/1/0/all/0/1">G. Manduchi</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Taliercio_C/0/1/0/all/0/1">C. Taliercio</a>, <a href="http://fr.arxiv.org/find/physics/1/au:+Luchetta_A/0/1/0/all/0/1">A. Luchetta</a></p>
154
155 <p>Recent advances in acquisition equipment are providing experiments with
156 growing numbers of precise yet affordable sensors. At the same time, improved
157 computational power, coming from new hardware resources (GPU, FPGA, ACAP), has
158 been made available at relatively low cost. This led us to explore the
159 possibility of completely renewing the chain of acquisition for a fusion
160 experiment, where many high-rate sources of data, coming from different
161 diagnostics, can be combined in a wide framework of algorithms. If on one hand
162 adding new data sources with different diagnostics enriches our knowledge about
163 physical aspects, on the other hand the dimensions of the overall model grow,
164 making relations among variables more and more opaque. A new approach for the
165 integration of such heterogeneous diagnostics, based on composition of deep
166 \textit{variational autoencoders}, could ease this problem, acting as a
167 structural sparse regularizer. This has been applied to RFX-mod experiment
168 data, integrating the soft X-ray linear images of plasma temperature with the
169 magnetic state.
170 </p>
171 <p>However, to ensure real-time signal analysis, these algorithmic techniques
172 must be adapted to run on well-suited hardware. In particular, it is shown that,
173 by quantizing the neuron transfer functions, such models can be
174 modified to create embedded firmware. This firmware, approximating the deep
175 inference model with a set of simple operations, fits well with the simple logic
176 units that are largely abundant in FPGAs. This is the key factor that permits
177 the use of affordable hardware to run complex deep neural topologies
178 in real-time.
179 </p>
180 </description>
181 <guid isPermaLink="false">oai:arXiv.org:2010.15156</guid>
182 </item>
183 <item>
184 <title>Panoster: End-to-end Panoptic Segmentation of LiDAR Point Clouds. (arXiv:2010.15157v1 [cs.CV])</title>
185 <link>http://fr.arxiv.org/abs/2010.15157</link>
186 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Gasperini_S/0/1/0/all/0/1">Stefano Gasperini</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Mahani_M/0/1/0/all/0/1">Mohammad-Ali Nikouei Mahani</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Marcos_Ramiro_A/0/1/0/all/0/1">Alvaro Marcos-Ramiro</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Navab_N/0/1/0/all/0/1">Nassir Navab</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Tombari_F/0/1/0/all/0/1">Federico Tombari</a></p>
187
188 <p>Panoptic segmentation has recently unified semantic and instance
189 segmentation, previously addressed separately, thus taking a step further
190 towards creating more comprehensive and efficient perception systems. In this
191 paper, we present Panoster, a novel proposal-free panoptic segmentation method
192 for point clouds. Unlike previous approaches relying on several steps to group
193 pixels or points into objects, Panoster proposes a simplified framework
194 incorporating a learning-based clustering solution to identify instances. At
195 inference time, this acts as a class-agnostic semantic segmentation, allowing
196 Panoster to be fast, while outperforming prior methods in terms of accuracy.
197 Additionally, we showcase how our approach can be flexibly and effectively
198 applied on diverse existing semantic architectures to deliver panoptic
199 predictions.
200 </p>
201 </description>
202 <guid isPermaLink="false">oai:arXiv.org:2010.15157</guid>
203 </item>
204 <item>
205 <title>CNN Profiler on Polar Coordinate Images for Tropical Cyclone Structure Analysis. (arXiv:2010.15158v1 [cs.CV])</title>
206 <link>http://fr.arxiv.org/abs/2010.15158</link>
207 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Chen_B/0/1/0/all/0/1">Boyo Chen</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Chen_B/0/1/0/all/0/1">Buo-Fu Chen</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Hsiao_C/0/1/0/all/0/1">Chun-Min Hsiao</a></p>
208
209 <p>Convolutional neural networks (CNN) have achieved great success in analyzing
210 tropical cyclones (TC) with satellite images in several tasks, such as TC
211 intensity estimation. In contrast, TC structure, which is conventionally
212 described by a few parameters estimated subjectively by meteorology
213 specialists, is still hard to profile objectively and routinely. This study
214 applies CNN on satellite images to create the entire TC structure profiles,
215 covering all the structural parameters. By utilizing the meteorological domain
216 knowledge to construct TC wind profiles based on historical structure
217 parameters, we provide valuable labels for training in our newly released
218 benchmark dataset. With such a dataset, we hope to attract more attention to
219 this crucial issue among data scientists. Meanwhile, a baseline is established
220 with a specialized convolutional model operating on polar coordinates. We
221 discovered that it is more feasible and physically reasonable to extract
222 structural information on polar coordinates, instead of Cartesian coordinates,
223 given a TC's rotational and spiral nature. Experimental results on the
224 released benchmark dataset verified the robustness of the proposed model and
225 demonstrated the potential for applying deep learning techniques for this
226 barely developed yet important topic.
227 </p>
228 </description>
229 <guid isPermaLink="false">oai:arXiv.org:2010.15158</guid>
230 </item>
231 <item>
232 <title>Sizeless: Predicting the optimal size of serverless functions. (arXiv:2010.15162v1 [cs.DC])</title>
233 <link>http://fr.arxiv.org/abs/2010.15162</link>
234 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Eismann_S/0/1/0/all/0/1">Simon Eismann</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Bui_L/0/1/0/all/0/1">Long Bui</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Grohmann_J/0/1/0/all/0/1">Johannes Grohmann</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Abad_C/0/1/0/all/0/1">Cristina L. Abad</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Herbst_N/0/1/0/all/0/1">Nikolas Herbst</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kounev_S/0/1/0/all/0/1">Samuel Kounev</a></p>
235
236 <p>Serverless functions are a cloud computing paradigm that reduces operational
237 overheads for developers, because the cloud provider takes care of resource
238 management tasks such as resource provisioning, deployment, and auto-scaling.
239 The only resource management task that developers are still in charge of is
240 resource sizing, that is, selecting how many resources are allocated to each
241 worker instance. However, due to the challenging nature of resource sizing,
242 developers often neglect it despite its significant cost and performance
243 benefits. Existing approaches aiming to automate serverless function resource
244 sizing require dedicated performance tests, which are time-consuming to
245 implement and maintain.
246 </p>
247 <p>In this paper, we introduce Sizeless -- an approach to predict the optimal
248 resource size of a serverless function using monitoring data from a single
249 resource size. As our approach requires only production monitoring data,
250 developers no longer need to implement and maintain representative performance
251 tests. Furthermore, it enables cloud providers, which cannot engage in testing
252 the performance of user functions, to implement resource sizing on a platform
253 level and automate the last resource management task associated with serverless
254 functions. In our evaluation, Sizeless was able to predict the execution time
255 of the serverless functions of a realistic serverless application with a
256 median prediction accuracy of 93.1%. Using Sizeless to optimize the memory size
257 of this application results in a speedup of 16.7% while simultaneously
258 decreasing costs by 2.5%.
259 </p>
260 </description>
261 <guid isPermaLink="false">oai:arXiv.org:2010.15162</guid>
262 </item>
263 <item>
264 <title>Polymer Informatics with Multi-Task Learning. (arXiv:2010.15166v1 [cond-mat.mtrl-sci])</title>
265 <link>http://fr.arxiv.org/abs/2010.15166</link>
266 <description><p>Authors: <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Kunneth_C/0/1/0/all/0/1">Christopher K&#xfc;nneth</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Rajan_A/0/1/0/all/0/1">Arunkumar Chitteth Rajan</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Tran_H/0/1/0/all/0/1">Huan Tran</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Chen_L/0/1/0/all/0/1">Lihua Chen</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Kim_C/0/1/0/all/0/1">Chiho Kim</a>, <a href="http://fr.arxiv.org/find/cond-mat/1/au:+Ramprasad_R/0/1/0/all/0/1">Rampi Ramprasad</a></p>
267
268 <p>Modern data-driven tools are transforming application-specific polymer
269 development cycles. Surrogate models that can be trained to predict the
270 properties of new polymers are becoming commonplace. Nevertheless, these models
271 do not utilize the full breadth of the knowledge available in datasets, which
272 are oftentimes sparse; inherent correlations between different property
273 datasets are disregarded. Here, we demonstrate the potency of multi-task
274 learning approaches that exploit such inherent correlations effectively,
275 particularly when some property dataset sizes are small. Data pertaining to 36
276 different properties of over $13,000$ polymers (corresponding to over $23,000$
277 data points) are coalesced and supplied to deep-learning multi-task
278 architectures. Compared to conventional single-task learning models (that are
279 trained on individual property datasets independently), the multi-task approach
280 is accurate, efficient, scalable, and amenable to transfer learning as more
281 data on the same or different properties become available. Moreover, these
282 models are interpretable. Chemical rules that explain how certain features
283 control trends in specific property values emerge from the present work,
284 paving the way for the rational design of application-specific polymers meeting
285 desired property or performance objectives.
286 </p>
287 </description>
288 <guid isPermaLink="false">oai:arXiv.org:2010.15166</guid>
289 </item>
290 <item>
291 <title>Semi-Grant-Free NOMA: Ergodic Rates Analysis with Random Deployed Users. (arXiv:2010.15169v1 [cs.IT])</title>
292 <link>http://fr.arxiv.org/abs/2010.15169</link>
293 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Zhang_C/0/1/0/all/0/1">Chao Zhang</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Liu_Y/0/1/0/all/0/1">Yuanwei Liu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Yi_W/0/1/0/all/0/1">Wenqiang Yi</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Qin_Z/0/1/0/all/0/1">Zhijin Qin</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Ding_Z/0/1/0/all/0/1">Zhiguo Ding</a></p>
294
295 <p>Semi-grant-free (Semi-GF) non-orthogonal multiple access (NOMA) enables
296 grant-free (GF) and grant-based (GB) users to share the same resource blocks,
297 thereby balancing the connectivity and stability of communications. This letter
298 analyzes ergodic rates of Semi-GF NOMA systems. First, this paper exploits a
299 Semi-GF protocol, denoted as dynamic protocol, for selecting GF users into the
300 occupied GB channels via the GB user's instantaneous received power. Under this
301 protocol, the closed-form analytical and approximated expressions for ergodic
302 rates are derived. The numerical results illustrate that the GF user (weak NOMA
303 user) has a performance upper limit, while the ergodic rate of the GB user
304 (strong NOMA user) increases linearly versus the transmit signal-to-noise
305 ratio.
306 </p>
307 </description>
308 <guid isPermaLink="false">oai:arXiv.org:2010.15169</guid>
309 </item>
310 <item>
311 <title>Slicing a single wireless collision channel among throughput- and timeliness-sensitive services. (arXiv:2010.15171v1 [cs.IT])</title>
312 <link>http://fr.arxiv.org/abs/2010.15171</link>
313 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Leyva_Mayorga_I/0/1/0/all/0/1">Israel Leyva-Mayorga</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Chiariotti_F/0/1/0/all/0/1">Federico Chiariotti</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Stefanovic_C/0/1/0/all/0/1">&#x10c;edomir Stefanovi&#x107;</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kalor_A/0/1/0/all/0/1">Anders E. Kal&#xf8;r</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Popovski_P/0/1/0/all/0/1">Petar Popovski</a></p>
314
315 <p>The fifth generation (5G) wireless system has a platform-driven approach,
316 aiming to support heterogeneous connections with very diverse requirements. The
317 shared wireless resources should be sliced in a way that each user perceives
318 that its requirement has been met. Heterogeneity challenges the traditional
319 notion of resource efficiency, as the resource usage has to cater for, e.g. rate
320 maximization for one user and the timeliness requirement for another user.
321 paper treats a model for radio access network (RAN) uplink, where a
322 throughput-demanding broadband user shares wireless resources with an
323 intermittently active user that wants to optimize the timeliness, expressed in
324 terms of latency-reliability or Age of Information (AoI). We evaluate the
325 trade-offs between throughput and timeliness for Orthogonal Multiple Access
326 (OMA) as well as Non-Orthogonal Multiple Access (NOMA) with successive
327 interference cancellation (SIC). We observe that NOMA with SIC, in a
328 conservative scenario with destructive collisions, is just slightly inferior to
329 OMA, which indicates that it may offer significant benefits in
330 practical deployments where the capture effect is frequently encountered. On
331 the other hand, finding the optimal configuration of NOMA with SIC depends on
332 the activity pattern of the intermittent user, to which OMA is insensitive.
333 </p>
334 </description>
335 <guid isPermaLink="false">oai:arXiv.org:2010.15171</guid>
336 </item>
337 <item>
338 <title>Improving Perceptual Quality by Phone-Fortified Perceptual Loss for Speech Enhancement. (arXiv:2010.15174v1 [cs.SD])</title>
339 <link>http://fr.arxiv.org/abs/2010.15174</link>
340 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Hsieh_T/0/1/0/all/0/1">Tsun-An Hsieh</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Yu_C/0/1/0/all/0/1">Cheng Yu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Fu_S/0/1/0/all/0/1">Szu-Wei Fu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lu_X/0/1/0/all/0/1">Xugang Lu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Tsao_Y/0/1/0/all/0/1">Yu Tsao</a></p>
341
342 <p>Speech enhancement (SE) aims to improve speech quality and intelligibility,
343 which are both related to a smooth transition in speech segments that may carry
344 linguistic information, e.g. phones and syllables. In this study, we took
345 phonetic characteristics into account in the SE training process. Hence, we
346 designed a phone-fortified perceptual (PFP) loss, and the training of our SE
347 model was guided by PFP loss. In PFP loss, phonetic characteristics are
348 extracted by wav2vec, an unsupervised learning model based on the contrastive
349 predictive coding (CPC) criterion. Different from previous deep-feature-based
350 approaches, the proposed approach explicitly uses the phonetic information in
351 the deep feature extraction process to guide the SE model training. To test the
352 proposed approach, we first confirmed that the wav2vec representations carried
353 clear phonetic information using a t-distributed stochastic neighbor embedding
354 (t-SNE) analysis. Next, we observed that the proposed PFP loss was more
355 strongly correlated with the perceptual evaluation metrics than point-wise and
356 signal-level losses, thus achieving higher scores for standardized quality and
357 intelligibility evaluation metrics in the Voice Bank--DEMAND dataset.
358 </p>
359 </description>
360 <guid isPermaLink="false">oai:arXiv.org:2010.15174</guid>
361 </item>
362 <item>
363 <title>A Study on Efficiency in Continual Learning Inspired by Human Learning. (arXiv:2010.15187v1 [cs.LG])</title>
364 <link>http://fr.arxiv.org/abs/2010.15187</link>
365 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Ball_P/0/1/0/all/0/1">Philip J. Ball</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Li_Y/0/1/0/all/0/1">Yingzhen Li</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lamb_A/0/1/0/all/0/1">Angus Lamb</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Zhang_C/0/1/0/all/0/1">Cheng Zhang</a></p>
366
367 <p>Humans are efficient continual learning systems; we continually learn new
368 skills from birth with finite cells and resources. Our learning is highly
369 optimized both in terms of capacity and time while not suffering from
370 catastrophic forgetting. In this work we study the efficiency of continual
371 learning systems, taking inspiration from human learning. In particular,
372 inspired by the mechanisms of sleep, we evaluate popular pruning-based
373 continual learning algorithms, using PackNet as a case study. First, we
374 identify that weight freezing, which is used in continual learning without
375 biological justification, can result in over $2\times$ as many weights being
376 used for a given level of performance. Secondly, we note the similarity between
377 human daytime and nighttime behaviors and the training and pruning phases,
378 respectively, of PackNet. We study a setting where the pruning phase is given a
379 time budget, and identify connections between iterative pruning and multiple
380 sleep cycles in humans. We show there exists an optimal choice of iterations
381 vs. epochs given different tasks.
382 </p>
383 </description>
384 <guid isPermaLink="false">oai:arXiv.org:2010.15187</guid>
385 </item>
386 <item>
387 <title>Explicit stabilized multirate method for stiff stochastic differential equations. (arXiv:2010.15193v1 [math.NA])</title>
388 <link>http://fr.arxiv.org/abs/2010.15193</link>
389 <description><p>Authors: <a href="http://fr.arxiv.org/find/math/1/au:+Abdulle_A/0/1/0/all/0/1">Assyr Abdulle</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Souza_G/0/1/0/all/0/1">Giacomo Rosilho de Souza</a></p>
390
391 <p>Stabilized explicit methods are particularly efficient for large systems of
392 stiff stochastic differential equations (SDEs) due to their extended stability
393 domain. However, they lose their efficiency when severe stiffness is induced
394 by very few "fast" degrees of freedom, as the stiff and nonstiff terms are
395 evaluated concurrently. Therefore, inspired by [A. Abdulle, M. J. Grote, and G.
396 Rosilho de Souza, Preprint (2020), <a href="/abs/2006.00744">arXiv:2006.00744</a>] we introduce a stochastic
397 modified equation whose stiffness depends solely on the "slow" terms. By
398 integrating this modified equation with a stabilized explicit scheme we devise
399 a multirate method which overcomes the bottleneck caused by a few severely
400 stiff terms and recovers the efficiency of stabilized schemes for large systems
401 of nonlinear SDEs. The scheme is not based on any scale separation assumption
402 of the SDE and therefore it is employable for problems stemming from the
403 spatial discretization of stochastic parabolic partial differential equations
404 on locally refined grids. The multirate scheme has strong order 1/2, weak order
405 1 and its stability is proved on a model problem. Numerical experiments confirm
406 the efficiency and accuracy of the scheme.
407 </p>
408 </description>
409 <guid isPermaLink="false">oai:arXiv.org:2010.15193</guid>
410 </item>
411 <item>
412 <title>Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in First-person Simulated 3D Environments. (arXiv:2010.15195v1 [cs.LG])</title>
413 <link>http://fr.arxiv.org/abs/2010.15195</link>
414 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Carvalho_W/0/1/0/all/0/1">Wilka Carvalho</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Liang_A/0/1/0/all/0/1">Anthony Liang</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lee_K/0/1/0/all/0/1">Kimin Lee</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Sohn_S/0/1/0/all/0/1">Sungryull Sohn</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lee_H/0/1/0/all/0/1">Honglak Lee</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lewis_R/0/1/0/all/0/1">Richard L. Lewis</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Singh_S/0/1/0/all/0/1">Satinder Singh</a></p>
415
416 <p>First-person object-interaction tasks in high-fidelity, 3D, simulated
417 environments such as the AI2Thor virtual home-environment pose significant
418 sample-efficiency challenges for reinforcement learning (RL) agents learning
419 from sparse task rewards. To alleviate these challenges, prior work has
420 provided extensive supervision via a combination of reward-shaping,
421 ground-truth object-information, and expert demonstrations. In this work, we
422 show that one can learn object-interaction tasks from scratch without
423 supervision by learning an attentive object-model as an auxiliary task during
424 task learning with an object-centric relational RL agent. Our key insight is
425 that learning an object-model that incorporates object-attention into forward
426 prediction provides a dense learning signal for unsupervised representation
427 learning of both objects and their relationships. This, in turn, enables faster
428 policy learning for an object-centric relational RL agent. We demonstrate our
429 agent by introducing a set of challenging object-interaction tasks in the
430 AI2Thor environment where learning with our attentive object-model is key to
431 strong performance. Specifically, we compare our agent and relational RL agents
432 with alternative auxiliary tasks to a relational RL agent equipped with
433 ground-truth object-information, and show that learning with our object-model
434 best closes the performance gap in terms of both learning speed and maximum
435 success rate. Additionally, we find that incorporating object-attention into an
436 object-model's forward predictions is key to learning representations which
437 capture object-category and object-state.
438 </p>
439 </description>
440 <guid isPermaLink="false">oai:arXiv.org:2010.15195</guid>
441 </item>
442 <item>
443 <title>A fast and scalable computational framework for large-scale and high-dimensional Bayesian optimal experimental design. (arXiv:2010.15196v1 [math.NA])</title>
444 <link>http://fr.arxiv.org/abs/2010.15196</link>
445 <description><p>Authors: <a href="http://fr.arxiv.org/find/math/1/au:+Wu_K/0/1/0/all/0/1">Keyi Wu</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Chen_P/0/1/0/all/0/1">Peng Chen</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Ghattas_O/0/1/0/all/0/1">Omar Ghattas</a></p>
446
447 <p>We develop a fast and scalable computational framework to solve large-scale
448 and high-dimensional Bayesian optimal experimental design problems. In
449 particular, we consider the problem of optimal observation sensor placement for
450 Bayesian inference of high-dimensional parameters governed by partial
451 differential equations (PDEs), which is formulated as an optimization problem
452 that seeks to maximize an expected information gain (EIG). Such optimization
453 problems are particularly challenging due to the curse of dimensionality for
454 high-dimensional parameters and the expensive solution of large-scale PDEs. To
455 address these challenges, we exploit two essential properties of such problems:
456 the low-rank structure of the Jacobian of the parameter-to-observable map to
457 extract the intrinsically low-dimensional data-informed subspace, and the high
458 correlation of the approximate EIGs by a series of approximations to reduce the
459 number of PDE solves. We propose an efficient offline-online decomposition for
460 the optimization problem: an offline stage of computing all the quantities that
461 require a limited number of PDE solves independent of parameter and data
462 dimensions, and an online stage of optimizing sensor placement that does not
463 require any PDE solve. For the online optimization, we propose a swapping
464 greedy algorithm that first constructs an initial set of sensors using leverage
465 scores and then swaps the chosen sensors with other candidates until certain
466 convergence criteria are met. We demonstrate the efficiency and scalability of
467 the proposed computational framework by a linear inverse problem of inferring
468 the initial condition for an advection-diffusion equation, and a nonlinear
469 inverse problem of inferring the diffusion coefficient of a log-normal
470 diffusion equation, with both the parameter and data dimensions ranging from a
471 few tens to a few thousands.
472 </p>
473 </description>
474 <guid isPermaLink="false">oai:arXiv.org:2010.15196</guid>
475 </item>
476 <item>
477 <title>Forecasting Hamiltonian dynamics without canonical coordinates. (arXiv:2010.15201v1 [cs.LG])</title>
478 <link>http://fr.arxiv.org/abs/2010.15201</link>
479 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Choudhary_A/0/1/0/all/0/1">Anshul Choudhary</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lindner_J/0/1/0/all/0/1">John F. Lindner</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Holliday_E/0/1/0/all/0/1">Elliott G. Holliday</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Miller_S/0/1/0/all/0/1">Scott T. Miller</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Sinha_S/0/1/0/all/0/1">Sudeshna Sinha</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Ditto_W/0/1/0/all/0/1">William L. Ditto</a></p>
480
481 <p>Conventional neural networks are universal function approximators, but
482 because they are unaware of underlying symmetries or physical laws, they may
483 need impractically many training data to approximate nonlinear dynamics.
484 Recently introduced Hamiltonian neural networks can efficiently learn and
485 forecast dynamical systems that conserve energy, but they require special
486 inputs called canonical coordinates, which may be hard to infer from data. Here
487 we significantly expand the scope of such networks by demonstrating a simple
488 way to train them with any set of generalised coordinates, including easily
489 observable ones.
490 </p>
491 </description>
492 <guid isPermaLink="false">oai:arXiv.org:2010.15201</guid>
493 </item>
494 <item>
495 <title>Micromobility in Smart Cities: A Closer Look at Shared Dockless E-Scooters via Big Social Data. (arXiv:2010.15203v1 [cs.SI])</title>
496 <link>http://fr.arxiv.org/abs/2010.15203</link>
497 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Feng_Y/0/1/0/all/0/1">Yunhe Feng</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Zhong_D/0/1/0/all/0/1">Dong Zhong</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Sun_P/0/1/0/all/0/1">Peng Sun</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Zheng_W/0/1/0/all/0/1">Weijian Zheng</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Cao_Q/0/1/0/all/0/1">Qinglei Cao</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Luo_X/0/1/0/all/0/1">Xi Luo</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Lu_Z/0/1/0/all/0/1">Zheng Lu</a></p>
498
499 <p>Micromobility is shaping first- and last-mile travel in urban areas.
500 Recently, shared dockless electric scooters (e-scooters) have emerged as a
501 daily alternative to driving for short-distance commuters in large cities due
502 to their affordability, easy accessibility via an app, and zero emissions.
503 Meanwhile, e-scooters come with challenges in city management, such as traffic
504 rules, public safety, parking regulations, and liability issues. In this paper,
505 we collected and investigated 5.8 million scooter-tagged tweets and 144,197
506 images, generated by 2.7 million users from October 2018 to March 2020, to take
507 a closer look at shared e-scooters via crowdsourcing data analytics. We
508 profiled e-scooter usages from spatial-temporal perspectives, explored
509 different business roles (i.e., riders, gig workers, and ridesharing
510 companies), examined operation patterns (e.g., injury types, and parking
511 behaviors), and conducted sentiment analysis. To the best of our knowledge, this paper
512 is the first large-scale systematic study on shared e-scooters using big social
513 data.
514 </p>
515 </description>
516 <guid isPermaLink="false">oai:arXiv.org:2010.15203</guid>
517 </item>
518 <item>
519 <title>Rosella: A Self-Driving Distributed Scheduler for Heterogeneous Clusters. (arXiv:2010.15206v1 [cs.DC])</title>
520 <link>http://fr.arxiv.org/abs/2010.15206</link>
521 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Wu_Q/0/1/0/all/0/1">Qiong Wu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Manandhar_S/0/1/0/all/0/1">Sunil Manandhar</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Liu_Z/0/1/0/all/0/1">Zhenming Liu</a></p>
522
523 <p>Large-scale interactive web services and advanced AI applications make
524 sophisticated decisions in real-time, based on executing a massive number of
525 computation tasks on thousands of servers. Task schedulers, which often operate
526 in heterogeneous and volatile environments, require high throughput, i.e.,
527 scheduling millions of tasks per second, and low latency, i.e., incurring
528 minimal scheduling delays for millisecond-level tasks. Scheduling is further
529 complicated by other users' workloads in a shared system, other background
530 activities, and the diverse hardware configurations inside datacenters.
531 </p>
532 <p>We present Rosella, a new self-driving, distributed approach for task
533 scheduling in heterogeneous clusters. Our system automatically learns the
534 compute environment and adjusts its scheduling policy in real-time. The solution
535 provides high throughput and low latency simultaneously, because it runs in
536 parallel on multiple machines with minimum coordination and only performs
537 simple operations for each scheduling decision. Our learning module monitors
538 total system load, and uses the information to dynamically determine the optimal
539 estimation strategy for the backends' compute power. Our scheduling policy
540 generalizes power-of-two-choice algorithms to handle heterogeneous workers,
541 reducing the max queue length of $O(\log n)$ obtained by prior algorithms to
542 $O(\log \log n)$. We implement a Rosella prototype and evaluate it with a
543 variety of workloads. Experimental results show that Rosella significantly
544 reduces task response times, and adapts to environment changes quickly.
545 </p>
546 </description>
547 <guid isPermaLink="false">oai:arXiv.org:2010.15206</guid>
548 </item>
549 <item>
550 <title>Ground Roll Suppression using Convolutional Neural Networks. (arXiv:2010.15209v1 [eess.IV])</title>
551 <link>http://fr.arxiv.org/abs/2010.15209</link>
552 <description><p>Authors: <a href="http://fr.arxiv.org/find/eess/1/au:+Oliveira_D/0/1/0/all/0/1">Dario Augusto Borges Oliveira</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Semin_D/0/1/0/all/0/1">Daniil Semin</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Zaytsev_S/0/1/0/all/0/1">Semen Zaytsev</a></p>
553
554 <p>Seismic data processing plays a major role in seismic exploration as it
555 conditions much of the seismic interpretation performance. In this context,
556 generating reliable post-stack seismic data also depends on having an
557 efficient pre-stack noise attenuation tool. Here we tackle ground roll noise,
558 one of the most challenging and common noises observed in pre-stack seismic
559 data. Since ground roll is characterized by relatively low frequencies and high
560 amplitudes, most commonly used approaches for its suppression are based on
561 frequency-amplitude filters for ground roll characteristic bands. However, when
562 signal and noise share the same frequency ranges, these methods usually also
563 deliver signal suppression or residual noise. In this paper we take advantage of
564 the highly non-linear features of convolutional neural networks, and propose to
565 use different architectures to detect ground roll in shot gathers and
566 ultimately to suppress them using conditional generative adversarial networks.
567 Additionally, we propose metrics to evaluate ground roll suppression, and
568 report strong results compared to expert filtering. Finally, we discuss
569 generalization of trained models for similar and different geologies to better
570 understand the feasibility of our proposal in real applications.
571 </p>
572 </description>
573 <guid isPermaLink="false">oai:arXiv.org:2010.15209</guid>
574 </item>
575 <item>
576 <title>On Linearizability and the Termination of Randomized Algorithms. (arXiv:2010.15210v1 [cs.DC])</title>
577 <link>http://fr.arxiv.org/abs/2010.15210</link>
578 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Hadzilacos_V/0/1/0/all/0/1">Vassos Hadzilacos</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Hu_X/0/1/0/all/0/1">Xing Hu</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Toueg_S/0/1/0/all/0/1">Sam Toueg</a></p>
579
580 <p>We study the question of whether the "termination with probability 1"
581 property of a randomized algorithm is preserved when one replaces the atomic
582 registers that the algorithm uses with linearizable (implementations of)
583 registers. We show that in general this is not so: roughly speaking, every
584 randomized algorithm A has a corresponding algorithm A' that solves the same
585 problem if the registers that it uses are atomic or strongly-linearizable, but
586 does not terminate if these registers are replaced with "merely" linearizable
587 ones. Together with a previous result shown in [15], this implies that one
588 cannot use the well-known ABD implementation of registers in message-passing
589 systems to automatically transform any randomized algorithm that works in
590 shared-memory systems into a randomized algorithm that works in message-passing
591 systems: with a strong adversary the resulting algorithm may not terminate.
592 </p>
593 </description>
594 <guid isPermaLink="false">oai:arXiv.org:2010.15210</guid>
595 </item>
596 <item>
597 <title>Safety-Aware Cascade Controller Tuning Using Constrained Bayesian Optimization. (arXiv:2010.15211v1 [eess.SY])</title>
598 <link>http://fr.arxiv.org/abs/2010.15211</link>
599 <description><p>Authors: <a href="http://fr.arxiv.org/find/eess/1/au:+Konig_C/0/1/0/all/0/1">Christopher K&#xf6;nig</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Khosravi_M/0/1/0/all/0/1">Mohammad Khosravi</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Maier_M/0/1/0/all/0/1">Markus Maier</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Smith_R/0/1/0/all/0/1">Roy S. Smith</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Rupenyan_A/0/1/0/all/0/1">Alisa Rupenyan</a>, <a href="http://fr.arxiv.org/find/eess/1/au:+Lygeros_J/0/1/0/all/0/1">John Lygeros</a></p>
600
601 <p>This paper presents an automated, model-free, data-driven method for the safe
602 tuning of PID cascade controller gains based on Bayesian optimization. The
603 optimization objective is composed of data-driven performance metrics and
604 modeled using Gaussian processes. We further introduce a data-driven constraint
605 that captures the stability requirements from system data. Numerical evaluation
606 shows that the proposed approach outperforms relay feedback autotuning and
607 quickly converges to the global optimum, thanks to a tailored stopping
608 criterion. We demonstrate the performance of the method in simulations and
609 experiments on a linear axis drive of a grinding machine. For experimental
610 implementation, in addition to the introduced safety constraint, we integrate a
611 method for automatic detection of the critical gains and extend the
612 optimization objective with a penalty depending on the proximity of the current
613 candidate points to the critical gains. The resulting automated tuning method
614 optimizes system performance while ensuring stability and standardization.
615 </p>
616 </description>
617 <guid isPermaLink="false">oai:arXiv.org:2010.15211</guid>
618 </item>
619 <item>
620 <title>Away from Trolley Problems and Toward Risk Management. (arXiv:2010.15217v1 [cs.CY])</title>
621 <link>http://fr.arxiv.org/abs/2010.15217</link>
622 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Goodall_N/0/1/0/all/0/1">Noah J. Goodall</a></p>
623
624 <p>As automated vehicles receive more attention from the media, there has been
625 an equivalent increase in the coverage of the ethical choices a vehicle may be
626 forced to make in certain crash situations with no clear safe outcome. Much of
627 this coverage has focused on a philosophical thought experiment known as the
628 "trolley problem," and substituting an automated vehicle for the trolley and
629 the car's software for the bystander. While this is a stark and straightforward
630 example of ethical decision making for an automated vehicle, it risks
631 marginalizing the entire field if it is to become the only ethical problem in
632 the public's mind. In this chapter, I discuss the shortcomings of the trolley
633 problem, and introduce more nuanced examples that involve crash risk and
634 uncertainty. Risk management is introduced as an alternative approach, and its
635 ethical dimensions are discussed.
636 </p>
637 </description>
638 <guid isPermaLink="false">oai:arXiv.org:2010.15217</guid>
639 </item>
640 <item>
641 <title>StencilFlow: Mapping Large Stencil Programs to Distributed Spatial Computing Systems. (arXiv:2010.15218v1 [cs.DC])</title>
642 <link>http://fr.arxiv.org/abs/2010.15218</link>
643 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Licht_J/0/1/0/all/0/1">Johannes de Fine Licht</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kuster_A/0/1/0/all/0/1">Andreas Kuster</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Matteis_T/0/1/0/all/0/1">Tiziano De Matteis</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Ben_Nun_T/0/1/0/all/0/1">Tal Ben-Nun</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Hofer_D/0/1/0/all/0/1">Dominic Hofer</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Hoefler_T/0/1/0/all/0/1">Torsten Hoefler</a></p>
644
645 <p>Spatial computing devices have been shown to significantly accelerate stencil
646 computations, but have so far relied on unrolling the iterative dimension of a
647 single stencil operation to increase temporal locality. This work considers the
648 general case of mapping directed acyclic graphs of heterogeneous stencil
649 computations to spatial computing systems, assuming large input programs
650 without an iterative component. StencilFlow maximizes temporal locality and
651 ensures deadlock freedom in this setting, providing end-to-end analysis and
652 mapping from a high-level program description to distributed hardware. We
653 evaluate the generated architectures on an FPGA testbed, demonstrating the
654 highest single-device and multi-device performance recorded for stencil
655 programs on FPGAs to date, then leverage the framework to study a complex
656 stencil program from a production weather simulation application. Our work
657 enables productively targeting distributed spatial computing systems with large
658 stencil programs, and offers insight into architecture characteristics required
659 for their efficient execution in practice.
660 </p>
661 </description>
662 <guid isPermaLink="false">oai:arXiv.org:2010.15218</guid>
663 </item>
664 <item>
665 <title>Geometric Sampling of Networks. (arXiv:2010.15221v1 [math.DG])</title>
666 <link>http://fr.arxiv.org/abs/2010.15221</link>
667 <description><p>Authors: <a href="http://fr.arxiv.org/find/math/1/au:+Barkanass_V/0/1/0/all/0/1">Vladislav Barkanass</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Jost_J/0/1/0/all/0/1">J&#xfc;rgen Jost</a>, <a href="http://fr.arxiv.org/find/math/1/au:+Saucan_E/0/1/0/all/0/1">Emil Saucan</a></p>
668
669 <p>Motivated by the methods and results of manifold sampling based on Ricci
670 curvature, we propose a similar approach for networks. To this end we appeal
671 to three types of discrete curvature, namely the graph Forman-, full
672 Forman- and Haantjes-Ricci curvatures for edge-based and node-based sampling.
673 We present the results of experiments on real life networks, as well as for
674 square grids arising in Image Processing. Moreover, we consider fitting Ricci
675 flows and employ them for the detection of network backbones. We also
676 develop embedding kernels related to the Forman-Ricci curvatures and employ
677 them for the detection of the coarse structure of networks, as well as for
678 network visualization with applications to SVM. The relation between the Ricci
679 curvature of the original manifold and that of a Ricci curvature driven
680 discretization is also studied.
681 </p>
682 </description>
683 <guid isPermaLink="false">oai:arXiv.org:2010.15221</guid>
684 </item>
685 <item>
686 <title>Exploring complex networks with the ICON R package. (arXiv:2010.15222v1 [cs.SI])</title>
687 <link>http://fr.arxiv.org/abs/2010.15222</link>
688 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Wadhwa_R/0/1/0/all/0/1">Raoul R. Wadhwa</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Scott_J/0/1/0/all/0/1">Jacob G. Scott</a></p>
689
690 <p>We introduce ICON, an R package that contains 1075 complex network datasets
691 in a standard edgelist format. All provided datasets have associated citations
692 and have been indexed by the Colorado Index of Complex Networks - also referred
693 to as ICON. In addition to supplying a large and diverse corpus of useful
694 real-world networks, ICON also implements an S3 generic to work with the
695 network and ggnetwork R packages for network analysis and visualization,
696 respectively. Sample code in this report also demonstrates how ICON can be used
697 in conjunction with the igraph package. Currently, the Comprehensive R Archive
698 Network hosts ICON v0.4.0. We hope that ICON will serve as a standard corpus
699 for complex network research and prevent redundant work that would otherwise be
700 necessary by individual research groups. The open source code for ICON and for
701 this reproducible report can be found at https://github.com/rrrlw/ICON.
702 </p>
703 </description>
704 <guid isPermaLink="false">oai:arXiv.org:2010.15222</guid>
705 </item>
706 <item>
707 <title>A Visuospatial Dataset for Naturalistic Verb Learning. (arXiv:2010.15225v1 [cs.CL])</title>
708 <link>http://fr.arxiv.org/abs/2010.15225</link>
709 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Ebert_D/0/1/0/all/0/1">Dylan Ebert</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Pavlick_E/0/1/0/all/0/1">Ellie Pavlick</a></p>
710
711 <p>We introduce a new dataset for training and evaluating grounded language
712 models. Our data is collected within a virtual reality environment and is
713 designed to emulate the quality of language data to which a pre-verbal child is
714 likely to have access: That is, naturalistic, spontaneous speech paired with
715 richly grounded visuospatial context. We use the collected data to compare
716 several distributional semantics models for verb learning. We evaluate neural
717 models based on 2D (pixel) features as well as feature-engineered models based
718 on 3D (symbolic, spatial) features, and show that neither modeling approach
719 achieves satisfactory performance. Our results are consistent with evidence
720 from child language acquisition that emphasizes the difficulty of learning
721 verbs from naive distributional data. We discuss avenues for future work on
722 cognitively-inspired grounded language learning, and release our corpus with
723 the intent of facilitating research on the topic.
724 </p>
725 </description>
726 <guid isPermaLink="false">oai:arXiv.org:2010.15225</guid>
727 </item>
728 <item>
729 <title>Speech-Based Emotion Recognition using Neural Networks and Information Visualization. (arXiv:2010.15229v1 [cs.HC])</title>
730 <link>http://fr.arxiv.org/abs/2010.15229</link>
731 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Almahmoud_J/0/1/0/all/0/1">Jumana Almahmoud</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Kikkeri_K/0/1/0/all/0/1">Kruthika Kikkeri</a></p>
732
733 <p>Emotion recognition is commonly employed for health assessment. However, the
734 typical metric for evaluation in therapy is based on patient-doctor appraisal.
735 This process can fall into the issue of subjectivity, while also requiring
736 healthcare professionals to deal with copious amounts of information. Thus,
737 machine learning algorithms can be a useful tool for the classification of
738 emotions. While several models have been developed in this domain, there is a
739 lack of user-friendly representations of the emotion classification systems for
740 therapy. We propose a tool which enables users to take speech samples and
741 identify a range of emotions (happy, sad, angry, surprised, neutral, calm,
742 disgust, and fear) from audio elements through a machine learning model. The
743 dashboard is designed based on local therapists' needs for intuitive
744 representations of speech data in order to gain insights and informative
745 analyses of their sessions with their patients.
746 </p>
747 </description>
748 <guid isPermaLink="false">oai:arXiv.org:2010.15229</guid>
749 </item>
750 <item>
751 <title>Construction Payment Automation Using Blockchain-Enabled Smart Contracts and Reality Capture Technologies. (arXiv:2010.15232v1 [cs.CR])</title>
752 <link>http://fr.arxiv.org/abs/2010.15232</link>
753 <description><p>Authors: <a href="http://fr.arxiv.org/find/cs/1/au:+Hamledari_H/0/1/0/all/0/1">Hesam Hamledari</a>, <a href="http://fr.arxiv.org/find/cs/1/au:+Fischer_M/0/1/0/all/0/1">Martin Fischer</a></p>
754
755 <p>This paper presents a smart contract-based solution for autonomous
756 administration of construction progress payments. It bridges the gap between
757 payments (cash flow) and the progress assessments at job sites (product flow)
758 enabled by reality capture technologies and building information modeling
759 (BIM). The approach eliminates the reliance on the centralized and heavily
760 intermediated mechanisms of existing payment applications. The construction
761 progress is stored in a distributed manner using content addressable file
762 sharing; it is broadcasted to a smart contract which automates the on-chain
763 payment settlements and the transfer of lien rights. The method was
764 successfully used for processing payments to 7 subcontractors in two commercial
765 construction projects where progress monitoring was performed using a
766 camera-equipped unmanned aerial vehicle (UAV) and an unmanned ground vehicle
767 (UGV) equipped with a laser scanner. The results demonstrate the method's
768 potential for increasing the frequency, granularity, and transparency of
769 payments. The paper is concluded with a discussion of implications for project
770 management, introducing a new model of project as a singleton state machine.
771 </p>
772 </description>
773 <guid isPermaLink="false">oai:arXiv.org:2010.15232</guid>
774 </item>
775 <item>
776 <title>Accurate Prostate Cancer Detection and Segmentation on Biparametric MRI using Non-local Mask R-CNN with Histopathological Ground Truth. (arXiv:2010.15233v1 [eess.IV])</title>
777 <link>http://fr.arxiv.org/abs/2010.15233</link>