
Commit 9542d6f
committed: vacancies
1 parent 51aa05b

2 files changed: 367 additions & 16 deletions

File tree

projects.html (52 additions & 16 deletions)
@@ -9,7 +9,7 @@
 <head>
 <meta charset="utf-8">
 
-<!-- begin _includes/seo.html --><title>InDeep Vacancies</title>
+<!-- begin _includes/seo.html --><title>The InterpretingDL projects - Interpreting Deep Learning</title>
 <meta name="description" content="Website for 2019 NWA-ORC proposal BD.1910: ‘Interpreting Deep Learning Models for Text and Sound: Methods &amp; Applications’.">
 
 
@@ -217,10 +217,7 @@ <h1 id="page-title" class="page__title" itemprop="headline">
 <li>
 <a href="/papers"><span class="nav__sub-title">Key papers</span></a>
 </li>
-
-<li>
-<a href="/vacancies"><span class="nav__sub-title">Vacancies</span></a>
-</li>
+
 
 </ul>
 </nav>
@@ -240,17 +237,56 @@ <h1 id="page-title" class="page__title" itemprop="headline">
 
 <section class="page__content" itemprop="text">
 
-<h1 id="interpreting-deep-learning-models-for-text-and-sound-methods--applications"<i>InDeep</i>
-Vacancies</h1>
-
-<ul>
-<li><a href="https://www.rug.nl/about-ug/work-with-us/job-opportunities/?details=00347-02S00085IP">One PhD position at the University of
-Groningen, starting 1 September 2021.</a>
-</li>
-<li><a href="https://www.academictransfer.com/en/298252/two-phd-positions-in-nlp-and-speech-1-fte/">Two PhD position at Tilburg University, starting 1
-September 2021.</a>
-</li>
-</ul>
+<h1 id="interpreting-deep-learning-models-for-text-and-sound-methods--applications"><i>InDeep</i>: Interpreting Deep Learning Models for Text and Sound</h1>
+
+<h2 id="consortium">Consortium:</h2>
+<h3>Principal investigators</h3>
+<ul>
+<li>Willem Zuidema (ILLC, University of Amsterdam)</li>
+<li>Afra Alishahi (Tilburg University)</li>
+<li>Grzegorz Chrupała (Tilburg University)</li>
+<li>Arianna Bisazza (University of Groningen)</li>
+<li>Tom Lentz (ILLC, University of Amsterdam)</li>
+<li>Louis ten Bosch (Radboud University, Nijmegen)</li>
+<li>Iris Hendrickx (Radboud University, Nijmegen)</li>
+<li>Antske Fokkens (Free University, Amsterdam)</li>
+<li>Ashley Burgoyne (ILLC, University of Amsterdam)</li>
+</ul>
+
+<h3>Cofunding and cooperation partners</h3>
+<ul>
+<li>KPN</li>
+<li>Textkernel</li>
+<li>Deloitte</li>
+<li>AIgent</li>
+<li>Chordify</li>
+<li>Global textware</li>
+<li>TNO</li>
+<li>Floodtags</li>
+<li>Waag</li>
+<li>muZIEum</li>
+</ul>
+
+<!-- ## Partners:
+partners -->
+
+<h2 id="funding">Funding:</h2>
+2 million euro from the National Research Agenda programme (NWA-ORC 2019) of the Netherlands Organization for Scientific Research (NWO), plus in-kind contributions from the cofunding partners. The project will run from mid-2021 until mid-2026.
+
+<h2 id="description">Description:</h2>
+<p>In this project, the InterpretingDL network brings together pioneering researchers on the interpretability of deep learning models of text, language, speech and music. They collaborate with companies and not-for-profit institutions working with language, speech and music technology to develop applications that help assess the usefulness of alternative interpretability techniques on a range of tasks.
+In “justification” tasks, we look at how interpretability techniques help give users meaningful feedback. Examples include legal and medical document text mining and audio search. In “augmentation” tasks, we look at how these techniques facilitate the use of domain knowledge and models from outside deep learning to make the models perform even better. Examples include machine translation, music recommendation and writing feedback. In “interaction” tasks, we let users influence the functioning of their automated systems, both by providing interpretable information on how the system operates and by letting human-produced output find its way into the internal states of the learning algorithm. Examples include adapting speech recognition to non-standard accents and dialects, interactive music generation, and machine-assisted translation.</p>
+
+<h2 id="activities">Activities:</h2>
+<ul>
+<li>Fundamental research on interpretability methods in NLP, speech and music processing</li>
+<li>Applied research on interpretability, in tight collaboration with the partners</li>
+<li>A public outreach program, involving citizen science projects, lectures, concerts, debates, demos and nights in the museum</li>
+<li>An industrial outreach program, involving master classes on deep learning and interpretability in NLP, speech and music processing</li>
+<li>Software packages and online demos</li>
+</ul>
+
+
 </section>
 
 <footer class="page__meta">
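The "justification" tasks in the page's description center on interpretability techniques that surface per-feature feedback to users. As a minimal, hypothetical sketch of one such technique (input-gradient saliency on a plain logistic-regression scorer; the model, weights, and feature names are invented for illustration and are not the project's actual models or code):

```python
import math

# Illustrative only: input-gradient saliency for a toy document classifier.
# All weights and features below are hypothetical.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights: list, features: list) -> float:
    """Logistic-regression score for a document given its feature values."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)))

def saliency(weights: list, features: list) -> list:
    """Input-gradient attribution: d(prediction)/d(feature_i).

    For logistic regression this gradient is p * (1 - p) * w_i, a signed
    per-feature relevance score -- the kind of feedback a "justification"
    interface could show a user alongside the model's decision.
    """
    p = predict(weights, features)
    return [p * (1.0 - p) * w for w in weights]

# Hypothetical bag-of-words features for a legal-document classifier.
weights = [1.2, -0.7, 0.3]
features = [1.0, 1.0, 0.0]

scores = saliency(weights, features)
# Rank features by absolute influence on the prediction.
ranked = sorted(range(len(scores)), key=lambda i: abs(scores[i]), reverse=True)
print(scores)
print(ranked)
```

Deep models would need automatic differentiation instead of this closed-form gradient, but the interface idea is the same: attach a signed relevance score to each input unit.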
