
Commit 81b2ed5

update experience
1 parent 8ffa0dd commit 81b2ed5

4 files changed, 12 additions & 4 deletions


content/home/experience_2.md

Lines changed: 9 additions & 1 deletion
@@ -32,13 +32,21 @@ experience:
     date_start: '2022-05-01'
     date_end: ''
     description: Pre Doc research associate at the Institute for Machine Learning.
+  - title: Applied Scientist Intern
+    company: Amazon Web Services (AWS)
+    company_url: ''
+    company_logo: aws
+    location: Berlin, Germany
+    date_start: '2024-09-01'
+    date_end: '2024-12-30'
+    description: Research on Foundational Time Series Models.
   - title: PhD Researcher
     company: NXAI GmbH
     company_url: ''
     company_logo: nxai
     location: Linz, Austria
     date_start: '2025-04-01'
-    date_end: '2025-09-30'
+    date_end: '2025-08-30'
     description: Research on Foundational Time Series Models.
   - title: Applied Scientist Intern
     company: Amazon Web Services (AWS)
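The corrected date_end is the substantive fix in this hunk. Since this class of mistake (an end date outside the intended range) is easy to miss in YAML front matter, a small sanity check can catch it before committing. A minimal sketch, assuming PyYAML is installed and the file uses Wowchemy's standard '---' front-matter delimiters; the file path and field names come from the diff, everything else is illustrative:

# Hedged sketch: sanity-check the experience entries touched by this commit.
# Assumes PyYAML ("pip install pyyaml") and '---'-delimited front matter.
from datetime import date
from pathlib import Path

import yaml

text = Path("content/home/experience_2.md").read_text(encoding="utf-8")
# The front matter sits between the first two '---' delimiters.
meta = yaml.safe_load(text.split("---")[1])

for entry in meta.get("experience", []):
    start = date.fromisoformat(str(entry["date_start"]))
    end_raw = str(entry.get("date_end") or "")  # '' marks an ongoing role
    if end_raw:
        end = date.fromisoformat(end_raw)
        assert start <= end, f"{entry['title']}: date_end precedes date_start"
    print(entry["title"], start, end_raw or "present")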

content/publication/2025-tirex-workshop/index.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ authors: ["**Andreas Auer**, Patrick Podest, Daniel Klotz, Sebastian Böck, Gün
 publication_types: ["2"]
 abstract: "In-context learning, the ability of large language models to perform tasks using only examples provided in the prompt, has recently been adapted for time series forecasting. This paradigm enables zero-shot prediction, where past values serve as context for forecasting future values, making powerful forecasting tools accessible to non-experts and increasing the performance when training data are scarce. Most existing zero-shot forecasting approaches rely on transformer architectures, which, despite their success in language, often fall short of expectations in time series forecasting, where recurrent models like LSTMs frequently have the edge. Conversely, while LSTMs are well-suited for time series modeling due to their state-tracking capabilities, they lack strong in-context learning abilities. We introduce TiRex that closes this gap by leveraging xLSTM, an enhanced LSTM with competitive in-context learning skills. Unlike transformers, state-space models, or parallelizable RNNs such as RWKV, TiRex retains state-tracking, a critical property for long-horizon forecasting. To further facilitate its state-tracking ability, we propose a training-time masking strategy called CPM. TiRex sets a new state of the art in zero-shot time series forecasting on the HuggingFace benchmarks GiftEval and Chronos-ZS, outperforming significantly larger models including TabPFN-TS (Prior Labs), Chronos Bolt (Amazon), TimesFM (Google), and Moirai (Salesforce) across both short- and long-term forecasts."
 featured: true
-publication: "[Spotlight] Workshop on Foundation Models for Structured Data @ ICML 2025."
+publication: "[Spotlight] Workshop on Foundation Models for Structured Data @ ICML 2025"
 links:
 - icon_pack: ai
   icon: arxiv

content/publication/2025-zero-cov/index.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 ---
 title: "Zero-Shot Time Series Forecasting with Covariates via In-Context Learning"
-date: 2024-12-31
+date: 2024-06-04
 publishDate: 2025-06-04
 authors: ["**Andreas Auer**, Raghul Parthipan, Pedro Mercado, Abdul Fatir Ansari, Lorenzo Stella, Bernie Wang, Michael Bohlke-Schneider, Syama Sundar Rangapuram"]
 publication_types: ["2"]

content/publication/2025-zs-classification/index.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ authors: ["**Andreas Auer**, Daniel Klotz, Sebastian Böck, Sepp Hochreiter"]
 publication_types: ["2"]
 abstract: "Recent research on time series foundation models has primarily focused on forecasting, leaving it unclear how generalizable their learned representations are. In this study, we examine whether frozen pre-trained forecasting models can provide effective representations for classification. To this end, we compare different representation extraction strategies and introduce two model-agnostic embedding augmentations. Our experiments show that the best forecasting models achieve classification accuracy that matches or even surpasses that of state-of-the-art models pre-trained specifically for classification. Moreover, we observe a positive correlation between forecasting and classification performance. These findings challenge the assumption that task-specific pre-training is necessary, and suggest that learning to forecast may provide a powerful route toward constructing general-purpose time series foundation models."
 featured: true
-publication: "Recent Advances in Time Series Foundation Models (BERT2S) @ NeurIPS 2025."
+publication: "Recent Advances in Time Series Foundation Models (BERT2S) @ NeurIPS 2025"
 links:
 - icon_pack: ai
   icon: arxiv
