TiRex: a foundation model for time series
Show notes
Huggingface: https://huggingface.co/NX-AI/TiRex
Leaderboard: https://huggingface.co/spaces/Salesforce/GIFT-Eval
Do you already know the Rexroth blog?
If you have any questions, please contact us: vertrieb@boschrexroth.de
Produced by Bosch Rexroth AG, Sales Europe Centre Susanne Noll
Transcript
00:00:03: Hello everybody and welcome to a new episode of our tech podcast.
00:00:07: My name is Robert Weber and today we are talking about Tirex, the first XLSTM-based time series foundation model, and we were allowed to borrow an episode from the Industrial AI Podcast.
00:00:21: Thanks a lot guys.
00:00:22: That's why Peter Seeberg is hosting this episode.
00:00:24: Enjoy listening.
00:00:30: Hi there.
00:00:31: Welcome to a new episode of the Industrial AI Podcast.
00:00:35: My name is Peter Seeberg and I'm your host.
00:00:38: And today I'm going to be talking to the one and only Sepp Hochreiter.
00:00:42: And Sepp and I are going to be talking about Tirex.
00:00:46: And Tirex is the first XLSTM based time series foundation model.
00:00:52: Hi Sepp.
00:00:53: Hi.
00:00:54: How are you doing?
00:00:56: I'm fine.
00:00:56: I'm very excited because we are launching Tirex.
00:01:00: I'm very excited.
00:01:02: Oh, that's great.
00:01:03: We're going to be talking about Tirex in just a minute.
00:01:06: We have had you on our show actually two, three times.
00:01:11: So, for that and other reasons, I believe that at least ninety-five percent of our listeners will have heard about you,
00:01:18: but still, maybe very quickly introduce yourself to our listeners.
00:01:23: Yes.
00:01:24: My name is Sepp Hochreiter.
00:01:27: I'm heading the Institute for Machine Learning here in Linz.
00:01:30: It's at JKU, the Johannes Kepler University.
00:01:34: I'm also chief scientist of a newly founded AI company called NXAI.
00:01:43: And this company is dedicated to bringing AI to industrial applications, to bringing AI to the machinery,
00:01:52: and the focus at the moment is on XLSTM, the new technique.
00:01:58: And I'm known for inventing LSTM.
00:02:01: LSTM stands for Long Short-Term Memory, and LSTM started all this chatbot, ChatGPT stuff, because the first large language model was an LSTM model.
00:02:12: And I'm known for LSTM.
00:02:14: Great.
00:02:15: Thank you very much.
00:02:15: Last time that we met was actually in Linz.
00:02:18: You referred to both your new company, NXAI, which you are a co-founder of, as well as to the Johannes Kepler University, which is also there.
00:02:29: Yeah, you already referred to LSTM.
00:02:32: I dare to use the quote that great thinkers stand on the shoulders of giants, even if it was themselves.
00:02:42: So why don't you quickly take us by the hand and look back at your, I don't know, maybe thirty, thirty-five years of AI research?
00:02:54: And maybe you want to tell us what the main stations were that then brought you to XLSTM?
00:03:01: Yes, I invented LSTM in 1991, in my diploma thesis,
00:03:06: where I first analyzed the vanishing gradient, which is a common problem in deep learning that you have to overcome to build large models.
00:03:16: And I proposed the LSTM architecture for recurrent neural networks, which can process time series, which can process text.
00:03:26: But then neural networks were not popular anymore in the community.
00:03:31: Support vector machines came; we even had problems publishing LSTM.
00:03:37: And then deep learning came, starting in two thousand six; starting in two thousand ten,
00:03:44: LSTM became very popular.
00:03:47: All text and speech programs on cell phones were LSTM-based.
00:03:55: There were many, many LSTM applications, the same with Amazon and, you name it, Microsoft and so on.
00:04:01: But then, in 2017, it turned out there is another technique called the transformer, where the attention mechanism is built in, and these architectures are better at parallelizing: you can push more data through these models in training than you could with LSTM.
00:04:27: Even though the first large language models were based on LSTM, this parallelization, getting more training data through in the same time, pushed LSTM from the market, and the transformer was used.
00:04:41: And I always thought, hmm, can we not scale up LSTM
00:04:47: like transformers.
00:04:49: Can we not do the same?
00:04:50: Can we not build large models?
00:04:52: Can we not make it faster?
00:04:55: And with XLSTM we achieved this.
00:04:57: We looked into it, we copied some of the tricks of the transformer technology, added some of our own tricks from the LSTM technique, and then published this XLSTM technology, which is a model based on the original LSTM but can be parallelized and has some other tweaks which make it really, really powerful, and we showed it can achieve the performance of transformers in large language modeling.
00:05:33: We will show soon that we are on the same level as transformers.
00:05:38: Right.
00:05:38: We may be going into a little bit more detail later on in this comparison of the transformer and XLSTM technology.
00:05:47: But our topic today is time series.
00:05:49: Now, am I correct in assuming that until recently, with regard to XLSTM as you just introduced it, you have been concentrating on language?
00:06:00: So how is time series data different from non-time series data,
00:06:07: that is, data that does not have any time stamp, from the perspective of the researcher Sepp Hochreiter?
00:06:15: Yes, first of all for me, there's not a big difference.
00:06:18: If you give me a sequence, I can use every time series method, because I can assign time points to the sequence elements, and I can analyze
00:06:30: sequences like DNA, and even text might also be a sequence.
00:06:34: From this point of view there's not a big difference,
00:06:38: but the data is different: if you look at text, there is coordination between words which are far away, and these are more abstract symbols you process.
00:06:52: And in time series, in most cases, you have numerical values, you have numbers or vectors, and you process these numerical values, and often in time series the data comes out of a complex system.
00:07:07: The system has something like a hidden state.
00:07:09: It's about in what state is the system and then you want to predict the future or you want to classify what's happening right now.
00:07:20: This is a difference between abstract symbols which have some meaning and numerical values which came out of a complex system with hidden states.
00:07:31: Right, so referring to the systems, maybe you can give us a couple of examples.
00:07:36: Time series are being used in a variety of very different markets.
00:07:42: Maybe you can give us a couple of examples of use cases and markets where the typical time series data comes from.
00:07:50: Time series are pervasive, they are
00:07:52: everywhere.
00:07:52: You find them everywhere and you encounter them everywhere.
00:07:58: If you think about weather forecasting, or if you drive or use your car and the navigator tells you an estimated time of arrival, it's a time series,
00:08:08: it's forecasting.
00:08:09: If your system tells you when the battery, if you have an e-car, is empty, it's a time series problem.
00:08:17: But it's also in stock market prediction, in predictive maintenance, in logistics; you have to predict
00:08:25: when you have to order new parts so that your production does not stand still, or when your machinery needs new oil, and you have to predict
00:08:36: the market.
00:08:38: For example, if you produce something for the car industry, you have to predict how many cars will be sold to adjust your production.
00:08:48: Very prominent was Amazon.
00:08:51: They have time series prediction all across the company, because they have to predict two things: first, how much of a product is bought, and also, how long does it take to deliver it?
00:09:07: Because they have their own delivery operations. At the same time, they are better at predicting how well products are sold than the producers themselves.
00:09:16: Amazon is one big prediction company, and the whole business model is built on prediction.
00:09:21: But you need it for climate.
00:09:24: You need it for medicine.
00:09:26: There's EEG and EKG.
00:09:30: There are so many predictions.
00:09:32: You want to know how the body is responding to treatments or during a surgery.
00:09:39: There are applications in agriculture.
00:09:43: If you grow corn or apples or whatever, you have to predict the weather.
00:09:49: You have to predict the soil condition.
00:09:53: A very famous application where we were very good is hydrology, to predict floods.
00:10:01: Because here, if it's raining, we have these hidden states.
00:10:05: The rain goes into the soil, goes into underground basins.
00:10:13: And you have to memorize how full these basins are.
00:10:17: Because if they're full, the rain will go directly into the river.
00:10:20: Otherwise, the underground basins will be filled up before the water goes into the rivers.
00:10:25: And this is a very, very prominent example
00:10:29: of what we do in earth science, in climate change, where you need this forecasting all the time.
00:10:35: You need forecasting in energy, smart grids.
00:10:39: You have to predict the weather for solar energy, for wind energy.
00:10:43: And you also have to predict
00:10:44: the customer
00:10:45: behavior.
00:10:46: If there's something like a football game, like Germany is in the final, everybody turns on the TV and puts a beer into the fridge or whatever.
00:10:57: These were a couple of examples, but there are many, many, many more.
00:11:00: It's everywhere.
00:11:01: It's really everywhere.
00:11:02: Yeah, really, we hear you.
00:11:04: And I'm sure you could go on for a couple of minutes.
00:11:07: Yeah, so very good.
00:11:08: And you gave a couple of examples from the specific area we have a main interest in here in our podcast, the industrial environment.
00:11:17: So since when then, you know, looking back over these, whatever, thirty-five years, since when have you been looking specifically at time series?
00:11:27: From the moment that you came up with LSTM?
00:11:30: And if that was the case, what were until then the main algorithmic capabilities? We'll come to the new ones later on.
00:11:38: But what were the standards in the past that were capable of looking into the future of time series?
00:11:45: Yes, I started in kindergarten.
00:11:47: I was always interested in predicting the future.
00:11:50: But now, kidding aside: with LSTM, the first LSTM applications were time series, because text was not available.
00:11:58: We never thought about doing text with
00:12:01: LSTM, and where I come from, only time series were in our minds, and LSTM was designed for time series, the old original LSTM, and it performed very well.
00:12:15: LSTM is used everywhere.
00:12:18: Even one guy from Google told me LSTM is still used in Google Translate because it's faster
00:12:25: than this transformer architecture in inference, in applying it.
00:12:29: But LSTM was used in many, many industries, in many, many broad domains, for prediction.
00:12:38: I gave a couple of applications but there are many more and LSTM was good there.
00:12:44: Alternatively, there were models like ARIMA, statistical models.
00:12:50: They only do this local averaging, meaning you make an average over the last values, or you calculate a trend or something like this.
00:13:01: This was typical for stock market predictions with traditional statistical methods, and LSTM was better, because LSTM could memorize stuff, and it could memorize in what state
00:13:14: some system is.
00:13:16: Take the hydrology example again.
00:13:19: If it's snowing, the snow does not turn into water.
00:13:22: The snow is stored.
00:13:23: The snow is lying on the soil.
00:13:25: And if the sun shines, the snow turns into water.
00:13:29: And this is something like storing water, also in the glacier or the underground basin.
00:13:34: Some systems, also the sea: if there was a storm at sea, you don't see it.
00:13:39: But there's a hidden state, because in the sea, under the water, a lot of food is still in the water because of the storm before.
00:13:48: And fish are eating this.
00:13:50: There are these hidden states everywhere.
00:13:53: And these statistical methods were not good
00:13:56: to
00:13:56: capture the hidden states, because they only do averaging.
00:13:59: LSTM was very good to capture the hidden states of some systems.
00:14:04: Think about a pipe.
00:14:05: You have a water pipe.
00:14:06: You open it,
00:14:08: and water is flowing, but at the other end it takes time until the water arrives,
00:14:14: but you have to memorize:
00:14:16: yes, I opened the water pipe,
00:14:18: and the water is flowing.
00:14:20: This is a hidden state.
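To make the hidden-state idea concrete, here is a toy Python sketch (illustrative only, not TiRex): a pipe with a transport delay, where the observable outflow depends on when the valve was opened. That information lives only in a memorized state, not in the recent output values; the delay length and the command encoding are arbitrary choices.

```python
# Toy system with a hidden state: a water pipe with a transport delay.
# The outflow at step t depends on whether the valve was opened at least
# DELAY steps ago -- something a forecaster has to memorize.
DELAY = 5  # steps until water reaches the far end (arbitrary choice)

def simulate(valve_commands):
    """Return the observed outflow for a sequence of 'open'/'close'/'hold' commands."""
    outflow = []
    open_since = None                      # hidden state: when was the valve opened?
    for t, cmd in enumerate(valve_commands):
        if cmd == "open" and open_since is None:
            open_since = t
        elif cmd == "close":
            open_since = None
        flowing = open_since is not None and t - open_since >= DELAY
        outflow.append(1.0 if flowing else 0.0)
    return outflow

print(simulate(["open"] + ["hold"] * 9 + ["close"] + ["hold"] * 4))
# Local averaging of the last outflow values (the ARIMA-style view) cannot
# anticipate the jump at t = 5; only the memorized valve state can.
```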
00:14:22: Very good.
00:14:23: Now you have come with a new time series foundation model called Tirex, the king of time series.
00:14:32: I assume that's what you want to convey with that.
00:14:35: And it's based on XLSTM.
00:14:37: You just introduced XLSTM in the comparison with a transformer.
00:14:41: But what are the main features?
00:14:43: What is the USP of Tirex?
00:14:46: Yeah, Tirex indeed.
00:14:48: It's the king of time series.
00:14:50: It's a king of time series models.
00:14:53: First of all, it's based on XLSTM.
00:14:56: And I already told you that.
00:14:59: the original LSTM is very, very good in time series prediction.
00:15:02: Now we improved it.
00:15:04: But it still kept its super performance in time series prediction.
00:15:09: It's very good.
00:15:11: But with all these tricks of the transformers, it became even more powerful.
00:15:18: And this is a time series foundation model.
00:15:20: What does this mean?
00:15:22: This is a new kind of time series prediction which came out of these large language models, because of in-context learning:
00:15:32: for large language models, you can write something in the context, you give some questions or give some examples.
00:15:40: And then the large language model processes this and gives you an answer.
00:15:44: Here's the idea:
00:15:45: I train a very large model on many, many different time series.
00:15:50: And then I give a new time series in context.
00:15:53: It's like a prompt.
00:15:54: It's like a question.
00:15:55: But in this case, only numerical values.
00:15:58: It's a time series.
00:15:59: And then you say: can you give me the future?
00:16:03: Can you give me the next time point or the next ten time points?
00:16:06: Or can you give me what's happening in a hundred time points?
00:16:10: And this is the idea of the large language models.
00:16:15: They have so much knowledge.
00:16:18: And these time series foundation models have so much knowledge about time series
00:16:25: that they don't have to learn new time series; they already see patterns they saw in other time series, and if we give as a prefix the beginning of a time series, for them it's clear: yes, the future will look like this.
00:16:40: Here we have the very, very big advantage of foundation models.
00:16:43: First of all, they allow non-experts to use high-quality time series models.
00:16:51: You have no idea about time series.
00:16:53: You put it in context, all your values, and you get good prediction.
00:16:56: Wow.
00:16:58: You don't have to know anything about time series or deep learning.
00:17:01: That's the first big advantage.
00:17:03: The second big advantage is if you don't have enough data, then you cannot learn a model for your particular domain or time series.
00:17:15: But with this foundation model, you only give the beginning of your time series,
00:17:19: and you don't have any data, you don't have training data, but the model already makes good predictions.
00:17:25: Therefore it's perfectly suited for tasks where not enough data is available.
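As a concrete picture of this zero-shot, in-context usage, here is a minimal sketch. The repository id NX-AI/TiRex is the one from the show notes, but the package name and the load_model/forecast call are assumptions made for illustration; check the model card on Hugging Face for the actual interface.

```python
# Minimal zero-shot forecasting sketch. The import, function and argument
# names are ASSUMED for illustration; the real TiRex interface may differ
# (see the Hugging Face model card, NX-AI/TiRex).
import torch
from tirex import load_model  # assumed package entry point

# Historical values of the series to forecast: this is the whole "prompt".
history = torch.tensor([112., 118., 132., 129., 121., 135., 148., 148.,
                        136., 119., 104., 118.])

model = load_model("NX-AI/TiRex")            # download the pretrained base model
prediction = model.forecast(
    context=history,                         # the in-context "question"
    prediction_length=6,                     # ask for the next 6 time points
)
print(prediction)                            # no task-specific training happened
```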
00:17:33: Okay, very good.
00:17:35: So what about, so this is like about the quality, maybe the use, we come to that in a moment.
00:17:41: At the very end, we're going to be looking at some, some benchmark numbers, maybe do some comparison as well.
00:17:47: But before then, if you compare, what about the size of the model?
00:17:53: What about the speed of the model in relation to other solutions in the market?
00:17:57: Okay.
00:17:57: I'll go to the numbers later, but compared to other solutions, I have to mention that the other solutions,
00:18:03: almost all other competitors in this domain, meaning time series foundation models, are based on the transformer technology, because it's so popular, it's so successful in large language models, in ChatGPT, and they have a problem.
00:18:22: They have a problem because they are typically very large and they are typically very slow.
00:18:30: For example, if you give a time series, as I said, in context, then for every prediction they have to go over the whole time series again and again.
00:18:39: They are super slow.
00:18:41: What we achieved is two things.
00:18:44: First of all, our model is small.
00:18:47: Our model has, because it's based on XLSTM, a fixed-size memory; therefore it is
00:18:53: perfectly suited for embedded systems and edge devices, which transformers cannot do.
00:19:00: And we are super fast.
00:19:02: We are super fast because of two reasons, because we are small.
00:19:06: If we are small, we are faster because we don't have to do so much computations.
00:19:11: And because, in inference, transformers are quadratic in their context length, in the length of the time series you give in context,
00:19:21: while the LSTM is linear, because it only accesses its memory,
00:19:27: it's faster.
00:19:27: It's much faster.
00:19:29: It's smaller and faster.
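The speed argument can be pictured with a rough, purely illustrative count of operations: a recurrent model touches each new value once and keeps a fixed-size state, while plain attention re-reads the entire context for every position it processes.

```python
# Rough operation counts for a context of length T, ignoring constants and
# model width -- illustrative only.
def recurrent_ops(T: int) -> int:
    return T                                   # one state update per time step

def attention_ops(T: int) -> int:
    return sum(t for t in range(1, T + 1))     # step t attends to all t positions

for T in (100, 1_000, 10_000):
    print(f"T={T}: recurrent {recurrent_ops(T):>8}  attention {attention_ops(T):>12}")
# At T = 10,000 that is 10,000 state updates versus roughly 50 million
# attention scores, and the recurrent state stays the same size however
# long the context grows -- which is what makes small edge devices feasible.
```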
00:19:30: And now the most important thing is that it's even better in prediction quality, in forecasting quality, because the XLSTM we use is able to do state tracking.
00:19:45: I told you, there are states like in hydrology, if you want to predict how much water is in your river, there are these hidden states.
00:19:53: Water is in the snow, water is in the soil, water is in the underground basins.
00:19:58: And you have to keep track of this.
00:20:00: You have to memorize it.
00:20:01: You have to track: oh, it's raining.
00:20:03: But the water is going into the soil, and it will flow out later.
00:20:07: And these are states, these are hidden states of the system.
00:20:12: Also in robotics, the state would be: where is your robot arm?
00:20:15: You can memorize what movements you have done and where your robot arm is located, and LSTM can do that,
00:20:25: but transformers, or these fast models like RWKV or Mamba, these models which came out, cannot do state tracking, cannot keep track of or monitor which state your system is in.
00:20:42: And that's so important. And therefore we are so much better on many time series, because we can do state tracking.
00:20:50: We can memorize in what state a complex model is.
00:20:55: And to come to the competitors, our competitors are things like Chronos from Amazon, TimesFM from Google, Moirai from Salesforce, Toto from Datadog,
00:21:12: and also Alibaba, the Chinese company, put some new foundation models for time series onto the Hugging Face leaderboards only a couple of days ago.
00:21:26: And these are big companies, they devoted big teams to getting good models,
00:21:31: and we are considerably better.
00:21:33: We are clearly better than all these methods because we have an advantage because we can do the state tracking.
00:21:41: And it's not only a small difference, it's a clear difference where we are better.
00:21:46: And all these big companies could not keep up with us because it's a technology.
00:21:53: It's our technology, it's a NXAI technology, it's European technology, which has beaten everything else.
00:22:00: And we are not only better in forecasting; as already said, we are also faster and we are smaller.
00:22:07: And this is fantastic.
00:22:09: That's unbelievable.
00:22:10: We are better, faster, smaller.
00:22:14: And we are so happy.
00:22:16: We are so excited that we are clearly in front compared to the teams of these big companies.
00:22:23: That's great.
00:22:24: We can really feel your excitement, Sepp, that is really great.
00:22:28: Higher quality, more speed, smaller.
00:22:31: What does that mean?
00:22:33: You already referred to edge as a potential.
00:22:36: Maybe give us a couple of typical use cases where you see Tirex being applied.
00:22:44: Tirex should become the standard if you do time series forecasting at the machinery.
00:22:52: If you have a small device and you want to know what's happening on your machine and do better control,
00:23:00: you should use this, because at the machinery you have to be fast, to intervene fast enough, and you have to be small, because you cannot put a big computer beside your machinery.
00:23:12: Small and fast is important and being good is also an advantage.
00:23:16: Or in process control, like a digital twin:
00:23:19: you have a simulation and you do prognosis, you do forecasting of your system, like the heat.
00:23:28: Is it too hot at some point?
00:23:31: If it's too hot, if the forecasting says it will become too hot: you have an industrial process, you have this small device on the side with Tirex in it, and Tirex says, hey, stop, it's becoming too hot.
00:23:45: Then you regulate down, or Tirex tells you: ah, the catalyst is not well distributed, because with forecasting I can predict the distribution of the catalyst, of some chemical material, in your process.
00:24:00: It says, hey, we have to change this, give more of it or whatever.
00:24:05: And this is important because this has to be in real time.
00:24:09: If you want to steer the process,
00:24:12: if you want to control the process, it has to have real-time capabilities.
00:24:17: It has to be small, because it has to fit into a small device, an embedded device, in your production system.
00:24:23: But also Tirex:
00:24:24: you will see it in autonomous driving, because in cars you have to predict when the battery is empty, and there are many prediction tasks.
00:24:33: You will see it in drones, where you have to predict things.
00:24:37: You will see it in all autonomous systems, especially in autonomous production systems, because Tirex is good.
00:24:47: Tirex, I mean, has this quality of prediction, is small, it fits on small devices, and it's super fast.
00:24:54: Yes, that's ideal for industry.
00:24:57: Industry should jump on it.
00:24:59: Exactly.
00:25:00: And I'm so happy and I'm sure that many listeners are so happy hearing exactly this.
00:25:07: It's almost as if, you know, we started working three, four years ago, and now you come with this great solution, almost as if it was specifically made for our audience, so to say.
00:25:21: Very good.
00:25:22: So you already referred to it, to who the 'you' is going to be.
00:25:26: I mean, you referred to the continued state tracking, but also to the in-context learning specifically.
00:25:33: So what does that mean?
00:25:35: Who is going to be the typical user?
00:25:37: Is that changing?
00:25:38: Is it more the data scientist type of very knowledgeable person?
00:25:43: Or does it mean that you're going to have, typically, the domain expert being capable of using solutions that are going to be based on Tirex?
00:25:51: That's
00:25:52: a good thing, because you don't have to be an expert anymore: you download your Tirex, you feed your numbers, your time series, into the context, and you get a prediction.
00:26:05: And the prediction is as good, and in most cases even better,
00:26:10: than if you would build a model, also using expert knowledge from time series research, and do a prediction.
00:26:18: That's super good because now time series prediction is open for everybody.
00:26:23: But even better, even better, assume you are a company and you sell a device to different customers.
00:26:33: Every customer says, can you adjust the device
00:26:36: to my needs.
00:26:37: Can you adjust the device to my environment or to my product or whatever?
00:26:42: And then you need somebody who is fine-tuning the time series prediction model, the forecasting model, for each customer.
00:26:51: If you use Tirex, for example, you put Tirex on it, on the machinery, and it goes to the customer.
00:26:58: As the customer starts the machinery, Tirex will suck in the data
00:27:05: from the machinery, put it in context, and do prediction.
00:27:10: And if the customer has a new product, Tirex will take in the data for the new product or the new use of the machinery and can do prediction.
00:27:21: If the machinery is worn out or changes its behavior, Tirex can take in the current data and do prediction.
00:27:30: When you sell something, you don't have to care about it, because Tirex can adapt to all changes: it can automatically load the time series into the context and track the machinery, track the use of the machinery, and you don't have to do anything anymore as a company selling machinery with time series forecasting built into the machines you're selling.
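One way to picture this "ship it and let it adapt" behavior is a rolling context buffer on the device: the newest measurements simply become the prompt for the next forecast, with no retraining per customer or per product. The forecast call below reuses the assumed interface from the earlier sketch, and the buffer length is an arbitrary choice.

```python
from collections import deque

CONTEXT_LEN = 512                  # how much recent history the device keeps (assumed)
recent = deque(maxlen=CONTEXT_LEN)

def on_new_measurement(value, model, horizon=12):
    """Called for every new sensor value coming from the machine."""
    recent.append(value)
    # No per-customer fine-tuning: the latest window of data is fed to the
    # foundation model as its in-context prompt, so the forecast follows new
    # products, wear and changed behavior automatically.
    return model.forecast(context=list(recent), prediction_length=horizon)
```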
00:27:59: That's really great to hear.
00:28:01: It's a direction that I've been looking at and expecting, almost, for quite some time:
00:28:07: that domain experts are going to be, you know, using their data, the data that they have been producing but were never capable of doing something with themselves, and always needed to go to other people, third parties or in-house.
00:28:23: Now, you gave a general example of a company, you know, selling devices.
00:28:29: Now, what
00:28:29: is going to be the type of Tirex customer, what kind of product
00:28:35: or service are they going to build on top of Tirex, or are they going to be using Tirex directly?
00:28:43: And maybe you want to tell us then, in relation to that, under what type of license you're going to put Tirex onto the market.
00:28:51: First of all, Tirex is a base model.
00:28:54: We will put it on Hugging Face to show everybody
00:28:58: that we are better: better than the Amazon guys, Google guys, Salesforce guys, Datadog guys, Alibaba guys, you name it; better than the Americans and the Chinese.
00:29:09: So we have to go out.
00:29:10: But what we can do then is fine-tuning.
00:29:15: The base model can do every time series.
00:29:18: But if you have enough data in one domain, you can tweak it a little bit, and you always get better in this specific domain if you adjust it, and there are tricks for how to do the fine-tuning, how to adjust it to a specific application, so you get better.
00:29:36: The base model is already better than the specific models used by the statistics guys, what is used right now,
00:29:45: but you can get even better if you do fine-tuning, fine adjustment,
00:29:52: if you go into your domain, and this would be customers,
00:29:56: where we say: we have the base model, but we can adapt it to your use case and you get even better performance.
00:30:03: Perhaps you get even faster.
00:30:05: We can adapt it to your hardware, to your chip, to your embedded device.
00:30:13: And here the customer will pay us, hopefully, so that we adapt this super cool model,
00:30:21: which is super strong,
00:30:23: to their hardware, their specific applications.
00:30:29: Talking about the specific data.
00:30:31: I understand.
00:30:32: So there's going to be, I don't know, there's going to be a hydraulic model.
00:30:36: There's going to be a, whatever type of machine robotic model, et cetera.
00:30:41: Now, the model that you come with, which is already very powerful, was that based on available public data, or maybe also on data from companies that you have been working with in specific industrial segments?
00:30:57: Right now it's only based on public data.
00:31:01: That's important, because otherwise we would have license problems.
00:31:05: It's based on public data.
00:31:07: And here a nice thing is, a couple of days ago
00:31:10: a new model came out.
00:31:12: It's called Toto, from Datadog, a big American company.
00:31:17: And they said they had one trillion internal data points,
00:31:23: in addition to the public data we are using.
00:31:27: And we are still better.
00:31:28: That's like a joke because they used internal data to build their model.
00:31:34: In addition to the data we have available: imagine if we had all the data the companies have internally.
00:31:43: We are beating them, but what a model we would build if we also had access to this data!
00:31:50: It would be unbelievable.
00:31:52: And here we hope that we get more data
00:31:55: from our industrial partners to build, on top of this Tirex model, even better, more specific models.
00:32:07: Like multivariate data; we already have ideas for how to do multivariate, stuff like this.
00:32:12: But here, for building this, we need good data.
00:32:16: And we are right now collecting data.
00:32:19: We are right now asking different partners: can we collaborate to build even stronger time series models?
00:32:27: We are so strong already, but we are looking into the future.
00:32:30: We can become even better.
00:32:32: Yeah.
00:32:32: And I'm sure that there are going to be hundreds, if not thousands, of listeners from companies that are going to be very, very much interested in using their data, one way or the other, in combination with your Tirex.
00:32:46: Okay.
00:32:46: Let's look a little bit at the numbers. You referred to
00:32:50: two or three competitors, let's say, already in the market. Maybe you want to share with us what the number one, or maybe the number two or three, time series benchmarks are, and you referred to two or three potential competitors.
00:33:06: And maybe you want to tell us then how Tirex is performing relative to them.
00:33:12: Yes.
00:33:13: It's a little bit complicated, because there are several evaluation measures, and if you're not familiar with them, they are only numbers to you.
00:33:22: Let's say we go back to the status we were seeing; right now there are new submissions.
00:33:28: There's one measurement method.
00:33:31: It's called CRPS.
00:33:33: It's about probabilistic forecasting, where you not only predict one point but something like an interval, and you know how good it is,
00:33:43: and there were these numbers.
00:33:45: The smaller numbers are better.
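For orientation: CRPS, the continuous ranked probability score, scores a whole forecast distribution against the value that actually occurred, and lower is better. When a model outputs quantiles, CRPS is commonly approximated by averaging the pinball (quantile) loss over the quantile levels, roughly as in this sketch; exact scaling and normalization conventions differ between benchmark implementations.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss at quantile level q."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1.0) * diff))

def approx_crps(y_true, quantile_preds):
    """Approximate CRPS as twice the mean pinball loss over quantile levels."""
    losses = [pinball_loss(y_true, pred, q) for q, pred in quantile_preds.items()]
    return 2.0 * np.mean(losses)

y = np.array([10.0, 12.0, 9.0])                      # what actually happened
preds = {0.1: np.array([8.0, 10.0, 7.5]),            # forecast quantiles
         0.5: np.array([10.5, 12.5, 9.0]),
         0.9: np.array([13.0, 15.0, 11.0])}
print(approx_crps(y, preds))                         # smaller is better
```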
00:33:48: Chronos had 0.48.
00:33:52: Chronos is from Amazon; TimesFM from Google has 0.46.
00:33:59: TabPFN, that's a method from Frank Hutter in Freiburg, has 0.48.
00:34:06: All these methods are foundation methods; there's also Moirai
00:34:12: from Salesforce, and Salesforce invested a lot into time series.
00:34:17: It was about 0.5.
00:34:19: And you see, they are all lined up at 0.46, 0.47, 0.46, 0.47.
00:34:27: And we get, on the same measurement, 0.41.
00:34:31: There's a big gap.
00:34:32: All the big companies are competing at the level of 0.46, 0.47,
00:34:39: and we, with our first submission, got 0.41;
00:34:43: you see, it's a gap.
00:34:45: Another method, another criterion, would be the rank.
00:34:50: You don't do the evaluation on just one time series;
00:34:52: you go over many, many time series, and then you want to see how good you are, what rank you are on.
00:35:00: What is the average rank?
00:35:02: Perhaps you're second once, then third, then first, and you take the average rank.
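The rank criterion is easy to state in code: on every dataset each model gets a place (1 = best), and the places are then averaged per model over all datasets. The scores below are made-up illustrative numbers, not leaderboard values.

```python
import numpy as np

# Made-up scores (lower = better): one row per dataset, one column per model.
scores = np.array([
    [0.41, 0.46, 0.48],   # dataset 1
    [0.35, 0.44, 0.39],   # dataset 2
    [0.52, 0.50, 0.55],   # dataset 3
])
models = ["model A", "model B", "model C"]

# Place within each dataset (1 = best; ties ignored for simplicity),
# then average the places over all datasets.
places = scores.argsort(axis=1).argsort(axis=1) + 1
for name, avg in zip(models, places.mean(axis=0)):
    print(f"{name}: average rank {avg:.2f}")
```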
00:35:08: And if we do this average rank, on what place you are, we get for Tirex an average rank of three, over many, many, many methods, also specialized methods.
00:35:22: And the next best method, like TimesFM, has six on average, is on place six on average.
00:35:29: Chronos is on place seven on average.
00:35:31: Moirai is also on place seven on average, and so on.
00:35:35: And the next ones are on six.
00:35:38: You see, there's a big gap, whether you measure the performance directly, the prediction performance, or you rank
00:35:46: the methods, say on what place you are, first or second, and then average over the places.
00:35:52: We are also with a big gap better than all others.
00:35:55: It's so fantastic.
00:35:57: We couldn't believe it that we are performing so, so good.
00:36:02: And the reason for this is the technologies that you referred to,
00:36:06: continued state tracking, in-context learning, in combination with, on top of, XLSTM, whereas all the other ones are transformer-based, or is that not necessarily so?
00:36:17: All the others are now transformer-based, because transformers are so popular.
00:36:22: But in industry, in practice,
00:36:25: LSTM performed very well.
00:36:27: LSTM was always strong in time series, but the more modern methods were transformer-based.
00:36:33: And this is now in-context learning, where you don't train, which is known from large language models.
00:36:40: And therefore everybody jumped onto transformers because we know transformers can do this.
00:36:45: Perhaps it was not clear that
00:36:48: LSTM or XLSTM can do this.
00:36:50: That XLSTM can do it,
00:36:52: for me, was clear, because we are also doing language, but here the performance went through the roof.
00:37:00: Very good.
00:37:00: Congratulations.
00:37:02: It sounds really, really impressive.
00:37:04: Before we're going to close off, why don't you share with us where you are based, where your team is, both for NXAI as well as for your job at the Johannes Kepler University.
00:37:17: Maybe you're looking for new colleagues.
00:37:19: Maybe there are jobs open, and if so, what should interested people bring?
00:37:25: Yes, indeed we have jobs open.
00:37:28: We are located in Linz:
00:37:29: both the company NXAI is in Linz,
00:37:33: and also my institute at the university is in Linz.
00:37:37: We are always looking for very motivated, interested researchers, but also developers.
00:37:46: It's such an exciting field.
00:37:48: Believe me, if you join us, you will have fun.
00:37:50: It is just great to do it, and there are also many successes.
00:37:54: But what we also offer is a dual system, so that you can also work from home half of the time, or something like this.
00:38:03: This can be negotiated, and you have a very inspiring environment.
00:38:09: Many researchers, many new ideas.
00:38:12: Everything is on fire.
00:38:14: That's amazing.
00:38:15: Maybe I'll consider
00:38:17: applying for a job with you.
00:38:19: No, I will not.
00:38:22: But you, dear listener, I'm sure that there's going to be many, many people.
00:38:26: And I think the most important thing is that you are, well, maybe even too modest, but we can all feel your excitement.
00:38:34: And we heard again today about the great technology coming from you, coming from Linz, and also coming from Europe.
00:38:45: So I can only support you and suggest that any interested party or person listening contact you.
00:38:53: so Sepp, thank you very, very much again.
00:38:57: As I suggested before, it feels almost like you are now so close to our industry, to our industrial environment here.
00:39:06: We are very, very much looking forward to seeing solutions based on Tirex, the time series foundation model that is better, smaller and faster.
00:39:18: Thank you very much, Sepp, and looking forward to see you soon in the Alps again.
00:39:22: Yes, it was a pleasure.
00:39:24: And please check out Tirex.
00:39:27: It's rewarding.
00:39:28: Thank you, Sepp.
00:39:30: Bye bye.
00:39:30: Bye bye.