In this episode of SwarmCast, our CEO Harel Boren is joined by our long-time partner Mike Erlihson, former Principal Data Scientist at Salt Security.
Join us to hear about Mike’s Deep Learning Journey:
1) What inspired his journey from pure and applied mathematics into the AI industry?
2) How Salt Security migrated all AI model training to SwarmOne and transformed its entire AI life cycle
Read the full transcript
Harel Boren: Welcome to SwarmCast, a podcast dedicated to exploring key AI and data science topics with industry leaders. SwarmOne AI is the autonomous AI infrastructure platform for all AI workloads, from training through evaluation all the way to deployment, self-setting and self-optimizing in any compute environment, whether on-premise or in the cloud. I’d like to introduce today’s renowned guest, Mike Erlihson, or Dr. Mike Erlihson, I should say. Mike is a leading AI expert specializing in deep learning and data science, with a PhD in mathematics. As head of the AI and cybersecurity domain, he brings extensive experience in AI research, having reviewed over 400 deep learning papers, perhaps even more. He is a prominent AI influencer, lecturer, and scientific content creator; everyone in Israel recognizes Mike Erlihson. So Mike, very happy to have you today.
You started in pure mathematics; your PhD was in applied mathematics.
Let’s kick off with a quick introduction about your own background and experience. You started in pure mathematics, right?
Guest introduction
Mike Erlihson: Yeah. First of all, thank you for inviting me to speak on such a cool podcast; I’m really happy about this opportunity. And you know, I started from pure mathematics. Well, my first degree, my BSc, was from the Moscow Innovation University, and it was not in pure mathematics; it was in computer science and mathematics. I did my second degree, a master’s, there as well. Then I emigrated to Israel and wanted to continue my learning path with a PhD, because actually everyone in our family has a PhD, so I had no other choice; it’s a kind of pedigree. I came to the Technion wanting to do computer science, but I was told I would need to take many, many courses just to be accepted to a second degree, not even to the PhD.
Harel Boren: Okay.
Mike Erlihson: Then I came to the mathematics faculty and they told me, okay, you can start a second degree. I couldn’t start the PhD there straight away, but I could start the master’s, and I began studying there without any extra courses, with a very high grade average. I completed my master’s degree in applied mathematics. It wasn’t pure mathematics, but it was more pure than applied: quite esoteric stuff, random combinatorial structures and limit shapes of some stochastic processes. My PhD was in applied mathematics too, but still much closer to pure mathematics than to applied. So actually you are correct: my journey started from, let’s call it, 90% pure mathematics.
Harel Boren: Got you.
Question:
Harel Boren: What inspired your journey from pure and applied mathematics into the AI industry?
So a question that came into my mind before we sat down to speak: what was actually your inspiration for taking yourself out of pure mathematics? And what I understand now for the first time is that you’ve actually done your second degree twice. So what inspired this very diligent journey from pure and applied mathematics into the AI industry, and to the level of expertise that you hold today?
Mike Erlihson: Actually, there are several possible answers to that question, and I’m going to give you the answer I love the most. In pure mathematics I have, I think, five papers, all of them written before 2008. And I’d like to ask you: how many people in the whole world, in the whole scientific community, are capable of understanding those five papers I wrote? What do you think?
Harel Boren: Well, I can judge from many other papers. Either I decide to spend the whole weekend banging my head against the concepts and leave with 70% understanding, or I simply don’t understand. You really have to go very, very deep.
Mike Erlihson: I guess that in the best case there are a few hundred mathematicians in the whole world capable of understanding the papers I wrote. And that doesn’t sound good to me; it didn’t sound good to me then and it doesn’t sound good to me now. I wanted to contribute to humanity, to do something significant, and I wasn’t capable of doing it in pure mathematics. By the way, there are mathematicians who can do that within the framework of pure mathematics, but maybe I’m not talented enough. So I decided to leave mathematics, go to industry, and try to contribute to the world not through pure mathematics but through the things companies develop.
Harel Boren: And that was essentially AI. When did you make your move?
Mike Erlihson: It wasn’t AI at first, because I started my career in high tech almost 20 years ago, in 2006. I’m not even sure I knew then what AI was; I knew there was something called artificial intelligence. But I started my journey in high tech from different stuff. I started from computer vision, then went to a company developing wireless communication chips for base stations, then taught some courses at the university in Be’er Sheva, and then came to Samsung. Only at the end of 2016 came some random lecture. And I want to emphasize that, because this whole journey to AI was quite random. Why do I call it random? Because I was working at Samsung on solid-state devices, with almost no relation to machine learning or AI, and then came this random lecture that I didn’t even want to attend. I got to that lecture and my eyes became twice as big, because I saw all this mathematical structure, all these convolutional neural nets, AlexNet and LeNet, and I saw all this stuff actually working, and working better than all the manually built features. And I told myself: okay Mike, now you need to go into deep learning. Then deep learning slowly transformed into AI, and that was the beginning of my journey. I’m very happy about it, because before I got into deep learning I couldn’t fully say that I loved what I was doing. Now I don’t just love what I’m doing, I’m in love with what I’m doing.
Harel Boren: That’s really beautiful.
Mike Erlihson: And that is stronger.
Harel Boren: That’s really beautiful. I recall, in a different scenario and a different world, and I assume a year or two later, that moment when you see an AI model really working for the first time. It looks like magic. And everything that comes after it, whether more complicated or simpler, is always magic too. I think this is what causes people to really fall in love with AI.
Mike Erlihson: Yeah, exactly. By the way, after DeepSeek it’s very common to call it the “aha moment.” For me, the aha moment in deep learning, in artificial intelligence, was that lecture at the end of 2016.
Harel Boren: Right. For me, by the way, the aha moment came after running a model to distinguish between cats and dogs. I included a small function to understand which features were the most interesting, which ones carried the largest weight.
Mike Erlihson: The most salient ones. Salient, yeah.
Harel Boren: To understand what is a dog and what is a cat. It turned out that the eyes were the most important features, and it really struck me how similar this is to human beings, because humans also look directly into the eyes. The fact that this particular model, a small little model with a measly dataset, was also looking into the eyes was mind-boggling for me. But enough of that.
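The idea Harel describes, checking which input features a trained classifier weighs most, can be sketched with a gradient-based saliency computation. Everything below is a toy stand-in, not the actual cat/dog model from the anecdote: a hand-made logistic "model" over four invented feature names, with a numerical gradient telling us which input the output is most sensitive to.

```python
import math

# Toy "model": logistic regression over 4 hand-named features.
# The weights are invented for illustration -- in the anecdote the
# model learned on its own that eye features mattered most.
WEIGHTS = {"eye_shape": 2.1, "ear_length": 0.4, "fur_texture": 0.2, "tail_len": 0.1}

def predict(features):
    """Return P(dog) for a dict of feature values."""
    z = sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def saliency(features, eps=1e-4):
    """Numerical gradient of the output w.r.t. each input feature."""
    base = predict(features)
    grads = {}
    for k in features:
        bumped = dict(features)
        bumped[k] += eps
        grads[k] = abs(predict(bumped) - base) / eps
    return grads

x = {"eye_shape": 0.8, "ear_length": 0.5, "fur_texture": 0.3, "tail_len": 0.9}
grads = saliency(x)
most_salient = max(grads, key=grads.get)  # the feature the output reacts to most
```

In a real vision model the same idea is applied pixel-wise with backpropagated gradients, producing the saliency maps Mike names.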
Question:
Harel Boren: So what developments in AI and data science are you most excited about right now? You really have to keep pace, because this changes on a monthly basis.
Mike Erlihson: Okay, first of all it is really hard to answer that question because, as you mentioned, there are a lot of developments, and a lot of interesting ones: all these language models, agents, multi-agentic flows, and stuff like that. What is really popular now is the Model Context Protocol from Anthropic, and now from OpenAI too. After Anthropic put out a blog post about it, I think a hundred companies adopted it within a week, and now OpenAI has joined the celebration; we can define it that way. What is really exciting is the rate of innovation. The rate of innovation is exponential, and not just a plain exponential; I think it is exponential in x squared. By the way, I don’t think everything coming out of these big companies is really very innovative; I think about 20% of the stuff published in the media is really innovative. But still, if I compare it to the state of data science, AI, and machine learning, say, five years ago, this rate of innovation is enormous. You just couldn’t imagine this speed of innovation even three years ago. The number of tools, of new models, of systems and packages: you just can’t follow all this innovation. I am in many WhatsApp groups and I’m trying to stay really updated, and it is really hard. I think one year from now it will be totally impossible to stay on top of everything happening in AI.
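The Model Context Protocol Mike mentions is, at its core, a JSON-RPC-style exchange in which a client asks a tool server what it offers and then invokes those tools. The sketch below is schematic: the message shapes follow JSON-RPC 2.0 conventions, but the method and field names here should be treated as assumptions rather than a faithful reproduction of the MCP specification.

```python
import json

# Schematic MCP-style exchange: a client asks a tool server what it
# offers. Message shapes follow JSON-RPC 2.0; exact field names in the
# real MCP spec may differ -- treat this as a sketch.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

def handle(msg, tools):
    """Toy server: answer a tools/list request from a static registry."""
    if msg.get("method") == "tools/list":
        return {"jsonrpc": "2.0", "id": msg["id"], "result": {"tools": tools}}
    # JSON-RPC's standard "method not found" error code
    return {"jsonrpc": "2.0", "id": msg.get("id"),
            "error": {"code": -32601, "message": "Method not found"}}

# A hypothetical tool an LLM client could then decide to call.
TOOLS = [{"name": "search_logs", "description": "Search application logs"}]
response = handle(json.loads(json.dumps(request)), TOOLS)
```

The point of the protocol is exactly this decoupling: any client that speaks the message format can discover and use any server's tools, which is why so many companies could adopt it so quickly.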
“I think there is some misuse of vibe coding by younger data scientists“
Mike Erlihson: But what is really exciting for me now is what is called vibe coding, and let me explain. I think it is a very important concept, but I also think there is some misuse of it. Everybody knows what vibe coding is: you don’t write your code yourself, you tell Cursor or Copilot or one of dozens of tools that can write your code for you. The concept is that English is the new programming language. For me, as a quite experienced data scientist with quite vast and in some places quite deep knowledge of AI, it is cool, because I don’t need to spend my time on coding. For example, I wanted to do something that involved scraping 15 different websites. Three years ago it would have taken me days, maybe weeks. How much time did it take me now? Two hours. And I barely wrote any code myself, maybe one line: some print, some logger. I’m using Cursor, I love Cursor, and Cursor wrote the code. There were some bugs; I tried to run it, got the bugs, did copy-paste, and the only word I wrote was “fix.” Two hours later I had working code. But you know, it is not code that goes to production; it was something for myself, so all the security considerations were not so important. I wanted to scrape some paper reviews. As you mentioned, I love paper reviews, so I wanted to scrape 15 people who do paper reviews, summarize their reviews, and learn from them.
But for data scientists who have less experience and want to put such code into production, this vibe coding concept can be quite dangerous. I don’t care about some security bugs in my throwaway code; it is not important. But if you’re using vibe coding to put code into production, and you don’t understand your code, and you don’t understand it in depth, it can be very, very dangerous. So on one side I’m really excited, because it really helps me: it shortens the path from thinking about a concept to implementing it, which two years ago would have taken me weeks in some cases and now takes me hours. But for younger data scientists with less experience: take it with a grain of salt. You can use vibe coding, but you must understand your code.
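The kind of scraping task Mike describes, pulling review titles off a set of pages, can be sketched with the standard library alone. The HTML snippet and the `review-title` class name below are invented stand-ins for whatever the real review sites use; a generated scraper would look similar, which is exactly why it is worth being able to read it.

```python
from html.parser import HTMLParser

# A stand-in for one of the review pages Mike mentions scraping; the
# markup and the class name are invented for illustration.
PAGE = """
<html><body>
  <h2 class="review-title">Attention Is All You Need - review</h2>
  <h2 class="review-title">ResNet - review</h2>
  <h2 class="other">Sidebar</h2>
</body></html>
"""

class ReviewTitleParser(HTMLParser):
    """Collect the text of <h2 class="review-title"> elements."""
    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        self._in_title = tag == "h2" and ("class", "review-title") in attrs

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.titles.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

parser = ReviewTitleParser()
parser.feed(PAGE)
```

Understanding even this much, which tags are matched and why the sidebar heading is skipped, is the "grain of salt" Mike asks for before trusting generated code.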
Harel Boren: Okay, yes. I have a feeling it is the difference between code as you write it yourself as a student, as opposed to having code in a textbook where your mission is to understand it. If you jump over the understanding, you’re jumping over a very dangerous part. I perfectly agree with you.
Question:
Harel Boren: Do you feel that the hardware in AI keeps pace with the software?
With all this turmoil and advancement in AI, do you feel that the hardware part of the stack keeps pace with the software part of the stack, or is there some dissonance in this regard?
Mike Erlihson: I’m not sure there is any dissonance. You know, I’m familiar with “The Bitter Lesson” by Richard Sutton, the very famous blog post in which he argued that most advancements in AI were built on compute rather than on the clever engineering ideas of the most profound and famous data scientists. I think we are still relying on huge amounts of compute. Just take a look at companies like Meta, Amazon, Google: they have an almost infinite amount of compute, and these are the companies building the most advanced language models. If I wake up someday and hear that some small company has developed a large foundational language model from scratch, I will be very surprised, because you need tens of thousands of GPUs to do it, and even if you have tens of thousands of GPUs, you need the infrastructure. So most advancements in AI are still driven by huge amounts of compute. By the way, just a day or two ago I wrote a blog post with my thoughts about Sutton’s bitter lesson. That post of his was from 2019, I think, and the situation has been changing. With all this clever usage of test-time compute and RAG, when you use Anthropic or OpenAI or Gemini now, it is not just an LLM. It is a multi-layered system: RAG, databases, caches, a very complicated system that can also search the Internet. So it is not just compute now. Still, if you don’t have compute you can’t develop foundational models, but you can develop smaller ones. Everyone knows what DeepSeek is and what they did, and what they did is a kind of democratization of LLM training. I think that now even quite small companies can build some very clever LLMs, not very large and not from scratch.
So compute is important, the hardware is important. But one year from now, two years, maybe three, I feel, and maybe I’m wrong, that the rate of development of hardware is going to slow down. What we see now is that the engineering tricks and ideas, how we build all this infrastructure and ecosystem around LLMs, are changing very fast. And I think many new developments in AI will come not just from huge amounts of compute and very developed infrastructure, but from clever engineering ideas around these LLMs. The LLM is now just a building block: you can do very smart things and build very smart tools based on it. It is not just the LLM, but the LLM is at the heart of these systems.
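The "multi-layered system" Mike describes can be illustrated at its smallest with the retrieval step of RAG: pick the stored document most relevant to a query and prepend it to the prompt that would go to the model. The scoring below is plain word overlap and the documents are invented; a real system would use embeddings, a vector database, and an actual LLM call on the resulting prompt.

```python
def score(query, doc):
    """Crude relevance: count of words shared by query and document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, docs, k=1):
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble the context-plus-question prompt an LLM would receive."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Invented toy corpus for illustration.
DOCS = [
    "MCP is a protocol for connecting models to tools.",
    "LeNet was an early convolutional network.",
]
prompt = build_prompt("what is the MCP protocol", DOCS)
```

The engineering Mike points to lives around this skeleton: caching retrieved chunks, routing between models, and adding web search as another retriever.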
Harel Boren: Yeah, thank you very much. And we’re also going to get to your interaction with SwarmOne later on.
What challenges do AI professionals and cybersecurity companies encounter when developing models
So, if you can summarize: what challenges do AI professionals, and cybersecurity companies especially, encounter when they develop, evaluate, and deploy large models to production? I mean in terms of their actual execution.
Mike Erlihson: Okay. I think the issues data scientists face in cybersecurity companies are not very different from other companies. Maybe in cybersecurity they mostly deal with data that is not natural language: logs, JSONs, HTML. And I’m not sure our LLMs, even the newest ones, have been trained on enough of this data. So there is a need, and I know a couple of companies trying to address it, for some cybersecurity LLM, some foundational cyber LLM. But it is a common issue: if you go to the health domain, or many other domains, you take some LLM and you need to fine-tune it, and in cyber the situation is not very different. Now you are going to fine-tune your model, and it is not very simple, even when you have your data. By the way, building your dataset is far from simple, but that problem exists in any domain. You can generate data, but the quality of generation is not sufficiently high in many cases. Let’s put the data issue aside, though. Now you need to fine-tune your model, and even with all the frameworks and tools for fine-tuning, and we have at least many hundreds of them, it is still not very easy. You have some model and some dataset, and you just want to fine-tune it. And by fine-tuning I don’t mean just training the model; the training itself is the easy part. It’s the monitoring of training, the evaluation of results, and all this stuff. And where are you running it? To train your model you need to go to AWS or Google, and still, it is not plug and play.
It would be very cool if I could take some model, some checkpoint of a known model, 13B, not very large, together with my dataset and my loss function. I know my loss function, next-token prediction; I can define my loss function. I’m a data scientist: I understand loss functions very well, and I understand much less about all these hardware considerations. I want to go to AWS without configuring all these machines and choosing the GPUs. I just want to train my model: plug and play. I give the model, I give the training configuration, and it just trains. And this thing, which sounds quite trivial, is currently hardly possible; even on AWS with SageMaker it is not so simple.
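The "plug and play" training Mike asks for, where the data scientist hands over only a model, a dataset, and a loss function and nothing about machines or GPUs, can be sketched as a training loop whose signature takes exactly those three things. Everything here is a toy: a one-parameter linear model and squared-error loss stand in for a 13B checkpoint and next-token prediction, and the gradient is computed numerically to keep the sketch self-contained.

```python
def train(model_param, data, loss_fn, lr=0.1, epochs=100):
    """Minimal 'plug and play' loop: the caller supplies only the
    initial parameter, the data, and the loss -- no hardware details."""
    w = model_param
    eps = 1e-6
    for _ in range(epochs):
        for x, y in data:
            # forward-difference numerical gradient of the loss w.r.t. w
            g = (loss_fn(w + eps, x, y) - loss_fn(w, x, y)) / eps
            w -= lr * g
    return w

# Toy "dataset" following y = 3x, and a squared-error loss for a
# one-weight linear model -- invented stand-ins for a real job.
data = [(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)]
loss = lambda w, x, y: (w * x - y) ** 2
w = train(0.0, data, loss)  # should converge toward 3.0
```

The gap Mike describes is that a real platform has to wrap this loop with device placement, checkpointing, and monitoring, which is exactly the part he wants hidden from the data scientist.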
Could you share your experiences using SwarmOne as Principal Data Scientist at Salt Security
Question:
Harel Boren: So maybe this connects to my next question, which was whether you could share with the audience how you actually benefited from SwarmOne as the Principal Data Scientist at Salt Security, for example in terms of the flexibility of the platform for AI infrastructure: setting it up, actually making the trivial move you described a reality. Could you share your experiences in that regard?
Mike Erlihson: Yeah, I can tell you in one sentence: the experience was very positive. You know, at Salt our models were highly customized. What do I mean by highly customized? The models we trained, two or three years ago, were quite small and quite customized: some LSTMs and GRUs and some very light transformers, models that don’t have full support in Hugging Face. People who know how to train models without full Hugging Face support know it is far from simple if you work with bare-metal AWS or even SageMaker. On the SwarmOne platform it actually was quite easy, and a couple of years ago they even helped us fix our bugs. But the most important thing is that our data scientists didn’t have to run around all these AWS configurations, choosing H100 or A100 GPUs and all the rest. It is important, but I don’t want my data scientists wasting time on stuff like that, because I think it can be done automatically, and on the SwarmOne platform it is done automatically. For example, I don’t see any errors with the word CUDA in them; I admit those CUDA errors scare me. All the model training was very simple: you just define your model, give the model, define your loss function and when you want your checkpoints, your train set, validation set, and test set, and it just runs. I don’t need to configure machines, I don’t need to install a package here and a package there, and I don’t get all those errors about the PyTorch version being incompatible with some other package, which we were getting two years ago on AWS. The whole process was really simple.
You give your model, you give your training configuration, and you’re just training your model. And one more important thing: it was much less expensive, much cheaper than AWS.
Harel Boren: Well, thank you for that. Yes, that was then, and by now it can also run on your own compute. So whether that compute is on AWS, Azure, OCI, GCP, Lambda Labs, CoreWeave, whatever it is, it is a software tool that can achieve the simplicity you were alluding to in regard to the data scientists’ activity. So thank you very much for that; it’s really important to hear from someone who’s actually been using it for years.
How is cybersecurity adapting to the growing risks of AI-powered threats
Coming back to cybersecurity, which has been an essential part of your activity and focus in past years: how is cybersecurity changing and adapting to the growing risks of AI-powered threats, if at all?
Mike Erlihson: The answer is, it is trying to cope with it. And the risks are really growing. The main reason these risks keep growing is that you can now easily engineer very sophisticated attacks, for example on your API or your cloud infrastructure, with AI. Three years ago it was hardly possible; each attack had to be developed and thought through. Now you can engineer very complicated attacks with just some language model: you give it the data and it can engineer very sophisticated and very hard-to-catch attacks. And it really affects the cybersecurity field, because now you need to incorporate the ability to detect and catch these attacks, and it is not very simple. Many old methods, detecting stuff like SQL injection and so on, you still need; but now you have to add capabilities for detecting and avoiding attacks you have never seen, because you don’t know what will come out in two days. These are called zero-day attacks. And these attacks are AI-generated, so you have no choice but to leverage AI to counter them. But this incorporation of AI capabilities into cybersecurity products is not so simple. Before, you had some not very complicated methods of anomaly detection and some regexes and dictionaries to catch things like SQL injection. Now you need to keep backward compatibility with the old stuff while incorporating AI capabilities into your product, and it’s not very simple. The plain models, if you take, say, Claude or OpenAI’s ChatGPT, they know cybersecurity, they understand cybersecurity, but the devil is in the details. If you ask ChatGPT or Claude or Gemini some general question about cybersecurity, it will give you a good answer.
But when you drill down to a real case, it is not so simple. So you need dedicated models; you need to train, to fine-tune models that are specific to the cybersecurity domain. The cybersecurity domain is like math: math is huge, and cybersecurity is not smaller. You have cloud security, WAFs, API security, DLP (data loss prevention), network security, which is also very important, and many, many more things. So I think the future of cybersecurity is quite small, dedicated models for each domain. If you want to cover the whole cybersecurity stack, or most of it, you can put together several models, each fine-tuned on a specific cybersecurity subdomain, and you need to have these models working together. And that is not so simple, because the field keeps changing. I like to say the distribution is not stationary: things change, new attacks emerge. So you need to retrain these models, fine-tune them, maybe build tools for automatically retraining them. If I can summarize my answer in one sentence: you need to adapt to the new state of the cybersecurity domain, and this adaptation goes through incorporating LLM capabilities into your product together with the good old tools, regexes and basic classic anomaly-detection methods.
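The "good old tools" Mike keeps alongside the LLMs, regexes for known injection signatures plus simple statistical anomaly detection, can be sketched in a few lines. The two patterns and the z-score threshold below are illustrative only; real WAF rulesets are far larger and constantly updated.

```python
import re
import statistics

# A couple of classic SQL-injection signatures -- illustrative only.
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),
    re.compile(r"(?i)'\s*or\s+'?1'?\s*=\s*'?1"),
]

def matches_signature(payload):
    """Regex/dictionary layer: flag payloads with known attack patterns."""
    return any(p.search(payload) for p in SQLI_PATTERNS)

def is_length_anomaly(payload, baseline_lengths, z_threshold=3.0):
    """Basic anomaly layer: flag payloads whose length deviates from
    the baseline by more than z_threshold standard deviations."""
    mu = statistics.mean(baseline_lengths)
    sigma = statistics.pstdev(baseline_lengths)
    return abs(len(payload) - mu) > z_threshold * sigma

# Invented traffic: typical benign parameter lengths and one attack.
baseline = [12, 14, 13, 15, 12, 14]
attack = "id=1 UNION SELECT password FROM users"
```

Zero-day, AI-generated attacks are exactly the ones that slip past both layers, which is Mike's argument for adding fine-tuned models on top rather than replacing these checks.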
Harel Boren: Wonderful, thank you. In quite a few of your answers I actually sense an underlying inclination toward fine-tuning as opposed to training from scratch: fine-tuning a variety of models to address zero-day attacks generated by AI, and so on. So I would define you as a fine-tuning proponent, without even asking you.
Mike Erlihson: But you know, there are only a few companies with enough money and resources to train a huge, let’s say one-trillion-parameter model from scratch. There are not too many such companies. The price of training such a foundational model is going down, but it is still huge.
Harel Boren: Got you. Well, thank you very much for sharing all of this with us today.
Mike: Serious data scientists not understanding is not good
Question:
Harel Boren: Mike, are there any recommended resources you can point to for professionals, such as books, podcasts, blogs, or tools, that coincide with your philosophy and your way of approaching issues?
Mike Erlihson: You know, it’s a very tough question, because for serious data scientists, not understanding is not good. So what I recommend is blogs that dive into some of the math. It shouldn’t be that you just take some LLM, do some prompt engineering, do some RAG, without understanding what is going on under the hood. There are a lot of blogs that dive into the math, and I think a hundred more that tell you: okay, you don’t need to understand, just take some LLM, connect some RAG, and build some agent. It can work, but still, I encourage you to learn how the stuff works. And nowadays, compared to 25 years ago when I was doing my second degree and my PhD, this knowledge is fully accessible: on YouTube, on Coursera, on Medium and The Gradient. There are a lot of places you can acquire this knowledge. So I encourage you to learn how things work, not just connect.
Harel Boren: Things without understanding, yeah. Thank you very much for this insight. It actually resonates very well with one of our previous guests, Amit Weiss, the CTO of Numinos. Our mutual perception is that the upper-layer frameworks, PyTorch, TensorFlow, Hugging Face, hand you everything on a silver platter, including some of the deeper things that make deep learning work, such as activation functions. Which activation function are you going to use now? It’s as if there are only three activation functions in this world and all the rest are irrelevant, though there are definitely some you could think up yourself. This is just one example of the many things happening in the deep layers of a neural network that have become run-of-the-mill and are no longer researched by people in the field. And it’s a very young field, so this phenomenon is rather surprising. I would say it’s a bit of a downer to see things just connected, “okay, now let’s have an answer,” instead of looking inside and seeing what we can actually improve. So yeah, you resonate with one of our previous participants, by the way.
Mike Erlihson: Nobody really understands, for now, what is going on inside these LLMs. Anthropic is making great research efforts, and Google too, to understand what is going on, but we are still not there. So I encourage you to try to understand, to read this research, to ask questions. There are many WhatsApp groups, and you can ask on LinkedIn and Reddit; you have tons of media. If you don’t understand something, ask. But don’t just accept your stuff as working: “it’s working, it’s okay.” It is not okay.
Harel Boren: Yeah. And I believe that this way you can actually achieve many more of those aha moments of the type we discussed in the very first minutes of our conversation today. Every time you pick something up and look under the hood, you find another aha moment, or at least the potential for one, which you lose if you just connect things and make them work. Okay, so this was very interesting.
Question:
Harel Boren: Where can our audience follow your work, for example on social media, blogs, or any recent publications?
Mike Erlihson: Okay, I think you can follow me in three main channels. The first is Substack, where I write constantly; we will put the links under the podcast recording. The second is LinkedIn, where I’m pretty active and share a lot of material, mostly about math, sometimes about physics. And I also have two Telegram channels, one in English and one in Hebrew, where I share my deep learning paper reviews; as Harel said, it is now about 440 reviews of deep learning papers. So: LinkedIn, Substack, Telegram channels. Please follow me. By the way, I share only deeply technical stuff that encourages your thinking. I read tons of papers, and I admit, 100% seriously, I have aha moments every day.
Harel Boren: There is almost no day... bless you.
Mike Erlihson: ...that I don’t have aha moments.
Harel Boren: I really thank you, Mike. This meeting was very insightful, as every meeting with you has been in the past.
Mike Erlison: Thank you.
Harel Boren: And I look forward to touching base with you a year from now, looking back at how nascent the technology was just 12 months earlier when we were having this conversation. To our audience I will say: if you enjoyed this session, stay connected with SwarmOne for more expert discussions, and to see Mike once again, look for any news from us at SwarmOne.AI. I wish you a very good day and a very good week. Thank you very much. Take care.