In this episode of SwarmCast, our CEO Harel Boren sits down with one of the most influential voices in AI today, Eduardo Ordax, known for making complex AI feel clear, human, and actionable.
Eduardo’s posts have become daily reference points for thousands of professionals trying to make sense of generative AI.
But behind the storytelling is a sharp strategic mind helping guide AI adoption at scale.
In this conversation, Eduardo shares:
- What’s still misunderstood about GenAI – and what comes next
- How the unseen layers of AI work can make or break real-world success
Read the full transcript
Harel Boren: Welcome to SwarmCast, a podcast dedicated to exploring key AI and data science topics with industry leaders. SwarmOne.AI is the autonomous AI infrastructure platform for all AI workloads, from training through evaluation and all the way to deployment. It is self-setting and self-optimizing, and it works across all compute environments, whether on-prem or in the cloud. So, this time I'd like to introduce today's renowned guest, someone whose name you've probably come across if you're even remotely connected to the AI space: Eduardo Ordax. Eduardo has built a well-deserved reputation as one of the most influential voices in the world of generative AI, through engaging content, powerful storytelling, and a human-first approach to tech, which I sincerely love. He's demystified AI for thousands and thousands of people around the globe. We're thrilled to have him here with us today. Hello, Eduardo.
Eduardo Ordax: Hey. Hey. It’s a real pleasure to be here finally with you today.
Harel Boren: Real pleasure on our side too.
You're a computer scientist with a deep background in AI
So maybe let's kick off with a quick introduction about your background and expertise.
Guest introduction
Eduardo Ordax: Yeah, sure. So I'm a computer scientist, so I have a technical background. I remember the old days, coding in Java, Pascal, all the old school. I've been working in sales, business development, strategy. And since 2015, I would say, yeah, 10 years ago, I jumped directly into this world of data and AI. It was mainly data 10 years ago, data analytics, not so much AI; right now it seems like everything is AI. And for the last three years, working at AWS, I've been leading go-to-market for AI across EMEA. That means helping our customers leverage AI in many different dimensions: how to improve productivity, customer experience and so on. And yeah, as I told you before, I'm super happy to be here with you today.
Harel Boren: Wonderful. A very interesting background, and the fact that you made the move so deeply.
I was enchanted by the first paragraph of your LinkedIn About section on commitment to practice
Harel Boren: So I want to start with something before I actually kick off. I was enchanted by the first paragraph of your LinkedIn About section, noting the principle that "to give anything less than your best is to sacrifice the gift." And you know, it reminded me of some of Seth Godin's analysis in his books, particularly The Practice. I really love that book. You know, showing up, showing up to the practice. Would you share with me how you got this wonderful insight? What was the spark of this insight in you?
Eduardo Ordax: It's not coming from me. You know, I used to be a professional athlete in the past. I used to run the 1500 meters; I've been competing in the nationals and so on. Actually, I have 3 minutes 50 seconds in the 1500, so it's not so bad. And I'm a big fan of athletics. And there was a very famous athlete from the US called Steve Prefontaine. He was this kind of super talented guy who was never even able to win a medal in the Olympics. But probably at that time he was, you know, the best one, because when he was running he tried to give everything from the very first meter. He was not very tactical, very strategic; it was like, hey, today we are going to die, and let's see what happens. And he had this quote: to give anything less than your best is to sacrifice the gift. To be honest, I try to apply it in my day to day. I really love it. So again, it's not mine, but it's something I try to apply every single day, across many other aspects, and I really like it a lot. Yeah.
Harel Boren: You know, this resonates very much with things you hear all over the place. And when you mentioned the, you know, "no fear to die," or "I'm ready to die," it reminds me of Max Verstappen, I hope I'm pronouncing the name correctly. He's the best Formula One driver in the world, and I saw an interview with him once where he said, well, I'm just not afraid to die. And when you do that, and you have this wholehearted commitment to the practice, coming back to Seth Godin, then you get that level of commitment. I really got enchanted by it, and it's beautiful.
You’ve built a reputation for making AI feel clear and actionable
So, you know, you've built a reputation for making AI feel clear and actionable, not just on LinkedIn but all over: in how you think, how you work and how you point at things. Why do you believe it's important to make AI more accessible and understandable for non-technical audiences? What inspires you to make this really very challenging move over and over again, I should say?
Eduardo Ordax: Yeah. You know, sometimes it's because I see myself in this dilemma. I'm coming from a technical background, I'm a computer scientist and I've been coding for many years, but then I moved more into the business side. And when you have been on both sides, you realize how disconnected they are. So I try to make sure, because right now, with AI, there is a lot of noise and a lot of misconception about what AI is and what it's not. I see the media, I talk to people, and sometimes they believe AI is just ChatGPT, for example, and it's much more than that. So because I've been on both sides, and actually in my day to day I work on both sides, because I'm talking to technical people most of the time but also to business leaders and so on, I try to make things simple, because I think things can be simple if you just explain them in the right manner. This is what I'm trying to do most of the time. But also because the process for me is super insightful. It's not the same to understand something as to understand something in order to teach it to others. Going through this process is super insightful, because you are going to gain a lot of new knowledge, you are going to learn things, and you are going to be in a better position to start explaining things, especially in this world. I always tell the same story, but it's funny: three, four years ago, I didn't have a single friend who was aware of what I was doing for a living. Yeah, they knew I was working on something related to data and AI, but they didn't have an idea. Right now all my friends are coming to me, and there is always the same discussion topic: hey, tell me about AI. What is going to happen? Humanoids, this model.
It seems that I used to be a nerd, and now I'm the cool guy, right? So I think because of that, you know, so many people want to jump into the space of AI, and you need to separate what is real versus what is not. When I go on social media, or see interviews on TV and so on, so many people are talking about AI and they really don't know what they're talking about. So I try to be at least, let's say, a trusted voice, so the people who follow me and listen to me really understand what this world of AI is about. It's essentially because of that, right?
Harel Boren: Well, that's very interesting, and I'm especially attracted to the end of your words, because AI has gone from a tool to a hype faster than anything I've ever seen. As opposed to the Internet, you know: when the Internet came about in 1995, '96, you knew that all you had to do was connect to the net with a modem and start browsing. But this topic has a lot of myth around it. There is much more myth than knowledge in the public, and this puts people under stress, especially in the business communities: I should move into AI, okay, but what should I do? So I think that your mission to demystify it actually funnels people's energy in the right directions, as opposed to walking in the dark and trying to find their way. I really like it.
You write often about technical topics that even engineers find challenging
And I noticed you write with a lot of clarity, often about technical topics that even engineers find challenging, and you manage to make them very clear and very to the point. And that's enigmatic for me, by the way, ever since I began reading your posts. What's your process for staying informed at such a high level and breaking down these ideas so well?
Eduardo Ordax: Well, there is not so much magic there, to be honest. I don't consider myself, and I'm being totally honest, a super talented person, technically speaking. I remember the old days when I was at university and I was coding; I was not very good at it. I used to be with my friends, who would show me how fast they were doing stuff, and I was struggling from the very beginning. I think the key thing for me is that I'm a hard worker, so I dedicate a lot of time. I'm a very well organized person, and I think it's about dedicating good quality time. So every day I probably spend one hour, an hour and a half, sometimes even more, going through all the different news, all the different topics. For example, I remember when Anthropic released MCP last November; not so many people were talking about it then, and they only started to talk a few months later. For me, the process is trying to go deep into that, trying to see how it works, what people are saying about it, trying to see the different technicalities. Even though I'm not a developer anymore, I try to follow this process. I use a lot of AI to go through complex topics. I use, for example, Claude: a typical thing is, hey, I'm reading a paper, I put the paper into Claude, I ask many different questions, and it helps me through the process. So in the end, because you are trying to take these concepts and explain them to others, as I was saying at the beginning, the process is much better for you, because you are going to understand it much better. It's not just, hey, I want to be aware of how it's working; I want to make sure others are going to understand it too. But again, no secret sauce. Just trying to dedicate good quality time, to be honest.
Harel Boren: So good quality time; that coincides with the first notion too. Okay, just go for it, just show up. I can resonate with that very well. And by the way, you know, things have really changed over the past several years, with ChatGPT being introduced and everything becoming LLMs. I think that has probably led to the very strong introduction of AI to the public, making it so tangible. And there must be a good level of explanation, and the fact that people like you dedicate that time and make that translation is invaluable, I believe.
One area where AI will have a super strong impact is software
May I ask: if you had to place a bet on one area where AI will have a truly, I don't want to say exponential, but super strong impact in the next two, three years, where would you put it? And perhaps why?
Eduardo Ordax: Yeah, well, I know it's always a tricky and complicated question, because this is evolving super fast. It's hard to answer because, you know, if we had tried to answer the same question two years ago, we were talking about LLMs; now we are talking about AI agents, or agentic AI, and probably in two years' time we'll be talking about humanoids doing the physical tasks that we are doing today. But I think my bet is on all those tasks that are not providing too much value, or where we as humans are not providing extra value: they are going to be replaced, or in some manner enhanced, by AI. For example, writing code. I've been writing code for so many years, and I remember, 20 years back, copying code from the books, and my teacher telling me, hey, what are you doing there? Why are you copying the code from the book? And I was like, hey, this function is useless. Why am I going to dedicate, I don't know, 20, 30 minutes to coding this if it's exactly the same thing? I need to focus on the project, not on this function, right? I think it's the same analogy, then with Stack Overflow and now with LLMs. I'm not saying that software developers are going to be replaced, not at all. What I'm saying is that they will be able to release much faster and much better software. That means that when you release a new version of something, if it used to take months, it will take weeks. So I think we are going to have much better products all the time, and we are going to see this huge explosion of new services, new companies and whatever. The number of new services is going to explode, because the time it takes to develop them is collapsing. This is one of my bets, because we are seeing it already. For some people AI has not changed their lives yet, and their work is pretty much the same.
But if you ask any developer how they used to code in the past and how they are coding right now, it's a different story. And I think the other big bet for me is in terms of customer interactions. If you look at many companies with these huge contact centers, doing stuff that takes a lot of time, they redirect you to one agent and then to another agent, and you still don't know where your order is or how to return your products, and it's so annoying because it takes a lot of time. I think because of AI, and because of AI agents, this is going to be much smoother, quicker, and a much better experience for the end user. So I think we are going to see a huge transition from what we have today in terms of customer interaction, customer support and customer experience to what we will have in just a few years' time.
Harel Boren: I agree with your second insight very much, but I find your first insight even deeper, because I hadn't thought about it that way. What you're actually saying is that the implication is going to be indirect: not necessarily through the integration, which will definitely happen, of AI models within pieces of software and services, etc. Rather, those services that were in the past just software, and may even remain just software, will be the subject of development that is much faster and quicker, widening the capacity of the human race to address so many things with software, simple software, regardless of whether AI engines are integrated within it or not. I think this is a very deep view, a very deep insight, sincerely. The way that I viewed it was that we, the human race, have been developing software for the last 75 years, and I saw AI doing two things. One is changing everything that we have done with software into AI, because everything that was done with software you can actually improve with AI, and opening areas which simple algorithms were incapable of satisfying. But what you're saying here is something much more optimistic, and I think much more realistic than that: the widening of our capacity to treat things with software. That is really wonderful, and it's optimistic.
What do you think still holds companies back from actually delivering value with AI
Thank you for that. I have another question for you, which is on the less optimistic side. At SwarmOne, we focus a lot on helping companies move from AI experimentation, so to speak, to real, scalable adoption. And especially, we're seeing that with more and more movement from enterprises into the field, trying to see, okay, what can we do? How can we improve whatever we're doing? From your conversations with leaders and teams, which I think are more prevalent than ours, what do you think still holds companies back from actually delivering value with AI? Of course this is a very general question, but what do you believe are the major, most common hurdles holding companies back or creating friction?
Eduardo Ordax: Yeah, it's a good question. And it's a good question because, you know, before this explosion of AI, I spent the previous five years working on this famous MLOps topic. I was helping customers develop MLOps practices and MLOps platforms in order to scale the adoption of ML. At that time, I remember customers were struggling to run machine learning models at scale in a reliable, efficient manner, and so on. I always talk about three main aspects: people, process and technology. I never thought technology was the big problem, and let me explain. Of course there is a lot of legacy and so on, but I think right now we have the right technology, just as we had it a few years ago for the problems of the past. Of course technology is going to improve, but I don't think it's the problem. I think the problem is on the other side: in the people part and the process part.
People need to upskill for AI because it’s evolving super fast
So if we talk about the people part, one of the main issues with AI right now is that it's evolving super fast. If you talk, for example, about AI agents, to my mind come different frameworks, like LangChain or LangGraph, or different protocols like A2A and MCP. But all of these have been released only a few months ago. How are we expecting to have people super well trained on these topics if they are completely new? And AI itself is still super new. We used to talk about data scientists, and I remember a few years ago that finding good data scientists was challenging, because not so many people were trained in data science. I think right now it's not a problem of data scientists. But again, if you think about the work of data scientists five years ago, doing feature engineering, training models, doing hyperparameter tuning, deploying models, automatically retraining models and trying to get insights from there: the work of data scientists has changed a lot. Most of the time right now they don't have to train models, because they are using LLMs for different use cases, different purposes. So even data scientists need to upskill again, and now we are talking about AI data scientists, AI engineers. So people are one of the main challenges: upskilling the people. If companies want to be AI-first, if they want to be data-driven, they need to invest in their people. I think this is one of the biggest challenges for companies to really adopt AI.
I think the other problem as I was saying is about the processes around AI
I think the other problem, as I was saying, is about the processes. Something I've seen with AI is these false promises, or, you know, inflated expectations about what AI is going to deliver, and let me explain why. Everyone can take any LLM, make some API calls, and have something that is working, working pretty nicely, to be honest, and you start getting some results. But that is one thing. One thing is doing this in experimentation mode, and a different thing is integrating it with your end systems, being compliant and so on. Because one of the points about LLMs is that they work amazingly in, I don't know, 80% or 90% of cases. What is going to happen with the remaining 10%? Many times we are not considering that remaining 10%. Companies get super excited because they have built this POC, and it's like, hey, right now we are going to increase our productivity by 10x, or we are going to improve the customer experience by whatever. But then they realize the real challenges of putting this into production: they need to redefine the process, they need to be compliant. Also, there are a lot of concerns, especially here in Europe, about regulation and so on, and all these things come to the table, and they have to redesign what the strategy is going to be and how it is going to be implemented. So I think they over-promise, with a lot of expectations, and then in the real world they realize that this is also super complex. So I think this is one of the biggest challenges: again, people, because not so many people are aware of the latest technologies, the latest services and so on; and the other aspect is defining the process from beginning to end, and accomplishing some things that are a must. I mean, how are you going to be AI-first if you don't have in place a data foundation, your data lake, if your data isn't accurate, clean, on time and well governed?
So these are the kinds of good practices you should have in place before embarking on an AI journey.
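The gap Eduardo describes between a PoC and production can be illustrated with a small, hypothetical sketch: the hard part is not the happy-path API call but the remaining 10%, the retries, timeouts and fallbacks. `robust_call` and `flaky_llm` below are illustrative placeholders, not any real provider's SDK:

```python
import time

def robust_call(prompt, call, retries=3, base_delay=0.0,
                fallback="Sorry, something went wrong. Please try again later."):
    """Wrap a flaky model call with retries and a graceful fallback."""
    for attempt in range(retries):
        try:
            return call(prompt)                       # happy path: the PoC ends here
        except Exception:
            time.sleep(base_delay * (2 ** attempt))   # backoff (0s here, for the demo)
    return fallback                                   # the "remaining 10%": degrade, don't crash

# A placeholder model call that fails twice before succeeding,
# simulating transient timeouts or rate limits.
attempts = {"n": 0}
def flaky_llm(prompt):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated transient failure")
    return f"answer to: {prompt}"
```

With this wrapper, `robust_call("hi", flaky_llm)` survives the two simulated failures, while a call that never succeeds returns the fallback message instead of surfacing an error to the end user.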
Moving into the field creates an inherent lack of talent
Harel Boren: Thank you very much for this. You know, the topic of talent makes me a bit wistful for those good old days you mentioned at the beginning, when data scientists would have to gather their data, put it in place, clean it up, write their own model, hyperparameter-tune it and whatever else, and get to the performance levels required. And I think there is a little bit of laziness flowing into the field: okay, let's see what the next Gemma is, or the next thing that Cohere will put out, or whatever, and we don't have to do anything. At the end of the day this is kind of a maiming of the industry, because it limits creativity. If you have to sit down and write the model for your own company's data, and you have a clear objective, your chances of doing a good job are very high, I think. So yeah, that resonates very well with what you said, and I think it's a little bit of a sad situation; something should change there. But not everyone has the ability to train a huge LLM internally. Making a note on the lack of talent there, too: the fact that so many enterprises are moving into the field creates an inherent lack of talent. And when we speak with people, we find that they don't even know how to approach the issue. What team should I build? How many MLOps people, how many DevOps, SecOps, FinOps people? Who is leading it? How do they interface internally with the other customers in the company? Yeah, that resonates very much.
Given your active presence on LinkedIn and engagement with the AI community
So look, given your active presence on LinkedIn and your engagement with the AI community, how do you perceive the role of social platforms in shaping the discourse around AI development and deployment practices? And that's a question for you; I don't have any position here.
Eduardo Ordax: Well, I don't have any position either, because you have both, to be honest. In my personal opinion, if you find the right people there on social media, and I'm talking in general, about Reddit, LinkedIn, Twitter, there are a lot of pretty nice people, and I've learned a lot from them. But I've also met a lot of, you know, false sellers, people pretending to be experts who don't know at all what they are talking about.
Harel Boren: So the filter is salvation. Yeah.
Eduardo Ordax: Correct, correct. So the nice thing about social media, and I think that's why it's super good, and it's happening to me as well: I don't have time to go into too many details every day, and people are talking about new stuff, new models, new services. So for me it's super easy to go on social media, and the same process I follow to explain things to others, many others have done for me on the other side. I can read a post that's already digested and say, okay, MCP is this, it can work with this use case, it's good for these specific purposes and so on. So in that aspect social media is great in terms of learning, but also in terms of, you know, being in touch with the community. I've met so many people around the world; right now I can say that I have very good friends that I've met just because of AI, because I've been, you know, active on LinkedIn. I travel a lot, I travel around the world: two weeks ago I was in Dubai, last week I was in Porto, next week I'm traveling again. And in all the places I travel to, it's funny, but I actually meet people there, and I spend time talking to others and so on. So it's like everyone is getting closer, and I think that's one of the main benefits of social networks. That's why I love it. But at the same time you need to find the right people, and you need to differentiate, because of course there are many people pretending, and sometimes you are not even able to differentiate what is real versus what is not. Something I don't like about social networks right now is all the bots or agents commenting on posts and so on. And it's very easy to tell: I go through my posts and I start seeing comments, and it's like, hey, you have not written this. To be honest, it sounds, you know, super robotic, very formal. It's like, hey, this is not a real person.
And I think we are doing something wrong, because social media, social networks, are meant to have real people talking together, not bots talking to other bots, because they are not adding any value. It's not about likes. I spend a lot of time answering comments from people writing to me, and it takes a lot of time, but I try to be authentic. I try to engage with people because I really like it. And from time to time I receive messages from people like, hey, try my platform, I'm going to put some agents there to answer all your comments, to engage with your audience. And it's like, hey, but it's not going to be me, and I don't want that. Maybe I don't care if I don't grow in number of followers, whatever; I'm not there just to have followers, I'm there to, you know, learn from others. So this is something I don't like about social media, and I think we are going to break it if we, or if some people, continue doing the same thing. And again, I don't like it at all. So that's okay, I had to say it.
Harel Boren: Yeah, you know, we'll probably have to implement some AI in order to cut away the AI-generated content, and maybe make the social really social: between people, and not between people and bots.
Eduardo Ordax: Correct.
Harel Boren: Yeah, I like this very much.
Why do you think infrastructure friction is still so common in AI today
you know you once post posted a post that into a different direction I’m taking our discussion now. you once posted a post that training huge models isn’t really about data or GPUs it’s about CUDA drivers ssh and hoping and hoping that the cluster won’t crash. Now why do you think this kind of infrastructure friction is still so common in AI today? You know where since 2012 the revolution began, we’re 2025. Why is it.
Eduardo Ordax: Yeah, well, to be honest, this is also based on so many discussions with people, mainly, you know, startups and research scientists who are training models, and they always tell you the same story. Right now, with training LLMs, I remember when OpenAI released GPT they were quite ahead of the others, but at the end of the day everyone is getting to the same point. As long as they have the data, as long as they have the compute, everyone is using more or less the same ingredients: it's transformers, right now it's inference-time computing. So more or less everyone is using the same ingredients and getting the same results. But we are talking mainly about research AI scientists, so they are very good at these specific topics, but they are not good at maintaining the cluster and so on. And because we are trying to train bigger and bigger models, it becomes super challenging to make sure nothing is going to crash. Again, this is based on many conversations where they tell you the same story over and over: hey, a CUDA error because of some limit or whatever, and it's super annoying. And it's funny, because everyone tells you the same story. So it's kind of a joke as well, because training a model is not simple stuff. To be honest, I really admire the people who are there in the field doing this, but it's still quite challenging, and that's why many providers are offering managed versions of clusters to make this training much faster and easier. If a GPU fails, you can automatically restore from the checkpoint, replace the instance and continue the training job. I think this is something that is going to keep happening as we keep increasing the size of the models. Of course GPUs are going to improve as well, but it's still very challenging.
Harel Boren: Yeah, I perfectly agree. It seems to me the world is waiting for some sort of step-function revolution like the one we experienced between DOS and Windows. It was command line, command line, command line, and then when Windows 3.1 in particular came on the field, it was like, oh gosh, everything got so much easier. And later, when Plug and Play came along, you didn't have to think about what card you were inserting. So the field is kind of waiting for a step function of that sort, together with a Plug and Play type of experience. Hopefully someone will deliver it. Wink, wink. You know, we're seeing fast-moving AI projects get stuck, not because of the model or the idea, but because the infrastructure couldn't scale or couldn't stabilize in time. Have you seen teams with great ideas lose momentum just because of this, just because of their inability to stabilize and scale? And if you have, and I have a sense that you have, what do you believe is the remedy?
Eduardo Ordax: Yeah, it’s a good point, It’s a good point and I think it’s happening very often because you know like as we were saying before, you need to move super fast in this field. I don’t think it’s about you know like having good ideas or even having good use cases is about being first to deliver. And one thing is how you run this in, during experimentation phase. And a different story is when you run at the scale right where you need to ah, hundred of thousands of customers to make sure, even let’s say one simple example, you are deploying Chatbot to save your customers. Everything is working well and then you realize that so many customers they are going to be attacking to the same model and you don’t have bandwidth, it’s going to get errors all the time. So you need to dimensionate your systems to make sure that it’s going to be scalable. So I think it requires a lot of testing but also a lot of testing thinking about real ah case scenarios that is still people trying to measure success in very reduced scopes. So we are developing an assistant and only 1% of my employees, they are using it. Right. And I always say the same thing, and that’s why cloud, it’s also very important to AI because as soon as models get more complicated and you need to have the scalability, you need to have elasticity so you need to dimension your services to make sure you are going to be able to respond to such a huge demand. right. And also it’s about trying to measure success so people or companies still fail in the way to how to measure the success of the different use cases so they don’t face many of the challenges that maybe they are going to have in the future. Right. Because still very new technologies, they are still not super mature and they’re facing this problem over
Eduardo Ordax: and over again. Right. So I think the same and that was probably something similar to traditional ML where you were deploying I don’t know, just a model to do fraud detection. At some point of time you realize like hey, you cannot serve just one single endpoint into this specific instance and you need to deploy multiple endpoints to so many different people that they’re going to reach your application from many different places and you need to have this elasticity and so on. So it’s not the same when you deploy it in your unit testing environment than when you deploy it in your real environment. And I think it’s not nothing new. Again it’s happening with AI, it’s happening with traditional ML, but it’s happening with many other systems.
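The overload scenario Eduardo describes (many clients hitting one model endpoint and getting errors) is usually handled on two fronts: the provider scales out, and the client retries with exponential backoff and jitter. Here is a minimal client-side sketch; `flaky_endpoint` and its throttling behavior are hypothetical stand-ins for a real model-serving API, not any particular provider's client.

```python
import random
import time

def call_with_backoff(endpoint, payload, max_retries=5, base_delay=0.01):
    """Call a possibly overloaded endpoint, retrying on throttling errors.

    `endpoint` is any callable that raises RuntimeError when throttled;
    a stand-in for a real model-serving client.
    """
    for attempt in range(max_retries + 1):
        try:
            return endpoint(payload)
        except RuntimeError:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            # Exponential backoff with jitter spreads the retries out,
            # so clients don't all hammer the endpoint again at once.
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# Simulate an endpoint that is overloaded for its first three calls.
calls = {"n": 0}
def flaky_endpoint(payload):
    calls["n"] += 1
    if calls["n"] <= 3:
        raise RuntimeError("429 Too Many Requests")  # throttled
    return {"answer": f"echo: {payload}"}

result = call_with_backoff(flaky_endpoint, "hello")
print(result["answer"])  # echo: hello
```

Backoff only buys time, of course; if demand is sustained, the service itself still has to be dimensioned with enough replicas, which is the elasticity point made above.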
Harel Boren: Yeah, and maybe also, as a very small extension of what you said, the addition of tools that simplify testing so it can be done by people who aren't necessarily data scientists, like the project manager. Open-source, no-code solutions exist, but now you're dealing not with open source but with your own company's model, so how do you enable its refinement by non-professionals? But yes, I think this is a process in motion, and I perfectly agree with your conclusion. On an interesting note that connects with something I said at the beginning: I really fell in love, as I think at least 5,000 other people did, with your very funny but so true post that it's all about the data, and as soon as it's washed as GenAI, then it's all beautiful. It becomes the same data, but with a unicorn, and colorful.
What do you think about AGI? I don’t know
And for some weird reason I can't explain, it led me to seek your thoughts about another big, big thing, and that is the dreaded AGI. That's my question to you: what do you think about AGI? Frankly, I don't know how I got from that post to this, but it felt like when things are washed in a certain way, they become something else. And I was very interested to hear your thoughts about that.
Eduardo Ordax: I have opposite feelings here, even within myself. Part of me is telling me, you know, AGI is going to come at some point. A different part of me is telling me, hey, no, this is not possible, at least in the short term. The reality is that right now, in many different aspects, LLMs are much better than us. That's a reality, and we cannot just dismiss them as predictive token generators, even if that's what they are, because they are doing it pretty well. I cannot read hundreds of different documents and make a summary in a couple of seconds. So in many ways LLMs are better than us, and we don't need to call it AGI, but in some manner it's a different level of intelligence. That's one side. On the other side, LLMs are still failing at very simple things, especially in their capability to generalize. A two-year-old kid who has seen a dog and a cat will find it super easy, all the time, to tell which is a dog and which is a cat. That's not happening for AI: it needs to see hundreds or thousands of different images, and even then it sometimes fails. So we cannot compare; they are still different. But something LLMs also do very well is mimic how we behave. We talk about reasoning models. Are these models really reasoning? Well, maybe not, but they mimic how we reason, and sometimes just mimicking the process is more than enough. What I used to say is that they are not going to achieve AGI, but they are going to mimic AGI, and for our purposes that's actually the same thing. What is the difference? We are probably going to see much better models, for sure.
We will evolve from transformers into something more efficient, something that works in a much better way, and I think that is going to come. To be honest, we forget things so easily. If any one of us had seen three years ago what these models are capable of, we would be shocked. We used to say, nah, it's not even able to tell whether 9.9 is bigger than 9.11, and it's funny because you still see people stuck in those discussions. But we get used to new stuff very easily, and it's simply amazing. I started doing data science with the typical Titanic example. If you compare what you had ten years ago with the Titanic example, or not even ten years, five years ago, four years ago, with what you have today, it's simply amazing. And maybe at some point, I don't know if we should call it AGI or a different kind of intelligence, we are still missing something, because the problem is how we define intelligence. For me, intelligence is also emotional intelligence, common sense, having feelings, many different aspects. But at some point we will probably be there. I don't know if it's going to be three years, four years, five years, ten years, but at some point we will be there.
Harel Boren: I side very much with your conclusion, and I'll tell you, I'm coming from a different direction. Each of us is born with a machine we don't really know the workings of, but we understand that neurons are there and that they work in very complicated ways, probably not the way they work in actual computers. But at the end of the day, you have a slightly Spanish accent and I have a slightly Hebrew accent, we follow cultural habits, and so on. We have feelings, underlying feelings, and everything we know and everything we behave by comes into our own neural networks by imitation of things around us. So we have very sophisticated loss functions, very fast evaluation processes, and very immediate deployment, rerun, and improvement of our models. But at the end of the day, this is what exists within us. So it's kind of going toward the end rather than climbing the mountain. As you said very correctly, from the Titanic example, which is still on Kaggle, to today, it's like the world has revolved twenty times, but it will continue going up and up and up. And then, in the Turing Test sense of it, we will not be able to discriminate, or define at the experimental level, what is an AGI and what is a human being. So I agree: mixed feelings, mixed feelings. You know, because sometimes it all sounds like science fiction.
Where can our audience follow your work
And I think we're very lucky to be living in such an era, and very lucky to have people like you explaining this world for us on an ongoing basis. I won't keep you longer, and I really thank you for your time here and for sharing your thoughts with us, Eduardo. I'm also looking forward to a beer together sometime, somewhere around this world. Where can our audience follow your work? Social media, blogs, any recent publication, anything upcoming, any collaborations, any initiatives, anything you want to share with us?
Eduardo Ordax: Well, I'm traveling most of the time, so you will probably be able to see me somewhere around the world. Next week I'm heading to the summit in Madrid, then to Lisbon, and the week after I'll be at a conference in Amsterdam. But yeah, they can reach out to me on LinkedIn. I used to publish on Medium, though to be honest not very often lately, because I don't have much time. But again, I'm super accessible. I always say the same thing: I try to reply 99% of the time; sometimes I miss a message. But they can reach out to me on LinkedIn, and I'm more than happy to be in touch. And again, it was such a pleasure to be here with you today. Probably we'll see each other at some point, somewhere in the world.
Harel Boren: Wonderful. I thank you once again, Eduardo. I wish you a lot of success in your, actually, sacred mission, which I have enjoyed following for a very long time. I wish you a lot of luck and health, and thank you very much for lending your time to us and to our audience today. Thank you very much. Have a great day, a great week, actually, a great weekend.
Eduardo Ordax: It was a pleasure. It was a real pleasure.
Harel Boren: Thank you very much.