You saved hundreds of hours of manual processes for forecasting game view counts using Domo's automated dataflow engine.


Free Webinar—Less Copy, More Clever: Plug, Play, and Slay with Domo + AWS
speaker Welcome to our webinar today. We're so happy to have you here.
speaker This is a joint webinar between Domo and AWS.
speaker We've titled it Less Copy, More Clever, and the goal is really to talk about applying AI to real-world scenarios — moving you from tedious work into a world where you're automating processes and leveraging AI in ways that drive serious value.
speaker Before we hop into the content, I'd love to give you some quick intros to who you're going to see over the next 45 minutes or so.
speaker My name is Cody. I run AI adoption here at Domo.
speaker I have with me Nick, Tamara, and Aman.
speaker They're part of the solutions and partner solutions teams at AWS.
speaker They are incredible partners, and we're super lucky to have them here with us today.
speaker Our agenda for the next 45 minutes is going to focus heavily on why AI — why apply it, why do you need it, what's happening out there, and what we're hearing as we talk to companies.
speaker From there, we're going to talk about technology — the innovations happening in the technology space.
speaker Then we're going to wrap things up with an actual tangible example, one that we hope applies to where you work or can be extended to what you do, and we'll talk about what's next — how you can take advantage of that solution and start driving serious value with AI.
speaker Before we get into the technology and the solution, I did want to level set.
speaker We've talked to a lot of companies over recent months about what's happening, what they're feeling, what they're sensing, and I think everyone has this term on their minds: AI transformation.
speaker Many organizations are feeling competitive pressure — this need to do something to not be disrupted, to create better efficiencies, to drive some change in how they operate.
speaker And it's a big topic, a big term. This new AI capability is quite different from what we had historically. But when it comes to AI transformation, what does that really mean? What does it mean to transform?
speaker A lot of companies and a lot of individuals are viewing this as dramatic innovation.
speaker They see AI, and it feels like something that's animated in what it can do.
speaker I'm sure many of you have gone out to ChatGPT or Gemini or Anthropic's Claude and asked things like, hey, write me a rap, or create this picture for me, and it does it and it's fantastic.
speaker That's a little new.
speaker Historically, AI has felt like something that's pretty hard to unlock — a little more complicated, a little more expensive, a little more difficult.
speaker We've had a series of these AI hype cycles, and AI winters, where we realize it's a little more complicated than we think it is.
speaker What's different this time is that AI is more consumer focused. We're actually seeing it, we can touch it, and it feels very innovative — and because of that, we feel like we need to use it for innovation.
speaker What we're finding is that that statement is sometimes true for some companies. For those that are a little more progressive in how they think about things, a focus on dramatic innovation can make sense.
speaker The reality for many companies, though — and this is a study that came out of MIT back in July; I'm sure many of you saw headlines about it — is that there's been massive investment in AI, 30 to 40 billion dollars, but most organizations are reporting essentially no return on it.
speaker We feel that some of that is tied to this focus on AI as innovation.
speaker What we're finding in reality is that we need to step back into what's really happening in organizations and focus on what the need really is. Is the need really innovation, or is it something different?
speaker If you think about it, organizations really exist to organize people, process, and technology to efficiently solve a problem. That's why they exist. And I realize I'm going back to fundamentals here, but AI is forcing us to do those legacy things better.
speaker Whenever you have dramatic innovation, dramatic change, it requires you to make sure the first things are really tackled and taken care of.
speaker The reality is that when the people, the process, or the technology — in this case, AI being a portion of that — become more important than the problem, the likelihood of failure increases, because we're not focused on the need, that original organizational foundation; we're focused on something that's supposed to help us get there.
speaker What we're finding is that, almost always, the truth is that AI transformation equals efficiency.
speaker And we encourage you, as you're thinking about your own AI journey, to focus on efficiency: what can it unlock, and how can it make things a little bit easier for your organization?
speaker One example I love — and this comes from people much smarter than me — is from James Clear's Atomic Habits. He talks about the British cycling team and their evolution, how they became a world-class team through a focus on marginal gains.
speaker It wasn't a focus on dramatic innovation. It wasn't a focus on, hey, overnight we're going to change and we're going to be the best. They focused on small changes, and part of that was learning how to change — how can we improve?
speaker And if they did small things — got a 1% improvement here, a 1% improvement there — the culmination of those improvements drove significant change for them.
speaker Our recommendation for companies thinking about where to start, where to focus, and how to invest is to apply that principle: think about efficiency, focus on the foundational things the company is trying to solve for, and start most of the investment on implementation — doing what they're currently doing, just a little bit better, finding those enhancements.
speaker The goal there is direct, tangible return, but it's also learning how to use AI — learning how to improve how they think and how they operate, and embracing that change.
speaker There should be some investment also focused on improvement — revisiting processes, changing how they've historically operated — and there should definitely be someone focused on innovation, someone thinking through what new things they can do because of this.
speaker What we're seeing generally is that as companies think this way — as they think about implementation and start there, putting most of their focus in that area — their people get better with AI, they understand it better, they can apply it more effectively and efficiently, and there are direct, immediate returns on those investments.
speaker One example I love to use as a thought experiment for this theme of implementation comes from one of our product managers, Ken Boyer.
speaker Ken loves asking this question, and I may ask it of you as well: what would you do today if you had a thousand interns — not just interns, the best interns, really good interns?
speaker This is a great place to focus on those efficiencies and on implementation: finding the things that historically felt a little tedious, a little hard, a little frustrating — things you didn't love doing, because you knew your brainpower and your effort should be focused somewhere else.
speaker That's a great place for AI. Those kinds of scenarios are a great place to apply AI and really drive significant improvement to the business over time.
speaker With that, I'm going to pass this over to Nick to walk deeper into what's changed in the market. Obviously this has been a little more at the foundational level — where you should focus, how you should think about it. Nick's going to go deeper into what's changed, why this is now a reality, and what this new paradigm is. Because there is definitely a new paradigm — the market has shifted, and it's not a small shift either. We're anticipating this being a pretty big, impactful one, and you have a reason to be thinking about what to do here and how to transform.
speaker So with that, I'll pass over to Nick.
speaker Nick is going to double click into what's happening with agentic AI — what does that mean, and how can it have an impact for you and for your organization?
speaker Thank you, Cody, and to everybody joining us on the webinar today — thank you and welcome. My name is Nick Simha, and I manage our solution architecture team working with data analytics and GenAI customers here at AWS. In the next 10 to 12 minutes, I want to walk you through what agentic AI is and the reason for the excitement behind it.
speaker I want to start with this quote from our CEO Matt Garman about our vision for agentic AI: we believe there will be billions of agents that will work alongside us to augment our capabilities, accelerate innovation, and improve our productivity.
speaker We believe that we'll see this in every industry and in every setting, whether it is consumer, enterprise or industrial.
speaker So at AWS we really believe that AI agents will be truly transformative in how we build, deploy, and interact with the world.
speaker So what exactly is agentic AI, and how is it different from the way we have traditionally built software? A system is agentic if it is autonomous, meaning the software has the freedom to make decisions — you haven't told it exactly what to do when you coded it; it can act based on the context.
speaker We'll see a concrete example of this. But how did we get here? AI itself is not new, and even last year we were hearing a lot about GenAI. So is that gone, and is agentic the future? Let's walk through it a little bit.
speaker AI itself, as we know, is not new. AI actually started in the sixties, and we at Amazon have been using AI a lot across all our businesses.
speaker If you have gone to our amazon.com site, the product recommendations you see are an AI application. It's based on machine learning, and we have had that for a very long time now.
speaker Traditional AI was really built to solve singular problems, and it worked on fairly small sets of data.
speaker Now, as our computing power increased and as the data we had to train on also increased, we were able to build what are called large language models, or foundation models.
speaker These large language models have the ability to understand information and create content. They're more general purpose in nature.
speaker So when you go — and I'm sure all of us have used ChatGPT or Claude or something else like that — you can chat with it, and it has knowledge of a lot of the world around it. It can create new content for you and also answer questions based on what it has been trained on.
speaker Agentic AI actually builds on this. Agentic AI uses LLMs, but it combines them with other things to help mimic human logic — to plan, reason, and act — and perform complex tasks on your behalf.
speaker So we'll see some examples of this and we'll make this a little bit more concrete.
speaker So here's an example of how you would do a very simple workflow. This is some pseudocode; we have all written something very similar to this.
speaker In this case, we want to route an email based on the subject that's coming in. An invoice goes one place, and an HR-related item goes somewhere else.
speaker You'll see that there's also a business rule embedded in this: if the amount is greater than a thousand dollars, we want to send it to a manager for approval.
speaker So this is the way we've done things for a very long time.
speaker Now, if you look at the agentic approach, it's a lot simpler. We basically give it a task and some context, and everything happens by magic.
speaker The agentic approach is almost like telling an intern or an employee what to do and assuming they have the knowledge to do the job without a lot of step-by-step instruction.
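To make the contrast concrete, here is a minimal Python sketch of the two approaches Nick describes. The routing rules, queue names, and the call_llm helper are illustrative assumptions, not the code from the slide.

    def route_email(subject: str, amount: float) -> str:
        """Traditional, hard-coded workflow: every path and threshold is spelled out."""
        if "invoice" in subject.lower():
            if amount > 1000:
                return "manager-approval-queue"  # business rule baked into the code
            return "accounts-payable"
        if "benefits" in subject.lower() or "payroll" in subject.lower():
            return "hr-queue"
        return "general-inbox"

    AGENT_TASK = (
        "You route incoming emails. Invoices go to accounts payable; any invoice "
        "over $1,000 needs manager approval; HR topics go to the HR queue. "
        "Reply with only the destination queue name."
    )

    def route_email_agentic(email_text: str, call_llm) -> str:
        """Agentic approach: describe the task and context, let the model decide the path."""
        # call_llm stands in for any chat-completion function (for example, a Bedrock call).
        return call_llm(system=AGENT_TASK, user=email_text).strip()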
speaker So let's dive into li dive into the differences a little bit more.
speaker So with agent workflow, the the, the path it takes is adaptive. It is not pre-coded by you, and also the data it can take is flexible.
speaker So let's say you don't have to say it has to take exactly an inte or a float. It could be text, it could be float, it could even be a scanned image.
speaker And the decision points are driven by the context. It is not, again, like hardcoded in the application.
speaker So the role of the AI out here, unlike in the traditional places, not to solve a specific problem, but the AI out AI in the age case plans does the research needed, the reasons acts and per and reflects on, on what it has done.
speaker And it it's a cycle. It it does this in a cycle to complete the task that you're given it.
speaker So an agent tech system understands the, understands the environments and takes action to perform a task for you.
speaker But how does it actually do it? There are four pieces to an agentic system.
speaker The main one is the LLM, the foundation model. You can think of it as the brain behind the system.
speaker Now, LLMs have been trained on a vast amount of data, but it's point-in-time data. So they would not know what happened today in the news or what the weather is right now, and they would not know about your company's data either.
speaker For example, we saw in the previous example that there was a business rule — above a thousand dollars it needs a human approval step. That information is not in the LLM. You have to feed that information in when you're building an application, and you do that through a knowledge base.
speaker The second thing you need in these systems is a way to fetch new information or perform an action. An example would be: get me the weather right now, or make a change in my Salesforce system.
speaker To do this, you need to access APIs. That's where tools come in. You can think of tools as APIs that the LLM has access to.
speaker And finally, you have memory. Memory helps an agent remember the context and personalize the experience for the user.
speaker So these four things — the LLM, data or knowledge base, tools, and memory — help you build an agentic system.
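As a rough illustration of how those four pieces fit together, here is a framework-agnostic Python sketch. The Agent class, its field names, and the single tool-call loop are invented for illustration; real agent frameworks such as Strands, LangChain, or CrewAI structure this differently.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Agent:
        llm: Callable[[str], str]                 # the "brain": any chat-model call
        knowledge_base: dict[str, str]            # private facts the model was not trained on
        tools: dict[str, Callable[[str], str]]    # APIs the model may invoke (weather, CRM, ...)
        memory: list[str] = field(default_factory=list)  # running context across turns

        def run(self, task: str) -> str:
            # Ground the task with retrieved knowledge and prior context.
            context = "\n".join(self.memory + list(self.knowledge_base.values()))
            prompt = f"Task: {task}\nContext:\n{context}\nAvailable tools: {list(self.tools)}"
            plan = self.llm(prompt)
            # A real agent would loop: parse the plan, call tools, reflect, repeat.
            if plan.startswith("CALL "):
                tool_name, _, arg = plan[5:].partition(" ")
                result = self.tools[tool_name](arg)
                plan = self.llm(f"{prompt}\nTool result: {result}\nNow answer the task.")
            self.memory.append(f"{task} -> {plan}")
            return plan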
speaker So is this the right time to build it? Why now? I have been working in the industry for over 30 years, and I've never seen the pace of change that I've seen with AI in the last two or three years.
speaker The first thing is the improvement happening with the models. Models are becoming larger, they're becoming more accurate, and they're also becoming multimodal.
speaker So it's not only text — you can feed it video, audio, scanned images, and the model is able to understand and act on it.
speaker The other thing is that the models are also getting optimized for agents. Agentic AI is becoming a primary use case for these LLMs, so the models are getting optimized for that.
speaker We are also seeing specialized domain models, whether for legal or for healthcare, coming up as well. Together, these powerful models are making agentic AI systems even more powerful and impactful.
speaker The second is tooling. At the end of the day, this is software that you have to run, and you want to run it securely, at scale, and so forth. So there is a whole host of tooling coming up from AWS, but also from the larger ecosystem, so you can build, run, and scale these systems with confidence.
speaker Finally, there are frameworks for developers that make it really easy to build these applications. At AWS we provide a framework called Strands, but there are other open-source frameworks we support as well, like LangChain, CrewAI, and so forth.
speaker Together, these three things — models, tooling, and frameworks — are helping accelerate the development of agentic AI solutions.
speaker This is a study from Gartner predicting that 33% of enterprise software applications will include agentic AI by 2028, up from less than 1%.
speaker That is exponential growth, but from what I've seen with our customers, it's underestimating the number of companies adopting agentic AI today. In the segment I work with — software companies — this number is actually closer to a hundred percent.
speaker Gartner also predicts that about 15% of day-to-day work decisions will be made autonomously by agentic AI by 2028.
speaker These are really exciting times, and these are some of the use cases we are starting to see at AWS when we talk to our customers across every vertical: supply chain optimization,
speaker a real rethinking of the contact center experience in multiple verticals, energy trading, and workflow automation.
speaker Really, every part of the enterprise workflow is being reimagined with agentic AI today.
speaker Our vision at AWS is to be the best place to build and deploy the world's most trusted and useful agents.
speaker This is what we did with the cloud: give developers the services and tooling they need to build cloud applications at scale and deploy them securely with confidence.
speaker We want to do the same for agents as well.
speaker To do that, we want to provide you with prebuilt agents, the tooling you need, the choice of models based on your use case, infrastructure that is secure and scalable, and the expertise to help you build these applications.
speaker To talk more about what AWS provides in terms of services, I wanna now turn this over to my colleague Tamara, who will go deeper on this.
speaker Thank you again for your attendance and your interest.
speaker Now I'll turn this over to you, Tamara.
speaker Hello everyone, and thank you for taking the time to join this session.
speaker I'm excited about today's presentation and the opportunity to collaborate with Domo.
speaker I'll present real-world generative AI use cases and the AWS generative AI stack, focusing on Amazon Bedrock as the optimal solution for building and scaling generative AI applications.
speaker Let me walk you through some of the most impactful use cases for generative AI across different business areas.
speaker To boost employee productivity, generative AI helps turn company knowledge and communications into actions.
speaker To improve business operations, generative AI automates information workflows and streamlines document processing.
speaker For creativity, generative AI enables the creation of text, images, and videos for various business needs.
speaker To enhance customer experience, generative AI powers intelligent customer support systems and 24/7 chatbots that can handle customer questions continuously.
speaker Generative AI is delivering real business value across these four critical areas. To turn these possibilities into reality, you need the right tools.
speaker Next, let's look at how the AWS generative AI technology stack brings these solutions to life.
speaker The three-layer AWS generative AI stack is designed to be both comprehensive and flexible.
speaker Starting from the bottom, we have the foundational infrastructure.
speaker This includes Amazon SageMaker AI for managed infrastructure and AI-optimized Trainium and Inferentia instances, plus high-performance GPUs for deep learning.
speaker In the middle layer, we provide models and tools to build generative AI applications.
speaker For example, Amazon Bedrock offers both AWS and partner foundation models with built-in capabilities such as guardrails for safety, agents for task automation, and customization capabilities to fit your use cases.
speaker At the top layer, we have ready-to-use applications that boost productivity, such as Amazon Q Business, a generative AI assistant that answers questions, and Amazon Q Developer, which streamlines software development.
speaker Each layer is designed to work seamlessly with the others, but you can start at any layer depending on your needs — building from scratch, using existing models, or implementing ready-made solutions.
speaker As you can see from our stack, Amazon Bedrock sits in the middle layer between infrastructure and applications.
speaker Amazon Bedrock is the easiest and fastest way to build and scale generative AI applications.
speaker Now let's explore Amazon Bedrock. Amazon Bedrock is structured in three key layers that work together.
speaker Starting from the bottom, we have inference at scale. Here you can scale seamlessly with on-demand or provisioned throughput, optimized performance, and global deployment.
speaker In the middle, you can choose from over a hundred leading foundation models from Amazon, Anthropic, Meta, and others, and Bedrock provides built-in evaluation tools to help you choose the right model for your needs.
speaker At the top, you can customize models with your own data, build workflows with agents and flows, and develop in your preferred development environment.
speaker Bedrock is also compatible with LangChain, LangGraph, and other popular open-source frameworks, giving you more flexibility.
speaker To protect your data and ensure compliance, Bedrock includes enterprise-grade security and responsible AI features — from VPC, PrivateLink, and encryption to automated reasoning checks and guardrails — plus regulatory compliance including GDPR, SOC, and HIPAA.
speaker All these features give you everything you need to build generative AI applications with confidence.
speaker As I said, Bedrock offers our own foundation models, called Amazon Nova.
speaker Now let me hand it over to Aman to discuss Nova and what makes these models unique.
speaker Aman, please — it's your time now. Thank you, Tamara.
speaker I'm excited to dive into Amazon Nova Foundation models and share how we are delivering cutting edge AI capabilities while maintaining the highest standards of data privacy and security.
speaker Today I want to introduce you to Amazon Nova, our state of the art foundation models that are redefining what's possible with ai.
speaker We have built these models with one core principle: delivering frontier intelligence with industry-leading price performance.
speaker Amazon Nova consists of two main categories of models each designed for specific use cases.
speaker First, our understanding models. These accept text image and video inputs and generate intelligent text outputs.
speaker Second are creative content generation models. These take text and image inputs to generate stunning images or videos.
speaker Let me walk you through our understanding models, which form a spectrum from ultra fast and cost effective to highly sophisticated.
speaker Amazon Nova Micro is a text-only powerhouse. When you need lightning-fast responses at the lowest possible cost, Micro delivers. It's perfect for high-volume text processing where speed and efficiency are paramount.
speaker Amazon Nova Lite steps up the game as a multimodal model. It processes images, videos, and text at incredible speed while maintaining very low cost. Think of it as your go-to model for real-time multimodal applications.
speaker Amazon Nova Pro represents the sweet spot — our most balanced model, offering the optimal combination of accuracy, speed, and cost. For enterprise applications requiring sophisticated multimodal understanding, Pro delivers exceptional results across a wide range of tasks.
speaker On the creative side, we have two groundbreaking models. Amazon Nova Canvas is a state-of-the-art image generation model that creates stunning visuals from text descriptions. Amazon Nova Reel is a cutting-edge video generation model that brings your ideas to life in motion.
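For readers who want to try a Nova model, a minimal sketch of calling one through the Amazon Bedrock Converse API with boto3 might look like the following. The model ID, region, and prompt are assumptions — check which Nova model IDs (or inference profiles) are enabled in your own account.

    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock.converse(
        modelId="amazon.nova-lite-v1:0",  # assumed ID; Micro and Pro variants also exist
        messages=[{
            "role": "user",
            "content": [{"text": "Summarize this product listing in two sentences: ..."}],
        }],
        inferenceConfig={"maxTokens": 300, "temperature": 0.2},
    )

    print(response["output"]["message"]["content"][0]["text"])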
speaker Now, let's address what I know is top of mind for everyone here when it comes to generative AI: data privacy and security.
speaker At AWS, we have built Amazon Bedrock with a fundamental principle: your data is your data, period. Our approach rests on three unshakeable pillars. The first is privacy of your data.
speaker We share nothing, store nothing and keep everything local. Your customer data is never shared with foundation model providers.
speaker We don't store your prompt and response data within Amazon Bedrock. When it comes to security of your data, everything is encrypted everywhere.
speaker Data in transit uses TLS 1.2 as the absolute minimum, and it is a recommended best practice to use TLS 1.3.
speaker Any data we do store is encrypted at rest using AES-256 keys.
speaker And here's the key differentiator: customer data stored at rest can always be encrypted using encryption keys that you control.
speaker When it comes to monitoring, we believe in complete transparency. You can track every usage metric with Amazon CloudWatch, monitor all API activity and troubleshoot with AWS CloudTrail, and you can do all of this knowing that Amazon Bedrock meets over 20 compliance standards, including HIPAA, SOC, ISO, PCI, and FedRAMP Moderate and High.
speaker Here's the bottom line for your business: since we don't store your inference data or response data, we can't leak it, can't share it, can't use it, and can't look it up — and neither can you accidentally expose it later.
speaker This isn't just security, it's peace of mind that lets you innovate without compromise.
speaker With that foundation of trust, capabilities, and security in mind, I'm excited to hand things back to Tamara, who will show you how our strategic partnership with Domo amplifies these benefits for your organization.
speaker Thank you — Tamara, over to you. Thank you, Aman, for the excellent overview.
speaker Now I would like to highlight the AWS and Domo strategic partnership.
speaker Domo is an AWS Advanced ISV Partner with three AWS competencies — Data & Analytics, Machine Learning, and Retail — as well as the Amazon SageMaker Service Ready designation.
speaker This means that Domo has demonstrated deep AWS technical expertise and proven customer success across various industries, use cases, and workloads.
speaker Domo integrates seamlessly with AWS services and can be accessed through AWS Marketplace.
speaker With that, let me hand it over to Cody, who will demonstrate the powerful combination of Domo and AWS in action, showing how to transform data into generative AI-powered insights.
speaker Cody, please — it's your time now.
speaker Thank you so much, Tamara, for walking through that. It really is amazing what technology can do nowadays.
speaker It's fun to see these moments of disruption in the market and fast evolution.
speaker And I think it's fascinating what AWS is doing in this space.
speaker We're going to double click now into a more tangible example of how we can take the technology AWS provides and apply it to real scenarios.
speaker Before we get into that, I want to revisit that earlier diagram around people, process, and technology, but really apply AI to it —
speaker expressing how what you heard from Nick, Tamara, and Aman applies to what you're going to see.
speaker Again, organizations exist to organize people, process, and technology to solve a problem. That's why they're there.
speaker Agentic AI makes that a little bit easier. Agents can leverage context to automate processes and leverage tooling.
speaker And it really is fascinating. These are intended to be agents that work with us, that work on our behalf and make us more efficient and more effective.
speaker We've found that to really conceptualize what an agent is, it helps to think about it as a recipe: what are the components we put together to formulate an agent?
speaker Agents really need a few things to be effective. One, they need to know why they exist — what are they going after? This gets into the realm of context.
speaker That means us describing the goals and giving the AI instructions on how to think, who it is, how to operate, and what to do and not do. That fits more into the role of guardrails.
speaker They need knowledge — they need information. These LLMs, these foundation models, have a broad breadth of knowledge.
speaker They have a lot of foundational knowledge. It gets really interesting when we apply private data — your own information — as well.
speaker So in this case, we view knowledge primarily as private information that can be used in concert with the foundational information from those LLMs.
speaker And that knowledge will be both structured and unstructured — structured being columns and rows, unstructured being things like images, PDFs, audio, video, et cetera.
speaker Things that are still information, but not legacy analytical data. Agents also need tools — things they can do.
speaker These tools include things they can do with data directly, analytically. They also include other systems, and interfacing with those appropriately.
speaker Those things, combined with an LLM acting as an orchestrator, create a scenario where an agent can act on our behalf and really help us reason through and solve problems effectively.
speaker With that, I'm going to hop over to an actual example. We're going to get more tangible here and show you what one of these looks like — and hopefully, as we're walking through it, you'll think, that's something I could use for this scenario as well.
speaker For the scenario we're going to walk through, we're in our demo instance.
speaker Imagine that you are a category manager for a company that produces barbecues. So you produce barbecues, you sell barbecues.
speaker Imagine that you manage a category and that category is the actual barbecues themselves.
speaker We're going to go to the front end, start with what the end experience looks like, and then work back into how it operates.
speaker So this is our competitive intelligence dashboard. As a category manager, I'm required to go out and look at competitor products — identify how they're pricing the products, and identify how the products are being perceived.
speaker What are the ratings and reviews — are they positive, are they negative? What features are coming out? There's a variety of things I'm required to go out and see.
speaker Those things tend to be on webpages and in printed ads. They can even be in an actual store — looking at a product sitting there in a physical brick-and-mortar store.
speaker The process of doing that — back to the thousand-intern example — is fairly tedious. It's not super exciting, I don't love doing it, and in some ways it's just a copy-and-paste nightmare. It's not the best use of my time.
speaker Really, my time is more effectively spent going out and actually analyzing the data and using it to make decisions.
speaker To tackle something like this, we're going to hop over to a tool called file sets. Behind the scenes, it's Amazon Bedrock Knowledge Bases that we're using to power this.
speaker Our file sets really are files, they're just what they sound like.
speaker Let's go ahead and click into the first one, and you'll see as it loads that it is a barbecue — exactly what I said it would be.
speaker And you can see that, as human beings, we understand what this thing is. We can see a title, we see a price, we see reviews, and we can see features.
speaker These are things we intuitively know — things that are data to us, but that historically have been really hard to use as structured data, data we can use for analysis.
speaker So I want to take these and start using them in this process of analyzing and going deeper into what's there.
speaker Okay, so we have this barbecue, and again, what we want to do is go from this information — which as human beings we interpret pretty well — into something a little more analytically oriented.
speaker Back on our front end, you'll see that we have two other buttons here. One is for processing all images — basically everything I've put into this file set area.
speaker You'll see here that I have S3 as well. Using file sets, we have the ability to tap directly into file stores, so you don't have to bring these into Domo — we can just reference S3 directly and grab those files for processing.
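As a hypothetical illustration of what referencing S3 directly implies, the snippet below lists image files under a bucket and prefix so a downstream process can pick them up. Domo's file sets handle this for you; the bucket and prefix names here are made up.

    import boto3

    s3 = boto3.client("s3")

    paginator = s3.get_paginator("list_objects_v2")
    image_keys = []
    for page in paginator.paginate(Bucket="example-competitive-intel", Prefix="grill-ads/"):
        for obj in page.get("Contents", []):
            if obj["Key"].lower().endswith((".png", ".jpg", ".jpeg", ".pdf")):
                image_keys.append(obj["Key"])

    print(f"{len(image_keys)} files ready for processing")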
speaker We also have the ability to upload an image here as well. If we go through the upload, we get a form that allows us to upload it.
speaker So we have a lot of front-end elements for someone to get in there and interface with: what are the images, how do I get them in there, and how do I kick off a processing run?
speaker I've put a new one in there, so we have those touch points into the experience.
speaker Behind the scenes, what's happening is we're leveraging a tool called Workflows. And it really is what it sounds like: workflows, process automation.
speaker And back to the efficiency theme, we're finding that people really want to use AI for these kinds of operations — they want to take on the things that felt tedious.
speaker One awesome thing about Workflows is that it's very deterministic in nature, meaning it's a step-one, step-two, step-three kind of experience.
speaker But by putting AI in certain places, we can go probabilistic and flexible.
speaker We've found that the marriage between that deterministic framework and a probabilistic concept gives you control with a lot of creativity.
speaker So it's kind of a cool mix.
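A small Python sketch of that "deterministic frame, probabilistic step" idea, assuming hypothetical helper functions rather than the actual Domo Workflows API:

    def process_new_ad(image_bytes, extract_fields, append_to_dataset, send_email):
        # Step 1 (deterministic): validate the input.
        if not image_bytes:
            raise ValueError("empty file")
        # Step 2 (probabilistic): let the model read the ad and return structured fields,
        # e.g. {"brand": ..., "price": ..., "stars": ...}.
        fields = extract_fields(image_bytes)
        # Step 3 (deterministic): land the result and notify the category manager.
        append_to_dataset("competitor_products", fields)
        send_email(subject="New competitor product processed", body=str(fields))
        return fields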
speaker Now let's talk about the agents in here, because there are actually three of them. These two are pretty similar,
speaker so I'm just going to highlight one of them, not both, and I'll talk about the other one in a second.
speaker By clicking the agent, you'll see I can go into an actual edit experience. The things we talked about back in that recipe — the knowledge, the instructions up front, the goals, and the understanding — are part of the first step.
speaker That pertains to the UI here, and we also have tools and knowledge. So this first tab really is giving the agent the things it needs to reason appropriately.
speaker The prompt here is a session-based prompt — for this session, what do we want you to work on? In this case, I'm just telling it I want to extract this kind of information: product name, brand name, number of BTUs, et cetera.
speaker For those of you who have been involved in machine learning historically, this may feel a little too simplistic — just putting in "I want these things." Historically in machine learning, this has been referred to as OCR, optical character recognition.
speaker We're finding that these foundation models are generally pretty good at working with images and extracting certain things, even across images that differ.
speaker We could pull an image from Amazon and pull one from another retailer with a different layout, and there's a good chance the LLM could still reason about what means what in there.
speaker So the prompt is the session-level information.
speaker The instructions down below are more of the system prompt: who are you as an agent, what is your role, and — regardless of what's set up here — how do you think, how do you process this experience?
speaker These instructions tend to want certain things as part of how they operate. They want a personality: who are you, how do you think, how do you operate?
speaker They tend to need a goal: what are you going after in this scenario? In some cases, we've found it's effective and helpful to actually include steps — roughly how we typically do this process.
speaker The agent is given some flexibility to reason through how it wants to handle that — which actions are important, what it should do.
speaker Those are the to-dos, as well as the things it should not do — the not-to-dos.
speaker Often those to-dos and not-to-dos are the guardrails — you hear that term used pretty frequently in the market. Agents do need rules; they need areas where they're safe to play and safe to work.
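To make that structure concrete, here is an invented example of what such system instructions might look like for this demo's extraction agent — personality, goal, rough steps, to-dos, and not-to-dos. The wording is illustrative, not the agent's actual prompt.

    SYSTEM_INSTRUCTIONS = """
    Personality: You are a meticulous competitive-intelligence analyst for a grill manufacturer.

    Goal: Turn competitor product images into clean, structured records.

    Steps (roughly):
    1. Find the unprocessed files.
    2. Extract product name, brand, price, BTUs, review count, and star rating.
    3. Append one row per product to the competitor dataset.
    4. Email a short summary of what was processed.

    Do: flag any field you could not read instead of guessing.
    Do not: invent prices or ratings, process the same file twice, or email anyone
    outside the analytics distribution list.
    """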
speaker The tools are what they sound like, they're the things that the agent can use.
speaker In this case, we're using tools that are part of the Domo platform. This first one is an OCR tool.
speaker We're giving it a series of tools to append to datasets — to work with the actual data — the ability to search for files, and the ability to send email.
speaker As part of the instructions — we could go deeper into this — I want it to send an email once it's processed and give me some insights coming out of that process and experience.
speaker We're also giving it knowledge. Knowledge is that structured and unstructured data. In this case, I'm pulling from my file sets and pushing into a dataset — going from unstructured to structured.
speaker I'll mention as well that we have models up here at the top. We have the ability to use models — wonderful models like Nova — to do the actual reasoning as well as the generation, summarization, and tool calling.
speaker So these agents are incredibly flexible in how they operate. They're fairly simple to set up, but very, very powerful.
speaker One thing I will mention as well is a nice shortcut for working more effectively and efficiently. Building these can also feel tedious — handwriting a prompt is not always a quick, easy thing.
speaker I definitely recommend leveraging AI to create AI. That may feel a little Inception-oriented, but we have found that AI is pretty good at writing prompts.
speaker So it's not a bad idea to collaborate with one of these foundation LLMs to generate your instructions and to vet them out.
speaker It's also great as you're testing: if things are a little off, you can send that gap back to the AI and have it help you tune the experience.
speaker Okay, so with that in mind, let's really quickly go through how this agent works.
speaker As you can see here, I can test in that right-hand pane, and the agent will take the instructions and call one of our tools. It'll say, okay, from what I can tell, the first thing I need to do is find the files.
speaker One cool thing about how the AI operates is that it can reason and flex. It did find the right tool here, but the first time it ran — and we set this up on purpose — it did not find any files; it used that tool incorrectly.
speaker As you can see below, it then changed how it was interfacing with the tool and got back the right information the second time.
speaker We are purposely making the logs here quite verbose. So you can see like what's actually happening at the raw level.
speaker As we get deeper into this, you'll see that the AI has decided to grab a second tool. So this is the OCR tool.
speaker It's writing its own prompt based off what we put in our session prompt, and it's converting that image into text.
speaker And you'll see here — for those of you thinking "unstructured to structured," that's not quite what this is yet.
speaker What's happening here is that the AI has decided the best way to do this is to convert the image to a text block that includes all the information we need later in this flow.
speaker It then goes through and writes that information to a dataset.
speaker So it reasons against each of these, with the result being that we get nice structured data — columns and rows that include brand name, price, features, number of reviews, number of stars, et cetera.
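For illustration, one row of that structured output might be shaped like the record below; the field names mirror what Cody lists, but the exact schema and values in the demo dataset may differ.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class CompetitorProduct:
        brand: str
        product_name: str
        price_usd: float
        btus: Optional[int]
        features: List[str]
        review_count: int
        star_rating: float

    row = CompetitorProduct(
        brand="ExampleGrillCo",          # made-up values for illustration
        product_name="ProSear 4-Burner",
        price_usd=499.00,
        btus=48000,
        features=["4 burners", "side burner", "stainless lid"],
        review_count=1243,
        star_rating=4.4,
    )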
speaker So that's our process, and it can be easily extended. We've talked to a number of companies about where this could be used. Obviously, when it comes to things like competitive intelligence, it's interesting.
speaker We're also getting quite a few requests for invoice processing. If you have a process that requires taking invoices — PDFs or something else — and doing some attestation against your actual accounting numbers or your billing cycles, that's pretty simple to do. We can pull those in and do some reconciliation there.
speaker I did mention earlier that there's another agent in here as well that's pretty interesting, and this gets more into the analytics, the analysis. Doing that conversion is great, but you still need to analyze the result.
speaker If we go down to that next agent, this is my strategy analyst. That strategy analyst is really there to help me reason across the data that was created and some data that I own —
speaker my private data. If I click into this one and take a look, we'll see the same layout: we have our prompt and instructions, and we have tools and knowledge.
speaker In this case, I have a more detailed prompt where I give it a little more context on the data: I have some data I'm working with here, and I want it to go through and help me find things that can help me make decisions about changes to my product lines.
speaker On the tool side, I have one tool, and this tool really is for storing off the insights. Domo has a tool called AppDB that's used for storing transactional data.
speaker Behind the scenes, it's a Mongo database that we use for that.
speaker The knowledge here is what's a little more interesting. We have that unstructured and structured data — this is what we created with the process in that previous agent.
speaker We also have this barbecue grill sales data. This is private data — what we as a company have sold in this space, what we are doing.
speaker And if we run through this — I'm not going to run it here directly because we've already run it once — I can show you the output of how it operates and how it thinks.
speaker Back on that main dashboard, we get something like this on the right-hand side, where the AI uses some query tools and my prompts to reason through that scenario and find information that can help me make better decisions.
speaker For example, one of the ones I love the most in here is down under pricing. We have a series of segments that we operate within and grills that we offer.
speaker And it's showing us what competitors are charging and what we're charging. As you can see, we're way premium — probably overly premium — in our pricing, and it's flagging that and saying, hey, this is a risk and we recommend you make a change here.
speaker To really hit that entry level correctly, you're not dramatically off, but we recommend a 10 to 15% reduction to be a little more competitive in that space.
speaker For the mid-tier, it's saying, hey, you're far enough out that you probably need a new model to play in this area. And when it comes to the premium side,
speaker it's saying, hey, if you want to play there, you can still play there, but you really need to find some way to justify being that premium, or you have to move down.
speaker So these things are actually really interesting: it's showing me, this is data coming from our unstructured-to-structured process, and this is our own data.
speaker It's finding the commonality between those — and it's not a tight mapping; it's a looser key-value mapping. It's reasoning through that and saying, hey, you're a little off here, you should probably make a change. Back to where we started this conversation — how do you become more efficient? This is an efficiency play, as you can see here.
speaker The process of getting there — going from unstructured data to structured — can be very quick. It can be as simple as, hey, I'm going to grab an image, throw it through this process, and have it in a dataset I can use for analysis.
speaker Once you have that data, the process of analyzing it — finding those needles in the haystack, those nuggets of information — can take a lot of time.
speaker Historically — and I've worked heavily in my career in the data engineering and BI space — it can take weeks to go through and find those things, sometimes even longer.
speaker The fact that AI can at least give us some direction out of its analysis is fascinating. I can do it in minutes — sometimes seconds. Very quickly we can get back insights that help us decide what to do next and how to improve.
speaker Over the course of this session, we walked through a number of things — we covered a lot of ground. Having four speakers over 45 minutes is a lot of information, and we started from the foundation all the way through the deep technology that can be used here, as well as an example.
speaker But before we walk away, we wanted to give you some next steps on what can happen from here.
speaker We wanted to revisit what we talked about at a high level. When it comes to AI, we encourage you to view it as a transformational opportunity and to start on efficiency — asking how you can use AI to become more efficient and drive immediate value to the company.
speaker That's possible because of agentic AI. The market has changed; there's something very new here. There's a promise of something amazing in AI, and the technology is starting to prove out.
speaker We're starting to see cool things happen — cool use cases, cool opportunities.
speaker This space is moving very quickly, week to week, day to day. It feels like there are new things constantly happening.
speaker Amazon has amazing services in this space. Domo is a customer and a partner of Amazon — we love working with Amazon, and they do amazing things.
speaker Their Nova models and Bedrock are incredibly fascinating in what they can do, and we definitely encourage you to look into those. Let's talk about what's next.
speaker We encourage you to take a look at Nova. Nova is a great service — it's easy to keep private and locked down, it scales well, and it's priced well. Very cool things are happening there.
speaker We also encourage you to find more of these tangible examples to work with — to look at AWS's AI solution listings, what they have out there, and also to check out our templates.
speaker One thing Domo has historically been very good at is connecting technology to business, and we've gone through and captured a number of business stories that you can use to inspire and drive what you're doing.
speaker Thank you so much for your time today. We welcome any questions you have — we'd love to walk through them.
speaker If there's anything you want to go deeper into, please share it. I believe there's an experience you can use as part of the hosting here.
speaker Please submit your questions, and we'll go through those right now.


See how Domo and AWS make Agentic AI both practical and powerful by transforming unstructured data into structured, actionable intelligence. This session demonstrates how to automate manual data processes—like collecting and analyzing information from images and webpages—using Amazon Bedrock and Domo’s platform. Through a real-world retail example, you’ll see how organizations can move from raw, unstructured inputs to market-ready insights in minutes, not hours.


Cody Irwin is the AI Adoption Director at Domo, where he partners with organizations to accelerate AI-driven transformation and deliver measurable business impact. He brings a unique blend of technical expertise, product leadership, and business strategy from roles at Google, Domo, GUIDEcx, PwC, and Backcountry.com. Throughout his career, Cody has helped companies modernize by applying data and insights to core business processes. Today, he leverages that experience to help leaders confidently embrace generative and agentic AI, unlocking new efficiencies, growth opportunities, and competitive advantage.


Tamara Astakhova has over 20 years of IT experience in systems architecture and development of large-scale IT systems with an emphasis in the past six years on AWS cloud strategies and implementation of analytics solutions.
She has also served as a Technical Lead for the AWS Data & Analytics competency and the AWS Redshift Service Ready program for the AWS Partner Network.


Aman Tiwari is a General Solutions Architect working with independent software vendors in the data and generative AI vertical at AWS. He helps them design innovative, resilient, and cost-effective solutions using AWS services. He holds a master’s degree in Telecommunications Networks from Northeastern University. Outside of work, he enjoys playing lawn tennis and reading books.


Nick Simha is a seasoned tech leader with over 20 years of experience, currently serving as Head of Solutions Architecture at AWS, where he leads Data, Analytics, GenAI, and Emerging categories for software companies across North America with multi-billion-dollar revenue targets. Nick has pioneered multiple successful initiatives at AWS, including a comprehensive GenAI strategy and the AWS for Software Companies program, building upon his extensive background in cloud computing and IoT platforms at companies like Siemens, GE Digital, ServiceNow, and Salesforce.




