Implementing AI for fun and profit

AI seems to be everywhere! In this fun and highly practical talk, we're going to take a look at how we, as developers, can leverage the power of AI.

First, we'll take a look at how we can use AI in our development workflow. Next, we'll dive into the code of one of our products to learn how we added AI to it.

Freek Van der Herten
Spatie

Talk transcription

So, before I start, a little bit of storytime. I was watching this conference from the comfort of my home office. Literally two minutes ago, my neighbors decided to start some construction work right next to my office, so I had to flee. I'm now sitting in my bedroom, hoping that everything goes well here, especially the internet connection. Now, let's talk a bit about AI in this talk. But first, a little about me. I'm Freek, and I work at a Belgian company called Spatie. I have a blog at freek.dev where I discuss modern PHP development and Laravel, which is my primary focus. You can also find me on Twitter with the handle @freek.dev. If Twitter isn't your thing, I'm also on Mastodon – that's the second link.

I've had the privilege of working on a couple of exciting projects with friends and colleagues. The first is Flare, an exception tracker specifically built for PHP and Laravel apps. The second is Oh Dear, an uptime tracker that scans your entire website and notifies you when any page goes down. The last project I worked on is Mailcoach, which I dare say is a better version of MailChimp. Not only do I work on products, but I also contribute significantly to open-source software. At Spatie, we've built around 300 packages, mainly for the Laravel ecosystem, but there are also framework-agnostic ones. These packages have been downloaded 607 million times and are currently being downloaded about 30 million times a month. This is quite remarkable, considering Spatie is a small company with only 10 people. If you're interested, you can find a comprehensive list of everything we've done in the open-source space on our company page. I'm confident you'll find something valuable for your next project.

While open source is fantastic, it doesn't pay the bills entirely. We also develop and offer paid products such as courses and software tools. If you want to support our open-source efforts, take a look at our paid products as well. Our open-source projects come with a special license called Postcardware: if you use one of our packages in your production environment, you are required by law to send us a postcard. We're delighted to receive postcards from all over the world daily, and we share them on our company website. Check it out. Now, with that out of the way, let's talk a bit about AI. AI seems to be everywhere this year. It really exploded last year, around November-December, with ChatGPT, and it hasn't gone away.

I'm not a big AI expert, so I won't delve into everything that's possible. Instead, I'm here to share my personal journey and the things I've picked up along the way. I started using AI just for fun, to explore its possibilities. Later on, I used it to add features to our open-source projects and products. I want to share both sides of this story. Let's go through my personal journey chronologically. The first time I encountered AI was something I'd been using for a long time – Siri by Apple. To me, it was a bit of a disappointment because it wasn't as smart as I expected. This marked a false start for me in the AI journey.

Where AI really started for me was when I stumbled upon something called DALL·E, a product of OpenAI. I believe many of you are already familiar with DALL·E. You can give it a textual prompt, and it will generate an image for you. For those who don't know it yet, let's see a quick example. So, what are we going to get an image for? Some happy developers, maybe pairing together on some code. And let's do it photorealistically – I hope it understands that; there's probably a typo in there. It only takes a couple of seconds to get some good images. I hope my internet connection holds up here too.

Yeah, we see here that we get a couple of images. They look fine at this resolution, but if I zoom in a little bit, you can see some strange things happening sometimes. But yeah, it's a good starting point. So, how did I use this for myself? First, I should say that DALL·E doesn't only generate new images. It also has a feature called outpainting, which means that it can continue an existing image. Here we have a famous artwork, and the creators of DALL·E fed it to the AI. It produced this, which I think is pretty astonishing since it perfectly matches the style of the original painting. I was really surprised by this. So I wanted to use this myself.

I play in a music band, and we were recording our first album together. Then we needed some cover art. We thought, let's try outpainting. We took a photograph of ourselves in our rehearsal space, intentionally not showing all of our faces. These are my bandmates. This is me. So that the AI would have to complete our faces. Let's see the result. It's a little bit strange, but it's actually quite good. What amazed me is that it tried to continue some details in our original photo, like the light cord and the room. We were pretty pleased by this, and it ended up being the cover for our album. I think it's pretty nice.

Okay, next up on my AI journey was GitHub Copilot, which I still use very extensively to this very day. Copilot is created by GitHub, and it uses an OpenAI model called Codex, purpose-trained for auto-completing and understanding code. GitHub provides plugins for all major IDEs, such as PhpStorm. Copilot is a paid product but free for open-source contributors. The exact criteria aren't clear, but if you've done enough PRs or opened enough issues on open-source repos, it will say it's free for you. Let's take a look at a simple example of Copilot.

I'll use my OpenAI talk application, a simple Laravel application. In this part of the talk, we'll focus on this simple command. Here we have a collection. We're going to turn it into a string and then output it. If you don't know how to make things uppercase in such a collection, Copilot suggests it. How I normally do this is with a simple comment, something like: 'Uppercase each name.' And then it gives the right thing. If I want to sort the names, it suggests how to do that. It can also comment on existing code, providing a description of what's happening: 'Join the names using commas and the word 'and' at the end.' This is a simple example, but it works very well on real-life projects because it seems to understand the context of your application. It can also do more advanced things: here's the result of the code it produces, and it even knows fake data, like the Rolling Stones. Let's see. It just does the same thing with the names of the Rolling Stones. I think that's pretty amazing. Let's do another quick example here.
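To give a feel for the kind of completions Copilot produces in this demo, here is a rough sketch of the command's logic in plain Laravel collection code. The names are made up, and the comments are the sort of hints you would type for Copilot to complete:

```php
<?php

// A hypothetical collection of names, similar to the demo command.
$names = collect(['Mick', 'Keith', 'Charlie', 'Ronnie']);

// Uppercase each name – the kind of line Copilot suggests from a short comment.
$uppercased = $names->map(fn (string $name) => strtoupper($name));

// Sort the names alphabetically.
$sorted = $uppercased->sort()->values();

// Join the names using commas and the word 'and' at the end.
echo $sorted->join(', ', ' and ');
// CHARLIE, KEITH, MICK and RONNIE
```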

This is a user model in Laravel. Let's add a method to give the full name. It already suggests that. Let me just type it: getFullNameAttribute. That's that. Let's get all the blog posts and see if it recognizes that. Yeah, and then it should import this. You can see that you can generate relatively good code quite fast, especially in a language like PHP. I don't necessarily need it that often in PHP, but when working in a language like JavaScript or Rust, something I don't know too well, Copilot is a really good tool to have. So, yeah, I'm quite pleased with Copilot. Let's go back to the presentation.
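A minimal sketch of the model code that gets completed in this demo; the column names, the Post model, and the relationship name are assumptions for illustration:

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Relations\HasMany;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    // The accessor Copilot suggests after typing "getFullName...":
    // it exposes a $user->full_name attribute.
    public function getFullNameAttribute(): string
    {
        return "{$this->first_name} {$this->last_name}";
    }

    // A hypothetical relationship to blog posts, the kind of method
    // Copilot completes from its name alone.
    public function posts(): HasMany
    {
        return $this->hasMany(Post::class);
    }
}
```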

So, the next thing on the list is ChatGPT. I really don't need to introduce ChatGPT; I'm pretty sure you all know it at this point. It's really amazing that you can just ask it about anything, and the answers it gives are relatively good. One tip for people using macOS and Raycast: Raycast has built-in AI now. If you type something here, say 'say hi to the people at the conference,' and you ask AI, tapping here immediately gives you a result from ChatGPT, which is much more convenient than using the website. Okay, you all know ChatGPT. I thought it was mind-blowing, but the real next step was that they – not open-sourced it, but – released an API for it. So you can use it in all kinds of tools and applications, and that's where the fun started to morph a little bit into work: how can I write better code, how can I build better tools, how can I make better services with AI?

Yeah, the OpenAI API is relatively easy to use, and it's also fairly cheap: it only costs a couple of dollars for thousands of requests. This API is used by most of the tools that added AI features to their feature set. It's really easy to get started, also in PHP – I'm going to show you an example of that later. But let me first show you a tool that uses the OpenAI API, and that is AI Commits. AI Commits is a command-line tool that will generate a commit message for your commit. Let's try it out. I have it installed here. Let's see. These user things are gone. Let's make it a little bit easier: I'm going to remove this one again and maybe just add 'sort' here. So we have a simple example, and let's commit this. But I should first show you what 'commit' on my computer does. 'Commit' is a bash function that I've written for myself. Let's take a look at how it is defined. So it's a function, and you can give it an argument, like 'hello there' – oh, I hear a bell here. I don't know why, but I hope it's okay. Okay, thank you.

So you can give it a commit message, and if a commit message is given, it will just use that. But if there isn't a commit message, it will run the AI Commits tool. So let's not give it a commit message. It runs the tool, and we get: 'Refactor the demo command to sort names before imploding.' And that is indeed what happened here, so I can just use this one. I've committed using this message, and I didn't need to think about it. Now, for smallish things this is good, but a good commit message shouldn't only say what has happened, it should also say why. This tool basically doesn't know why things are happening; it tries to describe it a little bit. But it's certainly better than what I sometimes do, like 'commit wip,' which doesn't say anything. So this is better.

So let's take a look at how this tool actually works behind the scenes, because it seems quite magical, but it's actually very, very simple to do stuff like this. I have the source code of that tool open here, and you can see it's written in TypeScript. If I search this source code for 'generate commit message,' I can see that there's a function here that calls 'create chat completion.' It receives an API key, so this is the information that will go to OpenAI to generate content. What do we send to it? We send it a diff, which is the output of git diff. I should have shown that; most of you probably know this. If you type git diff, it shows you the files that have changed and the lines that have changed. All of that will be sent to OpenAI. It will also send something called a prompt, and the prompt is basically the text that you type, which will be sent to the AI. Let's take a look at the prompt that is being used in this tool.

So the prompt is: 'Generate a concise git commit message written in the present tense for the following code, with the given specifications below.' That is basically all that is done: an API call is made with this piece of text plus the git diff, and OpenAI responds. So on a technical level, this is just an API call to a fancy service, but it is just an API call. It's really nothing fancy from the standpoint of a developer. Okay, let's move along a little bit.
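In PHP, that API call looks roughly like this. A minimal sketch using the openai-php/client package, where the model name and the exact prompt wording are assumptions, not what AI Commits itself sends:

```php
<?php

// composer require openai-php/client
require __DIR__ . '/vendor/autoload.php';

// The staged diff, the same kind of input the AI Commits tool sends along.
$diff = shell_exec('git diff --cached');

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [[
        'role' => 'user',
        'content' => "Generate a concise git commit message written in the present tense "
            . "for the following diff:\n\n{$diff}",
    ]],
]);

// The generated commit message.
echo $response->choices[0]->message->content . PHP_EOL;
```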

With the knowledge that building AI into some kind of application is just an API call, I thought, wouldn't it be fun to add it to one of our own projects? Something that I've built with my colleagues is a thing called Ignition, and it's basically the best error page for PHP. It has an interesting feature. Most error pages will just show you the error that occurred in your application, but since Ignition was released, we've also had a feature where we propose a solution for the error that you're seeing. If you're using a method on a class that doesn't exist, we would, using reflection, get all of the methods of that class, try to detect whether there's a method with a similar name, and then suggest that maybe you meant to use that one.

I thought, wouldn't it be nice to use AI to generate solutions, because we can only handcraft our solutions to a certain point, but using AI we can suggest solutions for all kinds of problems. Now, I forgot to mention that Ignition isn't a small pet project. It has been the default error page in all Laravel projects for a couple of years now, so all Laravel developers can enjoy this. I should also say that Ignition isn't Laravel-specific; it's framework-agnostic and could be built into Symfony as well. If I have some time, I will show you all of that too. Okay, and I was also going to talk about Flare; maybe if I have some time, we'll see. Let's first do the demo of Ignition.

So let's go to my Laravel application again. I need to close this one; this is the application that I want. Let's open it up in the browser – this is the default Laravel page for a new application. Let's try to get an error in here. So let's maybe do this; this won't work, right? So yeah, this is what Ignition looks like, the simple version. Here's the error; we show some source code and everything we know about the request. That's it. We don't have a solution for this one; it's a syntax error, and that could really be anything. Now, how would you enable the AI features that we've built into Ignition? Well, you simply have to provide your own API key, and then Ignition will use that to send everything it knows about your error to the AI. So let's try and do that. Let's refresh again. Normally, it goes quite fast, but I'm on a slower internet connection.
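As an aside: in a Laravel app, providing that key is just a config value. A sketch of what that could look like, assuming the key names used by spatie/laravel-ignition (verify against your own published config/ignition.php):

```php
<?php

// config/ignition.php (fragment) – the key name here is an assumption,
// check the published spatie/laravel-ignition config file for the real one.
return [
    'open_ai_key' => env('IGNITION_OPEN_AI_KEY'),
];
```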

You can see that it now also displays a solution, and it's actually quite good. It says, 'Yeah, the word function is misspelled here.' As a bonus, you also get some links to documentation about routing in Laravel, because you're in a file that is all about routing. Okay, let's take a look at how this works behind the scenes. Let's go to the source code of Ignition itself. So let's open it up. Everything related to solutions lives here, and here we have all the code surrounding OpenAI. Now, the way Ignition builds solutions is through something called solution providers.

Solution providers are classes that you give a throwable (an exception), and they will generate solutions for you. A provider first decides whether it's going to be able to suggest a solution for this kind of error. We have a couple of other solution providers here; you can see that in the 'canSolve' method there are some conditionals to determine whether we can fix this error. With the OpenAI solution provider, we simply say we can solve anything. If this solution provider is selected, it's going to return the OpenAI solution, which gets the throwable, the OpenAI API key, and some extra stuff, with the most important thing being caching. I'll get back to that.
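For comparison, here is a hedged sketch of what a hand-crafted solution provider looks like. The contract and class names follow spatie/ignition, but treat the exact signatures as assumptions:

```php
<?php

use Spatie\Ignition\Contracts\BaseSolution;
use Spatie\Ignition\Contracts\HasSolutionsForThrowable;

// A hand-crafted provider: it only claims errors it knows how to explain,
// unlike the OpenAI provider, which says it can solve anything.
class UndefinedFunctionSolutionProvider implements HasSolutionsForThrowable
{
    // Decide whether this provider can suggest something for the given exception.
    public function canSolve(Throwable $throwable): bool
    {
        return str_contains($throwable->getMessage(), 'Call to undefined function');
    }

    // Return one or more solutions for Ignition to render on the error page.
    public function getSolutions(Throwable $throwable): array
    {
        return [
            BaseSolution::create('You might have a typo in the function name')
                ->setSolutionDescription('Check the spelling of the function you are calling.'),
        ];
    }
}
```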

Let's open up the OpenAI solution and see how it works. The first thing it will do is generate a prompt, which is like the prompt you saw in the AI Commits tool. Let's take a look at the 'generatePrompt' method. This will return a string. Instead of concatenating a very long string, we decided to use a view model. A view model is just a class that provides a structured way of passing information to a view. The view that will be rendered is called 'AiPrompt,' so let's find 'AiPrompt.' Here you can see that the prompt is used to get the view, with the view model being passed in. The view model has this type, so we now have autocompletion on everything there is on the view model.
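As a generic illustration of the view model pattern described here (not Ignition's actual classes), the idea is that everything the prompt template needs is exposed as typed methods on one object instead of being concatenated into one long string:

```php
<?php

// Hypothetical view model: the prompt template receives this object and
// gets autocompletion on everything it exposes.
class AiPromptViewModel
{
    public function __construct(private Throwable $throwable)
    {
    }

    public function exceptionMessage(): string
    {
        return $this->throwable->getMessage();
    }

    public function file(): string
    {
        return $this->throwable->getFile();
    }

    public function line(): int
    {
        return $this->throwable->getLine();
    }
}
```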

So what is our prompt? Our prompt starts with 'You are a very skilled PHP programmer.' Why is that? If you don't add stuff like this, if you don't give the AI some motivation, it will tell you, 'I don't know PHP; I can't help you; ask a professional programmer.' But if you say, 'AI, you are a good programmer,' then it will actually start generating solutions. So here we say: 'You are working on a Laravel application. Use the following context to find a possible fix. Use this format.' We ask it to put the fix between 'fix' and 'end fix,' so that we can parse it out later. We also ask it to include a few links to documentation, and we say that the documentation links should be JSON, so we can parse those out as well. Then comes all the context: the exception message, the file and line where the error occurred, and a code snippet.

That's basically the exception context. So that is what we are sending. In my AI talk app, I have an example with the prompt already filled in. Here it is: this is the prompt that we're sending to the AI, with some things from an actual error already filled in – the stuff that we're actually sending. Let me send that to the AI. Let's open up the AI tool and just give it our prompt. You can see that it answers with the fix between 'fix' and 'end fix,' and that the links are in JSON. So it really does what we ask it to do.

Back to here. So in that OpenAI solution, we were at 'generatePrompt.' Now we have that prompt, that piece of text, and we're going to use it to get an AI solution. Here, we first check if we already have a solution for this exact exception; in that case we just get it from the cache, so we don't need to make another trip to the AI. If there is a cached solution, we return it. Otherwise, we use the OpenAI client with the API key to send this prompt to the API. Then we cache the solution, so if we ever get the same exception again, we just use the cache. And here we have a small class, the OpenAI solution response, that wraps what OpenAI responded with. Here we can get the text between 'fix' and 'end fix,' and here we get the links, which we JSON decode. So yeah, that's a small class to easily work with the response.
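Put together, the flow described here looks roughly like this. This is a sketch under assumptions: the cache key, function name, and prompt are made up, and openai-php/client stands in for whatever client Ignition actually uses.

```php
<?php

use Psr\SimpleCache\CacheInterface;

// Sketch of the cache-then-ask flow: the same exception should only cost
// one trip to OpenAI; after that, the cached solution is reused.
function getAiSolution(Throwable $throwable, string $apiKey, CacheInterface $cache): string
{
    // The same exception (message + location) always maps to the same key.
    $cacheKey = 'ai-solution-' . md5(
        $throwable->getMessage() . $throwable->getFile() . $throwable->getLine()
    );

    if ($cache->has($cacheKey)) {
        return $cache->get($cacheKey);
    }

    // Hypothetical prompt; the real one is the rendered 'AiPrompt' view shown above.
    $prompt = "You are a very skilled PHP programmer. Suggest a fix for this exception:\n"
        . $throwable->getMessage();

    $response = OpenAI::client($apiKey)->chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [['role' => 'user', 'content' => $prompt]],
    ]);

    $solution = $response->choices[0]->message->content;

    $cache->set($cacheKey, $solution);

    return $solution;
}
```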

Let's go back to the OpenAI solution provider. Let me open this up again – I'm working on a much smaller screen than I'm used to. Yeah, it's this one. So now you know what's in the OpenAI solution that will be returned. I'm not going through the rest of Ignition, but you know now that we have a structured answer, and we use that to add the solution to the Ignition error page. That is basically how it works. Now let me also show you that Ignition really is framework-agnostic. I prepared a little demo for this; I think it's called Ignition Up. Let's open it up in PhpStorm. This is basically the simplest PHP application that you can make. There's really nothing here; it's just the vendor autoload, register Ignition, and then let's cause an error here. Let's open this up in the browser. It should be HTTP.

You can see that it renders the error beautifully, and this is without OpenAI. It just detects that you probably have a typo here and tries to give you a solution. But all the nice things about Ignition are here. If you want to do this in, for instance, a Symfony application and use the AI features that I've just shown you, you can just add a solution provider: add the new OpenAI solution provider, and here you can pass your OpenAI API key, the cache that you want to use, and the application type that you want to use. If you want to integrate this into another kind of PHP app, a non-Laravel application, it's basically very simple to do. I just want to make it very clear that none of the things I've shown you are Laravel-specific, even though Ignition is the default error page in Laravel.
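A minimal sketch of that framework-agnostic setup; the namespace and constructor arguments of the OpenAI solution provider are assumptions, so check the spatie/ignition README for the exact details:

```php
<?php

use Spatie\Ignition\Ignition;
use Spatie\Ignition\Solutions\OpenAi\OpenAiSolutionProvider;

require __DIR__ . '/vendor/autoload.php';

// Register Ignition as the error handler and add the AI solution provider.
Ignition::make()
    ->addSolutionProviders([
        new OpenAiSolutionProvider(getenv('OPENAI_API_KEY')),
    ])
    ->register();

// Calling an undefined function now renders the Ignition error page,
// including an AI-generated solution.
thisFunctionDoesNotExist();
```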

Okay, let's go back to the keynote. We've only scratched the surface of what is possible with AI here, but I do think that, as a developer, using the OpenAI API is probably one of the easiest ways to get started with AI. I also want to give you a little bit of advice. We've seen an explosion of small tools that have AI as their main feature, and I do think that is a little bit risky, because somebody else can build such a thing very fast and, with some marketing effort, just eclipse what you've done. I think it's much safer to build a product that really stands on its own, and then think, 'How can I make this better using AI?' That way AI isn't your core feature: your product is your main feature, and it's just made a little bit better with AI. I think you're much safer if you do it like that.

So that is everything that I have for you. The first link here is Flare, which I didn't have the time to show you. Flare is basically Ignition, but in production: it's an exception tracker, and it also has AI features built in. If you want to know more, I've written an entire blog post about how this works in a bit more detail. And if you want to hear my music, with that AI-generated cover, just look for Topologies on Spotify or Apple Music. That's all I have for you. I hope you enjoyed it and that you enjoy the rest of the conference.
