Conferences, Prism image support and server

Chris Gmyr (00:00)
Hey, welcome back to the slightly caffeinated podcast. I'm Chris Gmyr

TJ Miller (00:03)
I'm TJ Miller.

Chris Gmyr (00:04)
Hey TJ, so what is up in your world?

TJ Miller (00:08)
Man, mostly just Prism and work and fall, and that's about it. Not a whole lot. It's been a nice boring week. Definitely enjoying the colors changing. Fall in Michigan is so incredible, just the colors changing on the trees,

the very satisfying crunching of leaves going out for walks. I've been a little stagnant the last couple of months, but the last couple of weeks I've been back to getting in a nice daily walk around the neighborhood. And my son's been pulling me out for walks and we've been playing football in the yard. It's been a really nice week of just enjoying the fall weather as much as possible.

jamming away on Prism, making good progress there, keeping that thing moving forward. So feeling good about all that, man. What's new in your world?

Chris Gmyr (01:06)
Yeah, that's awesome. What's new in my world just feels like lots of craziness. Got a bunch of days off for travel, going to a conference next week. Family is traveling, so I'm taking a few days off coming up to handle some family things and stuff like that. So I'm trying to juggle the days on at work, days off at work, and all the things that need to be done in between,

and trying to get something accomplished day to day. So yeah, lots of different challenges with that, but all good stuff. We're recording Thursday, and tomorrow, Friday, I'm taking the day off to go camping with the scouts and my son. So that'll be an exciting weekend and hopefully not too chilly. It is dropping down a little

cold overnight, 40 to 45, which isn't too bad. But when everyone is just in a tent and hanging out, it tends to get a little chilly. And there's some water nearby too, so that tends to drop it down a little bit more. It should warm up during the day. And it's always a good time. The kids have a good time with it, getting to eat s'mores and stuff like that at night around the campfire, doing some hiking and some activities.

Come back on Sunday and yeah, back to regular life after that.

TJ Miller (02:28)
Yeah, man, it's been getting cold. This morning I think I woke up to like 35 and had to scrape my windshield off. It's definitely dropping here, man. But I'm definitely a bit further north.

Chris Gmyr (02:36)
Oof.

Yeah, yeah, for sure. And we haven't had too many color changes here. It's kind of boring in central North Carolina. It basically goes from green to dead pretty quickly. And it's not until you get up in the mountains a few hours away that you really get the pop of colors, on the Blue Ridge Parkway and areas like that.

TJ Miller (02:55)
Yeah

Chris Gmyr (03:06)
So it's kind of all or nothing here. You either have leaves or you don't, and everything comes crashing down.

TJ Miller (03:12)
Man, that's a bummer. Yeah, the weather here has been kind of all over the place. We had a week of doing what it's doing now, where it's dipping into the thirties with highs in the mid sixties. But then we had a handful of days that were back in the seventies and eighties. And with all of that back and forth, I think that's where you get some really, really loud color changes. That gradual up and down really kind of extends things out.

Whereas it sounds like you get the run up to fall and then all of a sudden it just drops off. So I think that's where everything just dies.

Chris Gmyr (03:49)
Yeah, yeah, totally. And it's very different from upstate New York, where we grew up and what we're used to. That's very similar to Michigan-type weather. It was always super nice in the fall, driving around, even just locally, or going up to the Adirondacks, which we were very close to. We'd go camping up there and just drive around and do some hikes and explore that area. And it was always beautiful, always great colors.

TJ Miller (04:17)
yeah, that sounds awesome, man. So you said you had a conference you're going to. What conference is that?

Chris Gmyr (04:22)
Yep, it's called All Things Open. It's in downtown Raleigh for two or three days, depending on when you want it to start. It technically starts on Sunday, but that part is kind of optional, with some different tracks and working groups and things like that. Then there are full days Monday and Tuesday, all different tracks. I think there are like 12 to 15 different tracks. So there's an AI track, there's

a front-end track, back-end track, mobile, DevOps, cloud, all the things. And you can go into basically any of these different tracks, into the different rooms of various sizes, and just go see whatever seems interesting to you. And it's a big conference. There are probably about 4,300 to 4,500 people attending this year.

And it's just a massive, cool conference, and it's right down the road. So it'll be a lot of fun.

TJ Miller (05:24)
Yeah, that's pretty cool, man. So how do you feel about multi-track conferences? For a while, most of my conference experience has been Laracons, which are single track. I think I've only ever been to a few multi-track conferences. WavePHP was multi-track. And then...

php[tek], which I went to and spoke at last year, is also multi-track. I mean, both put on by the same folks. So I've got more experience with single-track stuff. I like multi-track and the variety that brings. And as a speaker, I think it also takes a little pressure off you, which is nice. But I always feel like at every multi-track conference there's guaranteed to be a clash, where I really want to see

these two talks at the exact same time, which sucks. How do you feel about multi-track?

Chris Gmyr (06:17)
Yeah. Yeah.

I feel similarly. So this conference is really big. Like I said, it has numerous tracks, and they have so many options that they actually put out a mobile app that's a schedule builder. So you can basically either pick an entire track, if you only wanted to go for the AI stuff, and just dump that in there, or you can pick and choose from all the different tracks and basically fill up your calendar.

Then you can see what's overlapping or not. And sometimes that's good, because some of the rooms are actually kind of small. They only fit maybe 50 people or something like that. So if you get to the room late and it's full and you don't want to just stand in the back or in the doorway, it's good to have a backup, to then go to your second choice or even third choice. But you always inevitably

TJ Miller (06:57)
Mmm.

Chris Gmyr (07:12)
miss out on something that you wanted to see. So I definitely feel that. Whereas with the Laracons being single track, like you said, you're just there for the entire time and you get to see everything. If you don't want to see something, you can always step out into the back. But I really like just seeing everything, because it forces me into some of those talks that maybe I wouldn't have chosen for myself,

TJ Miller (07:14)
Yup.

Chris Gmyr (07:38)
maybe some of the softer talks, or, I don't know, maybe a design talk or something like that. I always get value from them, but they might not be my first choice if I wanted more technical things. So the Laracons kind of force you into seeing everything and getting more diversity in those talks, which I think is great. And at a multi-track, you don't really

I don't know, have that as much, because you're forced to pick things you want, or maybe diversify on your own, but that puts the onus back on you, which I think is fine. Definitely different vibes between the two, though. And it's also harder at a multi-track conference if you're going with people that you know, or a group,

and everyone is doing something different. That's kind of a pro and a con too, because you can share what you learned from other tracks, but it's also hard to talk about the same thing. You know, at Laracon, you're just meeting up afterwards and you're like, let's talk about this talk, or this presenter, or this cool thing that came out of it. Whereas if you're at a multi-track, not everyone has that same context or

TJ Miller (08:47)
Mm-hmm.

Chris Gmyr (09:03)
is excited about that thing that came out of the talk. So it's just different vibes across the board.

TJ Miller (09:08)
Yeah.

Yeah, having that shared experience to be able to talk about with the people you go with is definitely, I don't know, I think that's a big part of going to a conference too. But I think it's interesting that this conference has not only multiple tracks, but they've got themes spread throughout the tracks too. I think that's something that would be cool to see at php[tek], where they've got

multiple tracks that are just kind of this shotgun splatter. I mean, I think they've been thoughtful about what they put during the same time slots and everything, so there's a little bit of being able to pick a theme and run with it. But I think it's really cool that there are these broader themes spread throughout the tracks, like AI or cloud ops and all that kind of stuff. I think that's a really neat way of going about it. That's cool, man. I hope you have a really good time. I can't wait to hear about it.

Chris Gmyr (10:04)
Yeah, we'll report back next time. Yeah. Yep.

TJ Miller (10:07)
Yeah. Yeah. I expect a book report, double spaced. Yeah, that's cool, man. That's cool. So, I don't know, we can dive into a few topics. I'm down to talk a little bit about the latest in Prism. The latest isn't even merged in yet, just, yeah, super fresh. Actually, I'm pretty decided on the implementation.

Chris Gmyr (10:25)
super fresh.

TJ Miller (10:31)
You know, what I've been working on this week... really, I've been trying to pick one or two things to run with for a week, and then maybe if I can get that done during the week, pick something to knock out over the weekend too. Just trying to keep things moving along, but also trying to keep focused. So this week was working on image support. So, less

sending a request to DALL·E and getting an image back, that's still something I'm going to have to tackle, and more so being able to send an image to the AI and do things like, hey, tell me what's in this image. Or you could send a screenshot of code and be like, hey, what code is in here? Tell me what's going on. Explain this code. Or sometimes I'll send a screenshot and be like, hey, can you convert this to actual text for me, so then I can do something with it.

Aaron Francis had a tweet talking about doing some AI stuff, and I quoted that and re-implemented what he had built in Prism. Partially I just wanted to see what it would look like implementing that feature with Prism, but also I thought it was something cool to show. And he had posted screenshots of his code.

And what I had done was, I sent it to an AI, I think I sent it to Claude, and I was like, hey, convert this to text so I can work with it. And it worked out really well. So I've been working on bringing in image support, for sending images. Yeah, it was actually pretty challenging trying to come up with the abstractions. And I've been trying to work in the open a fair bit more with it, and...

Chris Gmyr (12:04)
Nice.

TJ Miller (12:17)
try to share early stuff. So: here's the initial implementation. I put that out on socials and just tried to gather some feedback of, hey, how does this feel? For anybody needing to do image support, does this look like it would suit your needs? There was some pretty valuable feedback that came out of that, and I went back and iterated a little more.

I think I've got a solid set of abstractions now. And the cool thing is that it also kind of paves the way for file support in the future. So it's moving along; the implementation feels clean, and it's working pretty well so far. I've got it implemented for OpenAI in the pull request. So now I've got to go around and implement it for the rest of the providers, and then I think we're good to go.

The tricky bit is getting the abstraction together and then implementing it across different models to make sure that the abstraction holds up against OpenAI versus Anthropic. Is this abstraction going to work for both providers, considering that they have very different APIs and handle things in different ways?
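For listeners following along, the shape TJ is describing looks roughly like this in Prism's fluent API: the same call mapped onto two very different vendor APIs. This is a sketch based on the pre-1.0 package as discussed in the episode; the namespace, provider identifiers, and method names are from memory and may differ from the current release.

```php
<?php

use EchoLabs\Prism\Prism;

// The same fluent request, pointed at two very different provider APIs.
// Each provider class is responsible for mapping this shared abstraction
// onto its vendor's request/response shape.
$openai = Prism::text()
    ->using('openai', 'gpt-4o')
    ->withPrompt('Explain this code to me.')
    ->generate();

$anthropic = Prism::text()
    ->using('anthropic', 'claude-3-5-sonnet-latest')
    ->withPrompt('Explain this code to me.')
    ->generate();

// Either way, callers read a normalized response object.
echo $openai->text;
echo $anthropic->text;
```

The point of the abstraction test TJ mentions is exactly this: the calling code above should not change when the provider does.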

Chris Gmyr (13:30)
Yeah, have you seen a lot of divergence between the different models, like in what you're having to set up for the drivers in Prism? Have you had to do a lot of work and heavy lifting to get those to output a standardized response, or API interface contract, anything in the package? How much work has that been so far?

TJ Miller (13:57)
It's not been too bad. That's mostly because, right now, Prism supports OpenAI, Anthropic, Mistral, and Ollama. And the thing about that mix of providers is that everybody but Anthropic uses the same API. They all use the same API spec. Mistral and Ollama

and OpenAI all use the same OpenAI-compatible set of endpoints. So implementing those has been really pretty easy, because you can kind of just copy the OpenAI provider and then layer in any additional things that they have. But core functionality is sort of copy and paste from the OpenAI provider. Anthropic is totally different. They handle things in a very different way,

and so there are definitely some challenges there. But the nice thing is I know Anthropic's API really pretty well, so the challenging bits haven't been too challenging. However, I do have an open pull request from some contributors adding Gemini support. And that, again, is very different from either Anthropic's or the OpenAI spec. So for that one, I...

I'm going to have to go learn a little bit more about it to make sure that provider is written in a way that I'm open to maintaining. So it should be interesting to see how the abstractions hold up once I'm adding in more different things rather than a lot of similar things. We'll see. For almost all of Prism, I've been starting with the API first,

making sure the ergonomics feel right and everything makes sense to use, and then going back and figuring out how to actually implement that. So far that's meant layering up a lot of abstractions, lots of value objects, and then leaving it to the provider classes themselves to take those abstractions and value objects and map them

to whatever the implementation is. And that was a big piece that I was missing from Sparkle. I think the major reason I got stuck on Sparkle was that I didn't add enough abstractions and value objects to be able to pass around to these different providers in a much higher-level fashion. And now, with

a significant amount of abstractions and value objects and stuff, it's not been too bad.

Chris Gmyr (16:49)
Nice, yeah, that's awesome. I'm curious what could happen to that OpenAI spec in the future, with multiple providers and models using it, and whether OpenAI could possibly diverge from it and how quickly everyone else might be able to catch up.

I'm not sure if that will actually happen. But I see it as kind of similar to Amazon S3, where you have other storage services that have an S3-compatible API. And that seems to be a solid model, at least for that example. I just don't know. Especially with a fast-moving company like

OpenAI, I don't know if they'll stick to that, or if there have been agreements between all these different companies and models to keep that API spec. I just wonder how solid that's actually going to be in the future.

TJ Miller (17:56)
Yeah, so that was actually one of the first major refactors to Prism that I did. Initially it was structured almost identically to how cache and cache drivers work inside of Laravel. So you would have the definition for OpenAI, and then inside of the

config file, similar to what you'd see in Laravel, you'd have OpenAI, and nested inside of there you'd have the driver mapped to OpenAI and then the configuration details for the driver. Then you would have Mistral, and Mistral would also use the OpenAI driver, with the same configuration keys to map that driver to Mistral's config. But OpenAI has already diverged and deprecated different fields that

are shared across all of them. For example, max_tokens, I think, had changed or something. One of the configuration values, they deprecated it for their newer series of models. Someone had pointed that out in the pull request. And I'm like, you know, they're going to be moving...

I made the decision that they're just going to be moving too fast, and too much stuff is changing, and I can't guarantee that these other providers are going to keep their APIs up to date. Plus, I think all of the other providers also offer additional functionality that I would never be able to account for inside of this kind of driver pattern. So I decided to back out of that pattern and just make it a one-to-one ratio of

providers to drivers. So I got rid of the driver concept, and now it's all just the different provider classes. Because, yeah, I just don't think that everyone is going to stay on the same page with that spec. And with the shared driver, I didn't have the affordance of custom features that a provider might make available. So yeah, that was a great callout. That was the first major refactor.
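As an illustration of the refactor TJ describes, the before and after config might look something like this. The key names here are hypothetical, for illustration only, not the package's actual config file:

```php
// Before: cache-style driver indirection -- Mistral rides on the
// shared OpenAI-compatible driver, so any OpenAI spec change (like
// deprecating max_tokens) ripples into every provider using it.
'providers' => [
    'mistral' => [
        'driver'  => 'openai',
        'url'     => 'https://api.mistral.ai/v1',
        'api_key' => env('MISTRAL_API_KEY'),
    ],
],

// After: one provider, one class. No shared driver, so each provider
// class can map Prism's abstractions to vendor-specific features.
'providers' => [
    'mistral' => [
        'url'     => 'https://api.mistral.ai/v1',
        'api_key' => env('MISTRAL_API_KEY'),
    ],
],
```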

Chris Gmyr (20:04)
Yeah, totally. And I think how you have it set up now makes it a lot more flexible for those changes in the future. And like you said, if these different models tack on additional functionality or something that you want to tap into within the Prism package, then at least this pattern makes it a lot easier to do that, which is really nice.

TJ Miller (20:24)
Yeah, the reuse, I thought, was really cool, and I was really looking forward to it. I was like, yeah, this is super neat. I want to add Mistral support? Great, I literally only have to add four lines of code and Mistral just works. And I think Groq does the same thing. I think they also implement the OpenAI spec. So I was like, great, I can add another provider with like four lines of code. This is awesome. But then

things start breaking down at a certain point. I can't remember what the values were, but whatever it was, it was something I wasn't going to be able to handle at the abstraction layer. And since they were all using the same driver, that abstraction was gone for each provider. So it was just easier to say: if we're going to spin those up, you can start by just copying and pasting the OpenAI provider.

That gets the ball rolling, and if it's just basic text completion support, which is really all I'm working on right now, then that's really all you have to do. So, yeah, we'll see where it goes. But I think having this big pool of abstractions is key to making the whole thing work.

Chris Gmyr (21:41)
Totally. So yeah, getting back to the image changes in Prism that you have the PR up for, can you explain the workflow of how to use the image option within Prism? Where's that input coming from? What are the current options? Where is this maybe going in the future, and what are the capabilities of the image handling?

TJ Miller (22:07)
Yeah. So there are two ways to introduce prompts into your Prism request. There's the system prompt method, and then there's the prompt method. But instead of the prompt method, you can also send an array of messages, and there's a whole abstraction for messages. So inside of the user message, there's now this additional constructor property called...

What is it now? I've got to go look it up.

I think it's additional content. And inside of additional content there are these different pieces of content that I make available. One's a text content piece, and the other one is now an image content piece. So along with the message text, you can now send an array of, for example, image content pieces.

And one of the things that I wanted to offer was different ways to construct this image based on what you're trying to pass in, to make it flexible. So there is an image-from-path, where you give it a file path. There is image-from-URL, which would be a remote image. And then there's also

image-from-Base64. So if you already have a Base64-encoded image, you can just drop that right in. Under the hood, URLs just get passed as URLs, but from-path and from-Base64 use the Base64 value. So at the end of the day, if it's not a URL, it's getting converted to Base64. Because, as far as I can tell,

that's what all of the models support: either a URL or a Base64-encoded image. So under the hood, most of that stuff gets normalized down to a Base64 value. And someone had also requested adding some ImageMagick and GD image support, so there might be some of that coming too.
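Put together, the API TJ is describing would look something like this. The constructor names (from path, from URL, from Base64) come straight from the episode; the exact namespaces, class names, and surrounding calls are a sketch of the then-open pull request and may have changed since:

```php
<?php

use EchoLabs\Prism\Prism;
use EchoLabs\Prism\ValueObjects\Messages\UserMessage;
use EchoLabs\Prism\ValueObjects\Messages\Support\Image;

// A UserMessage takes the message text plus an array of additional
// content parts -- here, images built via the static constructors.
$response = Prism::text()
    ->using('openai', 'gpt-4o')
    ->withMessages([
        new UserMessage('What code is in this screenshot?', [
            Image::fromPath('/tmp/screenshot.png'),            // read from disk, Base64-encoded
            Image::fromUrl('https://example.com/diagram.png'), // passed through as a URL
            // Image::fromBase64($alreadyEncoded),             // pre-encoded payload, dropped in as-is
        ]),
    ])
    ->generate();

echo $response->text;
```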

So mapping out to this additional content array is, I think, going to be super useful, because it keeps the API really simple for simple things. If you were just creating a user message and didn't need to add images or anything else, then you're just saying new user message and passing in the string of the content. So it keeps it a

really simple API, I think, for 90% of use cases. But when you want to do additional stuff, you can just pass in an array of all these additional things. And I said earlier that this kind of paves the way for file support, because that's going to be pretty much the exact same thing. In this additional content array, I'll add support for a file class, with probably similar static constructors, like file-from-path and,

you know, file-from-Base64, or whatever other file methods make sense. I'm sure the methods will be almost identical at the start: from-path, from-Base64, and from-URL. And then there might be some provider-specific stuff. But this seems flexible enough to handle images, and it also paves the way for more stuff in the future, which is really what I was aiming for.

If we can get down to one abstraction for all the additional content pieces, I think that'd be really good.

Chris Gmyr (25:56)
Yeah, that's really awesome. It definitely seems like it'd be not too big of a lift to go from images to files, to accept PDFs or, you know, Word docs or text files and things like that. And I think you have all the image stuff covered, because I guess what I was thinking of is, if I'm building a CLI, or even wanting to build my own UI, what are the

integration points that I can pull in and use with Prism? And this definitely covers all of it. If I'm building a UI and having the user type in a bunch of prompts and upload a few images, I want to keep that local on the server, or in S3 or whatever. And then basically execute that set of commands, or messages, or files, images, whatever. And that can be kind of batch-pushed to the

package and then up to the AI model, and pull everything back. So I think we've got all the bases covered for whatever you would want to do, which is pretty awesome. Either local or, like you said, URL images, or files in the future.

TJ Miller (27:12)
Yeah, it definitely starts to make the Prism server implementation a bit more complicated, because that's an OpenAI-compatible endpoint to interact with all of your Prisms. It's another abstraction layer on top of Prism, as an API endpoint. And when you work with different UIs... like, I use Open WebUI with Prism server

as part of my daily workflow. So I was starting to add in this image support and testing it with Open WebUI. And since they're kind of built to send directly to OpenAI, I've got to take a step back and undo some of the pre-Base64-encoded things. I've got to do some regex to strip some things off in order to pass it to my abstraction. But that's not always the case, because

you can pass URLs and stuff too. It starts getting really crazy. So I'm going to hold off on Prism server support until I can go back and clean that up and make it something that can handle more complex workflows. Right now it's just super straightforward, and it's not built to handle that kind of complexity. But if I'm adding image support and it's part of my workflow, it's definitely something I'm going to have

to circle around and tackle.

Chris Gmyr (28:36)
Yeah, gotcha. And I guess that's something else that I didn't really grasp fully: Prism server. Why is it there? Why do we need it? Do we need it all the time? And what are the use cases for it? Because I get the basic functionality of the Prism package and interacting with these different AI services. But Prism server itself

has me a little confused about why it's useful in the package.

TJ Miller (29:08)
No, that's a good point. And I think you just pointed out somewhere I can probably go and write some more documentation. Prism server was really kind of born out of the Sparkle package, as a way to easily spin up and chat with the different agents that you're creating without having to build your own API yourself. You can just drop in and use it. But

partially it was just to scratch my own itch for my personal workflows. I now have a handful of Prism agents built up that I use all day. I've got a coding agent. I've got one that's specifically built for creative writing. I've got another one that's just kind of general use, and that one has a handful of different tools attached to it.

I kind of have these different things that I work with, and I needed a UI to interact with them, and I did not want to build my own. And so there's this project called Open WebUI. It originally was the Ollama Web UI, so it was built to be a user interface in front of Ollama, and they still have pretty premier support for Ollama

and their models and the whole ecosystem. But it also works with OpenAI, and I think maybe a couple of other providers too, but I know for sure OpenAI and Ollama. The UI is pretty advanced, and they have a whole suite of tools available inside of it too. So it's significantly more than what I use it for, but I needed a way to interact with these

prisms through a UI. So I figured it'd be an interesting piece of the project to make an abstraction that is an OpenAI-chat-compatible endpoint, where you can register all of your prisms to it, and now they're available to choose just as you would choose a different model to talk to. So all of your prompts work, all of your tools work. And, I don't know, I just thought it would be

an interesting way to be able to spin this stuff up and work with it really fast, because you're able to use existing tools. And because it's an OpenAI-compatible endpoint, you could in theory also use it from elsewhere: if you were building something with Python, you could use the OpenAI client in Python and just use your Prism server endpoint,

since you can override the host name. So instead of sending to OpenAI, you're sending to your Prism server, and all of a sudden that SDK now works with your various prisms. So it's a bit of scratching my own itch, but I figure someone might find it useful. I don't know. It does seem...
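The registration side of Prism server, as TJ describes it, is small: you name a prism and give it a closure that builds the configured request. A rough sketch, with the caveat that the facade, namespace, and method names here may not match the released package exactly:

```php
<?php

use EchoLabs\Prism\Facades\PrismServer;
use EchoLabs\Prism\Prism;

// In a service provider's boot() method: each registered name shows up
// as a "model" in any OpenAI-compatible client pointed at this app --
// e.g. Open WebUI, or an OpenAI SDK with a custom base URL.
PrismServer::register(
    'coding-agent',
    fn () => Prism::text()
        ->using('anthropic', 'claude-3-5-sonnet-latest')
        ->withSystemPrompt('You are a careful pair-programming assistant.')
);
```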

Chris Gmyr (32:15)
Yeah. It sounds like it's almost like a proxy server. It's basically a pass-through: you're deploying the Prism server somewhere, either locally or on the web, and then you can call it as you would any other LLM, but it's passing through the Prism server. And then that figures out what prompts or what

connections it has to the actual LLMs, passes it through, handles the connection back and forth, and brings the response back through.

TJ Miller (32:51)
Yeah, it'll map an OpenAI API request to Prism's abstractions and then pass that to Prism. And then Prism passes it to whatever LLM it's configured for. You know, almost all of mine are set up with Anthropic. So I'm using this OpenAI endpoint and, yeah, basically proxying that OpenAI request

through Prism to Anthropic, but with all of the settings that I set for that particular agent or whatever: I want this temperature, this top_p, this system prompt, and this set of tools available to it. And yeah, I think that's a great way to put it. It's kind of a Prism proxy.

Chris Gmyr (33:42)
Gotcha, okay, that makes a lot more sense to me now. I think it'd be awesome, when you've got things a little more solidified, to do either a live stream or a video, or a multiple-video walkthrough, of how you've set up Open WebUI for your tools, how you've deployed or used the Prism

proxy, and some of the things that you build. So kind of start to finish, soup to nuts: I want to build this thing, this UI, and here are all the tools and steps and the process to do that. I think that would be super awesome once you get a chance and have things a little more solidified in the package.

TJ Miller (34:31)
Yeah, I know there's a video on my YouTube channel that talks about Sparkle server, and that's basically the same implementation. I ported it over and made a few changes just to map over to Prism, but it's mostly the same. So there's a little bit of info on it there. And then I believe in the docs, under the Prism server section, I do have a Docker Compose file for how to

link up a Laravel app to Open WebUI and get all of that running. But yeah, this is really good, because it's making me think about it a little differently, and I think there's definitely some room to explain it a little better. And I really like your mapping it to a proxy, because that's basically what it is: it's proxying an OpenAI API request to Prism

and then forwarding that request on to whatever else. So yeah.

That's cool. Yeah, there wasn't a real vision for it outside of: I needed it for what I use every day, and I might as well throw it out there for whoever else wants to use it too.

Chris Gmyr (35:48)
Yeah, that makes a lot of sense. That's super cool. And it definitely solidifies that in my mind a lot better, too. So for all the apps and UIs that you have set up, are those deployed to the public somewhere? Are you using Vapor or Forge or something like that? Or is it just all local development to use all the tooling that you've set up and the UIs?

TJ Miller (36:12)
Mine's all local based. Because now I've got two machines, right? I've got my personal machine and my work machine, and there's definitely times where I want to chat with my agent. I've got it set up with Cloudflare Zero Trust and everything, so I can access it anywhere, but it's hosted on my laptop. And so that's kind of a pain: I have to make sure my laptop's awake and open and running.

I do have a small server here at my house, and I've been debating whether I want to spend the time, it shouldn't be too much, to get it all packaged up into a container and pushed off to that so I can access it anywhere. But there's times that I also use my Ollama models, and my laptop's more powerful than my server. So if I want to

use my Ollama models, I'm definitely using my laptop. So I've been bouncing back and forth about it, but there's probably going to be a point once I hit 1.0, which we've talked about in a previous episode as being just a solid foundation of text generation. Once I get to that point, I'm probably going to publish an example

Laravel app. And this is something that, I don't know if they still do, but Honeybadger did at some point: they had their Laravel SDK, and then they also had an example Laravel app that you could spin up and see how everything's integrated. And I thought that was really cool. And that's probably something I'll do too, is spin up a Laravel app and sprinkle in some of my tools. And that's something that I even thought about doing at a

GitHub sponsorship level: you hop on at a sponsor level and you get access to basically my private tool set, like, here's the things that I use. I want to reach out to a couple people that I've purchased prompts from and see if I can work out a deal with them to also publish those prompts. So you get not only the tool suite that I use, but

also access to all the premium prompts and everything that I take advantage of. It'll be early access to tools that I'm building and kind of a one-stop shop of, yeah, spin this up and you have instant workflows.

Chris Gmyr (38:46)
Yeah, I think all those are great ideas. I love the public Laravel example application, for a couple of walkthroughs or a few different options, kind of showing off the Prism package within a full Laravel app. And I love the sponsor-based perks and tools and prompts and things like that. And I think you could probably build up

a lot of those prompts, either within the repo or some other tool, to promote the sponsorships as well. I think that's an awesome idea, because I've seen a few of the prompts that you've been working with, and they're just huge text files that make no sense at a quick glance. So being able to have those inside of a library or repo somewhere, and then having use cases for them,

TJ Miller (39:26)
Yeah.

Chris Gmyr (39:39)
even annotating them. You know, this is what this section does, this is what that section does, here's the part that you want to key in on to make your alterations in your own application. I think any of that additional support and hand-holding, for people who definitely aren't as comfortable with the AI tooling and prompts, would be a huge value add for sponsorship for you.

TJ Miller (40:07)
Yeah, it'd be cool to see, like: here's how I use Prism. Here's a suite of agents that I use that are pre-configured for stuff. So here's your creative writing agent, here's your general use agent, here's an agent with this set of tools available to it. Yeah, I just thought that'd be a fun way to expose stuff and get people up and running really quick,

and then also provide a foundation of support for myself so that I can afford to spend more time on Prism too. So it's kind of a happy medium. And I've seen, I think, Nuno do that as well, and that was a big inspiration for the idea: tucking some stuff away behind a little bit of a sponsorship and getting that access out that way.

Chris Gmyr (41:05)
Yeah, I think that's a great idea. Awesome. Yeah, so before we wrap up, anything that we missed with Prism updates, image updates, anything else that you want to share or shout out?

TJ Miller (41:17)
No, big shout out to a handful of contributors. I'm blanking on some names right now, but, you know what, I'm going to go look it up real quick. I threw a shout out on Twitter earlier to Sam slash equip, Matt Glover, Pushpak.

These are a few folks that have been pretty active in contributing to Prism and offering a bunch of feedback on new features that I've been working on, and prototypes. So huge shout out to y'all. Pushpak actually is the one that added Mistral support, and I know they're working on Groq support also. And yeah, it's crazy to see people actually contributing back to the project.

It's super fun. So yeah, huge shout out to y'all for helping me out.

Chris Gmyr (42:17)
Yeah, that's awesome. Sweet. So with that, why don't we wrap up for today? Awesome. Thanks for listening to the Slightly Caffeinated podcast. Show notes and mentioned links are available at slightlycaffeinated.fm. Join us on Twitter or X at slightlycaffepod, and be sure to email us any content suggestions, feedback, or anything to chat about at hey@slightlycaffeinated.fm.

TJ Miller (42:22)
Yeah.

Chris Gmyr (42:43)
And thank you all for listening and we'll catch you next week.

TJ Miller (42:46)
Yeah, thank y'all.

Creators and Guests

Chris Gmyr
Host
Husband, dad, & grilling aficionado. Loves Laravel & coffee. Staff Engineer @ Curology | TrianglePHP Co-Organizer

TJ Miller
Host
Dreamer ⋅ ADHD advocate ⋅ Laravel astronaut ⋅ Building Prism ⋅ Principal at Geocodio ⋅ Thoughts are mine!