Concert Memories, AI Tooling, LLMs, and Custom Workflows
Chris (00:00)
Welcome back to the Slightly Caffeinated podcast. I'm Chris Gmyr.
TJ Miller (00:03)
I'm TJ Miller.
Chris (00:04)
Hey TJ, so what's up in your world this week?
TJ Miller (00:07)
Ooh, yeah, man, this week has been a big week. The first half of the week was just kind of kicking back, taking some downtime with my wife and son, spending a little time together before kicking off the new gig over at Geocodio. At the time we're recording this, it's my second day in, and I am just so excited. I'm even more excited than before starting because...
We're diving into all the fun stuff.

Yeah, I was excited to get my first PR open on day two even. So it feels fantastic. I'm so excited to be there, man. So that's the big news in my world. How about you, man? What's new in your world?
Chris (00:52)
Yeah, I mean, pretty slow week for me. But last weekend I took my son to his first official concert, which is pretty sweet. He's eight, and we've been to see bands and shows at, you know, the different farmers markets or state fair type of things, but nothing at this scale. So we went and saw All That Remains, Mudvayne, and Megadeth.
TJ Miller (01:01)
Ooh.
Chris (01:21)
So quite the first concert out. And he was a little overwhelmed, because there were a lot of people there, probably, I don't know, 30 to 40,000 people towards the end. It's a massive outdoor arena. And we just went all out. I got VIP section tickets, so we got to hang out there, get some drinks. And he got all the popcorn that he could want, because they handed it out for free in there.
TJ Miller (01:30)
Holy cow.
Wow.
Yeah.
Chris (01:51)
Got him a t-shirt, and we were fourth row, left of the stage. So we got to be right up there, just in it. He really enjoyed the show, all the bands. So he wants to go to more concerts very soon and get some more t-shirts and just rock out.
TJ Miller (01:57)
Wow.
Yeah, I imagine, man. What was your first concert?
Chris (02:15)
My first concert was, I forget what year it was, but I think it was sometime in high school, so late 1990s. It was Bon Jovi in New Jersey. I went with my cousins, and whatever the stadium was in New Jersey at the time, it was like the last big show at that stadium before they tore it down. So it was just jam-packed with
TJ Miller (02:26)
Awesome.
Chris (02:42)
people, and I got to hear some cool songs and hang out with some family, so it was a fun time. How about you? What was your first concert?
TJ Miller (02:49)
That's very cool. It seems so on-brand. My first concert was Weird Al. Yeah, that was back in early middle school, so I don't know what year that makes it, but it was quite a while ago. I look back and laugh at it. I mean, I'd still go to a Weird Al concert today. It was just a good time.
Chris (02:59)
Awesome.
Yeah, good. It was great.
TJ Miller (03:18)
It's funny, it's just very on-brand for me, I guess. Man, Mudvayne, that's definitely a concert I'd love to see though. I don't know if I've listened to them in quite a while, but they were on pretty heavy rotation back in high school for sure.
Chris (03:22)
Yeah, totally. Love it.
Yeah, they were one of my favorites. We covered them in a couple of the bands that I played with back in the day. And I was bummed when they took a hiatus and did Hellyeah and a couple of other projects. I saw them a couple of times, which was great, but different. It wasn't, you know, the whole Mudvayne band. But now that they're back together, they're touring, and it was just awesome. And...
TJ Miller (03:53)
Mm.
Chris (04:02)
One thing that I realized when I was there watching: they had a backing guitarist and vocalist at the back of the stage, playing and singing at the same time, doing some growls and supporting vocals and stuff like that. And I thought, that guy looks really familiar. And it was my old singer from one of the bands that I played in back in upstate New York. Yeah, he was a
TJ Miller (04:27)
No way!
Chris (04:32)
guitar tech and stage crew member for all these bands. He did Lamb of God, he did Hellyeah a bunch, so we toured with them. And now he's actually in Mudvayne as a supporting artist at the back. So I'm just like, man, that's cool, that's my friend up there. I haven't talked to him for years, but it was just awesome to see him. I thought, that looks like him, and I looked him up on
TJ Miller (04:49)
Holy smokes.
Yeah.
Chris (05:00)
Facebook again, and yeah, it was him. So that was pretty awesome.
TJ Miller (05:02)
Dude, that's so cool. Supporting growls, I love it. That's fun, man. That's fun. So dude, you want to dive in and talk about some LLMs a little bit?
Chris (05:05)
Yeah.
Yeah, let's do it.
TJ Miller (05:15)
Yeah, so friend of the show Sean Kegel hit us up and suggested that we talk about some different LLMs, different use cases, and maybe our experiences a little bit. So really appreciate him reaching out and hurling a topic our way.
Chris (05:36)
Yeah, totally. And definitely, if anyone else has any topics, feel free to reach out on Twitter or via the email we'll share at the end of the episode, or just reach out via the website. We'd be happy to answer any questions or take any topics.
TJ Miller (05:48)
Yeah.
For sure, man. I think it's a given that I've kind of run the gamut on a whole bunch of these. But I'd love to hear a little bit about your experience and some of the tools and workflows that you use. Then I think we can talk about it a little more broadly, but I'd love to get your take and a little bit about your experience.
Chris (06:20)
Yeah, totally. So mine, I would say, is very basic at this point. I've pretty much used just the UI tools. So jumping into ChatGPT, and recently I started tinkering with Claude, and then I have GitHub Copilot enabled in PhpStorm, and a few other things. So I've typically used either the standalone tools, and there are a bunch of others out there, like
Gemini from Google and Midjourney for images and things like that. And then there are tools that have AI features integrated. So Google Docs has Gemini. Notion has a bunch of AI stuff. I use Grammarly. The podcast app that I listen with is called Snipd, and that has AI features built into it. So you can enable AI on a specific episode,
and it'll automatically generate a transcript if there isn't one already, do smart chapters, and let you highlight, and then that highlight will go into my other content systems that we'll probably talk about some other time. So I've used the standalone and the integrated tools, but I haven't taken the next step into custom AI solutions, which I know is really big in companies.
TJ Miller (07:39)
Mm-hmm.
Chris (07:43)
So if you're a medical company, maybe you're looking at medical diagnosis via AI, or taking an image upload and seeing if you can diagnose some sort of issue on the skin or the face, or going through medical charts, things like that. There are lots of ways people could build their own custom AI solutions. And I feel like that's where you and Sparkle come in:
how to choose between these LLMs, the different options, and what all this prompt engineering stuff is that makes the AI's output more solid and more useful. So yeah, I'll pause there and throw it over to you, if you want to add onto anything I said or mention any of the tools that you use.
TJ Miller (08:23)
Yeah.
Yeah.
Yeah, you touched on a ton of awesome pieces along the way there. And I love the way you broke down the categories. There are the standalone interfaces, which have some integrations built in depending on what options you choose, like ChatGPT, Claude, Gemini, Midjourney. There's a handful of them out there. There's Grok through Twitter; that's another pretty good one. Another standalone, sort of a standalone plus, I don't know. Then there's a whole suite of third-party user interfaces out there, and there are a couple for Mac I can think of off the top of my head, like BoltAI and Friday.

One that I really like a lot is Open WebUI. So there are all these different interfaces that connect to a lot of these standalone models and give you some level of enhancement. Open WebUI in particular has a whole suite of solutions you can build into it and really customize. So there are a whole bunch of different ways to access those standalone models too.

And then we're seeing a ton of AI integrations into existing tools. You mentioned Notion, Grammarly, Google Docs and Slides with Gemini. And dude, Snipd sounds so cool; I'm going to have to check that one out. And then along with Copilot, there's a solution that I like to reach for sometimes.
It's for VS Code: the Continue code assistant plugin. The really cool thing about that is it allows you to leverage solutions like Ollama, which lets you run local, offline, private models. So if you're using VS Code, that's definitely something to take a look at and dig into.

Yeah, on a day-to-day basis at this point, I'm using Claude for just about everything. Claude's from Anthropic, and I use it both through their user interface, which is super cool, especially on the code generation front, and through their API, for a lot of the things that I build for myself with Sparkle right now. I've just found I'm a little preferential to the output. Something I posted on Twitter a couple days ago was comparing the output from GPT-4, from the ChatGPT side, and Claude's output, and you'll notice that ChatGPT's response is like a sentence or two, while out of Claude you're getting a whole paragraph with all sorts of extra details. So I'm a little preferential to Claude's output at the moment.
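For anyone who wants to see what running a model locally actually looks like, here's a minimal sketch against Ollama's local HTTP API. It assumes Ollama is installed and running on its default port, and that a model has already been pulled; "llama3" is just an example model name.

```php
<?php
// Minimal sketch: prompt a local model through Ollama's HTTP API.
// Assumes Ollama is running locally (default port 11434) and a model
// such as "llama3" has already been pulled with `ollama pull llama3`.

$payload = json_encode([
    'model'  => 'llama3',            // example model name
    'prompt' => 'Explain tokens in one sentence.',
    'stream' => false,               // return a single JSON response
]);

$ch = curl_init('http://localhost:11434/api/generate');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_POSTFIELDS     => $payload,
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// The completion text lives under the "response" key.
echo $response['response'], PHP_EOL;
```

Nothing leaves your machine here, which is the whole appeal of the local, private model setup TJ describes.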
Chris (11:41)
Yeah, that's really interesting. Did you start with ChatGPT and only recently move to Claude? Or did you use both of them, prefer ChatGPT, and now you're preferring Claude? What was your journey between them, to where you now realize you prefer Claude? Because I'm just wondering how these models have changed, either for the better or the worse.
TJ Miller (12:09)
Yeah, things have changed a lot. I've been really into AI for, I'd say, maybe the last two-and-a-half-ish years. I got involved maybe a couple months before ChatGPT came out, and at the time it was using their GPT-3.5 model. And I think even just today they released a new series of models, their o1 series. So they're constantly upgrading and releasing new models behind the scenes for ChatGPT. With ChatGPT you can use GPT-4, GPT-4o, and now, I think, the new o1 models. So you have some options in there. But for the longest time, the models coming out of OpenAI were really, I think, at the forefront of where all the different models were at. And when Anthropic and Claude came out, they were great, but the responses just weren't at the same quality as ChatGPT for a little while. Then Claude and Anthropic started releasing new models on their end too, and their most recent set of models, their 3.5 series, I think is
some of the best stuff out there right now. So it's kind of evolved as the models have evolved and as these companies have put out different models with different sets of capabilities. That's the other piece of it: the capabilities. Somewhat recently, Anthropic added function calling to their API. That's where you can define a callback and a JSON schema for how the model should respond, and pass those things to the callback; this is the big centerpiece of Sparkle. That wasn't available in Claude for a long time, so I was kind of relegated back to using ChatGPT for the most part, because that's where all the tooling fit in. But now that Claude supports it, I've shifted back toward it, not only because I like the output from the new models, but because they've now got feature parity for a lot of the stuff that I'm using.
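To make the function-calling idea concrete, here's roughly what a tool definition looks like against Anthropic's Messages API: a name, a description, and a JSON schema for the input, with the model replying with a tool_use block that you hand to your own callback. To be clear, this is a hand-rolled sketch, not Sparkle's actual API, and the weather tool is purely illustrative.

```php
<?php
// Sketch of function calling ("tool use") with Anthropic's Messages API.
// A tool is a name, a description, and a JSON schema; the model replies
// with a tool_use block that you dispatch to your own callback.

$request = [
    'model'      => 'claude-3-5-sonnet-20240620',
    'max_tokens' => 1024,
    'tools'      => [[
        'name'         => 'get_weather',           // illustrative tool
        'description'  => 'Get the current weather for a city.',
        'input_schema' => [
            'type'       => 'object',
            'properties' => ['city' => ['type' => 'string']],
            'required'   => ['city'],
        ],
    ]],
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather in Grand Rapids?'],
    ],
];

$ch = curl_init('https://api.anthropic.com/v1/messages');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'x-api-key: ' . getenv('ANTHROPIC_API_KEY'),
        'anthropic-version: 2023-06-01',
    ],
    CURLOPT_POSTFIELDS     => json_encode($request),
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// When the model decides to call the tool, the content array contains a
// "tool_use" block whose "input" matches the schema above.
foreach ($response['content'] as $block) {
    if ($block['type'] === 'tool_use' && $block['name'] === 'get_weather') {
        $city = $block['input']['city'];  // hand this to your own callback
        echo "Model wants the weather for: {$city}", PHP_EOL;
    }
}
```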
So it's ever evolving, and I find different use cases where different models work best. If I want really concise output, I'll probably pick ChatGPT. If I want something with a little more flavor that goes a little more in depth, I'll probably pick the newer model from Anthropic, Claude 3.5 Sonnet, which is kind of their flagship model right now. And that's been fantastic.
Chris (15:06)
Awesome. Now, you mentioned UI versus API. At least at the time of this recording, what do you have to do to open up API access to these AI tools? Is that free across the board? I know there are pro or paid versions of ChatGPT and Claude, and I'm assuming some others, but those are the only ones I've used. Are you able to use the API without paying for it? Or is that something you have to jump into and pay for, a very pro, paid feature?
TJ Miller (15:44)
Most of the providers will offer a small bucket of free credits to use their API off the bat, to get used to it and start playing around with it. But for pretty much all of the API stuff, you've got to pay for your usage, and they typically charge by the token. Tokens are either whole words, if they're short, or parts of words, so a single word might be made up of a few different tokens. A token is kind of the smallest unit of completion that comes out of an LLM. So you typically pay by the token, or by the thousand tokens; you pay depending on your usage, as opposed to the UIs, which are subscription based. I know ChatGPT and Anthropic, I think both of them are each like $20 a month. So not super crazy.
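To put some rough numbers on "pay by the token": pricing is usually quoted per million tokens, with separate rates for input and output. The rates in this sketch are made-up placeholders, not any provider's current pricing; only the arithmetic is the point.

```php
<?php
// Back-of-the-envelope API cost estimate. Prices are per million tokens,
// and the figures below are placeholders, NOT current provider pricing.

function estimateCost(int $inputTokens, int $outputTokens,
                      float $inputPerMillion, float $outputPerMillion): float
{
    return ($inputTokens  / 1_000_000) * $inputPerMillion
         + ($outputTokens / 1_000_000) * $outputPerMillion;
}

// e.g. a ~2,000-token prompt that produces an ~800-token reply,
// at hypothetical rates of $3.00 / $15.00 per million tokens:
$cost = estimateCost(2_000, 800, 3.00, 15.00);

printf("Estimated cost: $%.4f\n", $cost); // => $0.0180
```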
But what you get from using their UIs, and what you don't see behind the scenes, is that they're applying a ton of prompt engineering. Every time you send a message to ChatGPT, they're also including a page or two of additional prompting, additional text and context, that gives it that personality you interact with.
Chris (17:22)
Interesting. I didn't even know they sent that much information along with it. I don't know if it's on the free tier or the paid one, but I know there are some options in there for adjusting that default prompt. So you can say, always interact with me as a PHP developer with 20 years of experience, something like that. Or, I'm a third-grade school teacher, always respond with content I can use in my class, that type of thing. But I didn't know they were including all of that behind the scenes for you by default. That's really interesting.
TJ Miller (18:03)
Yep. Yeah. So if you ask ChatGPT to introduce itself and get it to talk about itself a little bit, you'll start to see a lot of that prompt engineering shine through, as opposed to sending that same request straight to the API. If you're interacting with the API, then you're responsible for all of the prompt engineering that goes into it. There's a certain level of that "personality", with finger quotes, that's just trained into the model through its base training, but there's a significant amount of prompt engineering that goes into the interactions when you're using their UIs. Plus, you get a whole bunch of neat tooling. I don't know if you've interacted with the Claude UI and its code snippets right now; it's got this fly-out window, and it'll put all the code snippets in there. It's really, really cool to interact with their UIs, and they're trying to be at the forefront of the different capabilities you can provide through user interaction. And there are a couple of repos floating around where people have reverse engineered and collected these base prompts, so you can explore how OpenAI does its prompt engineering for ChatGPT. I don't know how current the ones out there for Claude are, but they actually published a paper on their initial set of prompts, which was a really cool read. And that's something I used for a good while, because it's just a really good prompt.

Yeah, that's kind of the big difference between using their UIs and the APIs.
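When you do go straight to the API, that "personality" layer is yours to supply. With OpenAI's chat completions endpoint, for example, it's the system message; here's a minimal sketch, where the persona text is just an example stand-in for the much longer hidden prompts the hosted UIs use.

```php
<?php
// Minimal sketch: supplying your own "personality" via a system message
// on OpenAI's chat completions API. The persona text is just an example;
// hosted UIs like ChatGPT layer a much longer hidden prompt like this.

$request = [
    'model'    => 'gpt-4o',
    'messages' => [
        ['role' => 'system',
         'content' => 'You are a senior PHP developer with 20 years of ' .
                      'experience. Answer concisely, with code where helpful.'],
        ['role' => 'user', 'content' => 'When should I reach for a generator?'],
    ],
];

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
    CURLOPT_POSTFIELDS     => json_encode($request),
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo $response['choices'][0]['message']['content'], PHP_EOL;
```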
Chris (19:59)
Nice. That's really awesome.
TJ Miller (20:01)
So along with that, prompt engineering, I think, is probably the most important part of interacting with LLMs. If you've spent any time working with any of these models, you'll see that even adding a sentence, or just rearranging how you're phrasing things, can get you drastically different output: more accurate, less accurate, more personality, less personality.

And there are a whole bunch of prompt engineering strategies and frameworks. One of the frameworks I like a lot is the CO-STAR framework. CO-STAR stands for Context, Objective, Style, Tone, Audience, and Response, and when you're prompting an LLM you want to supply a piece of each of those. The context is where you set the stage for the AI. The objective is where you get really clear about what you want the AI to do; this is great for generating content. The style is something like, I want it to write like this specific writer, or act as a business guru, or a friendly neighbor. The tone is kind of the whole vibe of the thing. The audience is who it's for; I think you touched on this: is it for a PHP developer with 20 years of experience, or for a third-grade class? And finally, you can shape the response: I want it as a bullet list, or paragraphs. We're getting a lot more advanced now with being able to choose structured output too, so you can say, I want the response in this JSON format, or I want Markdown, or this is for an email, those kinds of things. So there are whole frameworks around prompt engineering.

Then there are other strategies, like what they call zero-, one-, or two-shot prompting, where each "shot", in finger quotes, is an example you include of what you want the output to look like. If we're generating code, I'll normally provide sample code of what I expect to come out the other end. If I've written a couple classes and I really like the way they look and they're structured, how the methods are broken down, the naming conventions, I'll provide that code as context, as examples of: this is what I expect you to output. Because all of this is just guiding and nudging the LLM toward giving you the response you want. At the end of the day, these are all just auto-complete engines on steroids, so you're just trying to guide it to auto-complete into what you want.
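As an illustration, here's how a CO-STAR-shaped prompt with a single "shot" appended might be assembled. The subject matter is invented; covering each of the six pieces is the point.

```php
<?php
// A CO-STAR-shaped prompt with a single "shot" (example) appended.
// The subject matter is invented; covering Context, Objective, Style,
// Tone, Audience, and Response is the point.

$exampleOutput = "- Ship v2.1 on Friday\n- Alex owns the changelog";

$prompt = <<<PROMPT
# CONTEXT
You are summarizing a rambling voice-note transcript from a developer.

# OBJECTIVE
Condense the transcript into a short bullet note.

# STYLE
Plain, direct language. No filler.

# TONE
Friendly but businesslike.

# AUDIENCE
A busy developer skimming their own notes a week later.

# RESPONSE
A Markdown bullet list only, one bullet per distinct idea. For example:
{$exampleOutput}

# TRANSCRIPT
Okay so I was thinking we really need to fix the webhook retries...
PROMPT;

// $prompt is ready to send as the user message of any chat-completion
// call, like the earlier sketches.
echo $prompt, PHP_EOL;
```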
Chris (23:26)
Yeah, totally. And that could be not only code examples, but even, hey, here are a couple of the last emails I sent that I really like the sound of; use these as a base, then adapt it for this other person in the same style. Or if I want to tweak a blog post: hey, here are my last three blog post URLs, read those and keep the same tone and energy and reading level in the prompt. It just gives even more of that context, and hopefully gets you closer to the result you want. You still might need to adjust it a little, but at least it'll be a lot closer than just saying, write me a blog post about XYZ. A very generic prompt is going to give you a very generic response.
TJ Miller (24:23)
Yep. Growing up, my parents always drilled "garbage in, garbage out" into my head, and that is just so, so dang true with LLMs. If you've got a real junky prompt, you can't expect a whole lot out of your response. But the more examples and the better context you provide, and the more you guide it toward what you want, the better outputs you're going to get, for sure.

And then we can talk a little bit about tooling too. One of my favorite workflows I've put together is a Telegram bot. Telegram's awesome for doing all sorts of stuff with bots, because it's super easy to integrate but really powerful. So I wrote a Telegram bot, and I send audio notes to it all the time, because I'll be out walking around just thinking of something, so I'll throw it in an audio note. And then I've got a pipeline of Sparkle agents. The first agent takes that audio file and uses AI to transcribe it. The second agent takes that transcription and turns it into a bullet note. The next two both extract things out of that bullet note: one extracts any decisions that were made, and the other extracts any action items. And the last step of the pipeline just makes sure everything's formatted correctly, then sticks it in a Notion note and puts it in a database, so I can go back and reference it later. And then you can use Notion's AI on top of all of that too. So you can keep chaining all these different tools together to make some pretty powerful workflows. That's something I use on a day-to-day basis.
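Here's a rough sketch of that pipeline shape. To be clear, this is not Sparkle's (or Telegram's) actual API; it's plain closures wired in sequence, with a hypothetical llm() helper standing in for whichever chat-completion call you prefer, like the earlier sketches.

```php
<?php
// Rough shape of the audio-note pipeline: transcribe -> bullet note ->
// extract decisions -> extract action items -> format -> store.
// Not Sparkle's actual API: just plain closures run in sequence, with a
// hypothetical llm() helper standing in for any chat-completion call.

function llm(string $prompt): string
{
    // ... call your model of choice here and return the completion ...
    return '(model output)';
}

$pipeline = [
    // 1. Speech-to-text. OpenAI's /v1/audio/transcriptions endpoint
    //    (Whisper) is one option; stubbed here to stay self-contained.
    'transcribe' => fn (array $note) => $note + [
        'transcript' => '(transcribed text of ' . $note['audio'] . ')',
    ],

    // 2. Condense the rambling transcript into a bullet note.
    'bullets' => fn (array $note) => $note + [
        'bullets' => llm("Turn this transcript into a concise Markdown"
            . " bullet note:\n{$note['transcript']}"),
    ],

    // 3 & 4. Pull decisions and action items out of the bullet note.
    'decisions' => fn (array $note) => $note + [
        'decisions' => llm("List any decisions made:\n{$note['bullets']}"),
    ],
    'actions' => fn (array $note) => $note + [
        'actions' => llm("List any action items:\n{$note['bullets']}"),
    ],

    // 5. Final formatting pass, then persist it somewhere searchable,
    //    e.g. create a page via POST https://api.notion.com/v1/pages.
    'store' => function (array $note): array {
        $note['page'] = "## Notes\n{$note['bullets']}\n\n"
            . "## Decisions\n{$note['decisions']}\n\n"
            . "## Action items\n{$note['actions']}";
        // ... send $note['page'] to Notion here ...
        return $note;
    },
];

// Run an incoming Telegram audio note through each stage in order.
$note = ['audio' => 'voice-note.ogg'];
foreach ($pipeline as $stage) {
    $note = $stage($note);
}
```

The nice property of this shape is that each stage only needs the output of the previous one, which is exactly why it was easy for TJ to bolt on two more extraction steps later.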
Chris (26:15)
Yeah, that's really cool. I definitely haven't brought AI into my full workflow and day-to-day actions; it hasn't made that much of a connection yet. I'm trying to force myself, as I think about it, to say: put these couple of bullet points into Claude or ChatGPT, and given this, here's the output I want, summarize these things into two or three sentences, that type of thing. Some code examples here and there. But what you just described takes it to the next level: actually building a lifestyle tool, an assistant for you and your specific use case. Because you're a power user of Telegram, you know all the ins and outs of these AI tools, and you're also a power user of Notion, you used all those connections to build a tool that seamlessly integrates into your life. It does these things when you're not at the computer and just have a mobile device, and it's sitting there for you when you get back, so you can do whatever you need with it, or, like you said, roll it into some other AI workflow or use it in a database later. So I think that's really interesting. Can you walk us through how you came up with that idea: why Telegram, why these multiple prompts, why that whole workflow? What were you running into before having this, and how have some of these workflows and tools helped solve those issues, or the things you wanted to improve?
TJ Miller (28:06)
Yeah, in full transparency, I've got super severe ADHD, and that was a huge catalyst for me getting into LLMs in the first place. I stumbled across them as the hype was building, and I saw so much opportunity to build tools to augment and sort of sidestep some of the downsides of having ADHD. This was really born out of that. I've got ideas all the time, and a lot of times I think so much faster than I can type. So I got into using Apple audio notes to just record things, but those aren't searchable, and I wasn't going to sit there trying to remember which audio note had what thing and end up listening to a bunch of audio notes; it's just not super effective. So I was sitting around thinking: I use Notion as my sort of personal knowledge management system, so it'd be great if this stuff lived in there, where it would be automatically searchable. And I know that AI does really good transcription, and I prefer bullet notes. So it was: how can I take a bunch of preferences of mine and put them all together in one go? It evolved out of a couple of different iterations, but it was just a bunch of tools and things I already used, and I figured there's got to be a way to wire it all up together.

The first iteration was really just: transcribe this and shove it in a Notion page. That was nice, but I found myself wishing it was a little more concise, and I really prefer bullet notes, so the next iteration was: turn this into a bullet note. But then it's not just audio notes; I could take the audio from a Zoom call I was on, and at that point I've got this nice bullet note, but coming out of a Zoom call I'm going to have action items, or certain decisions were made in that call. So I thought, all right, how do I keep doing that without putting in additional work on my own? Well, I'll just extend the pipeline, give it two more steps: given this bullet note, find all the decisions that were made and pull those out into their own section, and do the same with action items. And so after a call, or if I've just been rambling into Telegram for a while, I can come back and find out. There have been times where it's pulled out action items that I wasn't even sure were action items from a note I was making to myself, like, hey, you might want to do X, Y, Z. And I'm like, that's genius. I would love to do those things. That's exactly what I should have done, but I didn't necessarily see the action item in it initially, and it was able to tease those things out of the transcription. So it's been super helpful, but it's been one step at a time, just assembling tools that I already knew and things that can be "toolified", if that's a word. Notion's got a ton of APIs and makes that really available. At one point I was actually trying to switch from Notion to Obsidian, but that broke this workflow entirely, and it had become something I was using a lot, so it actually forced me back into sticking with Notion, because I can use those APIs to build a whole bunch of different tools and create different types of interactions. So some of my decisions around the tools I use are also made based on what I can build with them.
Chris (32:31)
That's a really good iterative approach to how this tool came to be, and I think it's maybe a really good starting point for anyone else who wants to do something similar: just get the basics up and running, use what AI gives you easily out of the box, transcribing, summarizing, and pushing that somewhere, then see if it suits you and go to the next step. So yeah, I think that's really interesting. And is it just a one-way push through the system right now? You do an audio recording, it pushes through the AI models, and it eventually lands in Notion. Or can you now start asking the Telegram bot about what you have in Notion, or about past audio files or conversations?
TJ Miller (33:28)
I haven't gotten that far. That's definitely something I'd like to do, and the tooling and technology definitely exist for it. Part of why I haven't built it yet is that I haven't gotten that far in building those types of tools for Sparkle. I've been shifting off all other tools to dogfood my own stuff, and I haven't built the tools to do that yet. That was one of the reasons to push it into Notion, though: not only is Notion capable of that, but you automatically get some level of search functionality out of it. But I would love to build that reverse integration, where I could go back to the Telegram bot and say, I think I was talking about something last week to do with this, what was that? And it would go back and find it, or at least surface a link back to the Notion page: you mentioned something about that here, or something similar here, maybe go take a look at these. I think that'd be awesome. And it's well within the capabilities of LLMs. Because we haven't hit AGI, these things lack all common sense, so you've got to stick to the realm of things LLMs are actually capable of doing and good at, and this kind of thing happens to be right up an LLM's alley. So yeah, I'd love to do that. We'll get there; we're getting there. The base functionality exists in Sparkle, now it's just a matter of building all of the tools to plug into it.
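For what it's worth, the "surface a link back" half is largely covered by Notion's public search endpoint already; here's a minimal sketch, assuming an integration token that can see the workspace's pages and a hypothetical search phrase.

```php
<?php
// Minimal sketch of the "find that note again" half using Notion's
// search endpoint. Assumes an integration token that can see the pages.

$request = json_encode([
    'query'  => 'webhook retries',    // whatever fragment you remember
    'filter' => ['value' => 'page', 'property' => 'object'],
]);

$ch = curl_init('https://api.notion.com/v1/search');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv('NOTION_API_KEY'),
        'Notion-Version: 2022-06-28',
    ],
    CURLOPT_POSTFIELDS     => $request,
]);

$results = json_decode(curl_exec($ch), true);
curl_close($ch);

// Surface links back to the matching pages.
foreach ($results['results'] as $page) {
    echo $page['url'], PHP_EOL;
}
```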
Chris (35:17)
Yeah, totally. Well, this definitely gives me a lot more things to look into on the AI front and the tooling front, because I can definitely see myself setting up some little side projects to build some of these tools. I haven't been one for audio notes, but I could definitely see the benefit, because I'm always listening to things and walking around with the phone, outside of the space where I have my computer, and I'd love to just jot something down. And like you said, speaking is a lot faster than typing in a lot of ways. If you're not a super fast typist, your brain is going on to the second, third, fourth item, and it jumbles what you're trying to type and get through your fingers onto the page, where the audio approach could be
TJ Miller (36:04)
Mm-hmm.
Chris (36:15)
a lot faster and easier for that workflow.
TJ Miller (36:18)
And there's something to be said about just being able to ramble into an audio note and have something that makes some level of sense come out the other end. Because I like to think out loud, so I'll be thinking out loud over the course of an audio message, and I may start somewhere and end up somewhere totally different, even changing my opinion as I'm rambling. So for it to be able to make sense of a bunch of me rambling, train of thought, out loud, is super helpful. But I've also found that tool to be really helpful for recording Zoom calls and coming out the other end with, here's a little recap of our meeting in the form of a bullet note, calling out people's action items. It ended up being super helpful in that context too, and that's not even what I planned to use it for.
Chris (37:17)
Yeah, totally. Lots of use cases for that. So, looking forward to it.
TJ Miller (37:21)
So before we wrap things up, I wanted to talk about Try Hard Studios. They just announced their Mastering Postgres course.
Chris (37:36)
Yeah.
Yeah, this looks awesome.
TJ Miller (37:40)
I did not expect to get emotional watching an announcement video for a database course.
Chris (37:48)
Yeah, it looks like a full-blown Hollywood trailer for an amazing movie, and it's for a database course, which is also amazing, but in a totally different way.
TJ Miller (37:58)
Yeah, yeah, I've not had a ton of experience with Postgres. And honestly, almost all of my experience with Postgres has been on the vector database front, which is relatively new. So I don't even have a ton of experience using it as a relational database. How about you, man?
Chris (38:18)
Only a little bit of experience so far. In one of my packages, I had to support a few special cases for Postgres, so I pulled that in. And then at work, we preferred Postgres for some of our newer services, because we handle a lot of webhooks and JSON data, and a few other things that are available in Postgres, or a little more solid in Postgres, especially with the infrastructure we have. So we pulled the trigger on that, and it seems to be going well so far, but we definitely haven't gotten into the inner workings of it, tweaking all of the settings and figuring out how to make it even more performant or better for our use cases. So I'm definitely looking forward to this course and picking it up on, I guess, October 15th. So in a few days it'll be out.
TJ Miller (39:10)
Yeah, that's crazy. I cannot believe how fast they put that together, coming right off the heels of their SQLite course. I'm super impressed at how fast they were able to put it together, so I'm really looking forward to it. I'm specifically looking forward to learning more about views. That's something I've always been really intrigued by but never really had an excuse to dive too far into. So that's one piece that I'm especially looking forward to.
Chris (39:46)
Yeah, so we'll look forward to that.
TJ Miller (39:48)
Well, cool man, you want to wrap things up?
Chris (39:50)
Sure, let's wrap it up. Thanks for listening to the Slightly Caffeinated podcast. Show notes are at slightlycaffeinated.fm, with all the links and resources we've talked about. If you want to reach out or share on Twitter or X, look for Slightly Caff Pod on there. And if you want to email us with any feedback, suggestions, talking points, anything you want to hear on the podcast, you can reach us at hey@slightlycaffeinated.fm. So yeah, thanks TJ for the conversation, and we'll catch everyone next week.
TJ Miller (40:30)
Yeah, thanks Chris, we'll see you.