
How Setronica Created an AI Slack Bot, Part 2: From Code to Conversation

Welcome back to our three-part series on creating Setronica’s AI Slack bot! In Part 1, we explored the initial stages of our project, from brainstorming to planning. Now, we’re diving into the heart of the matter: the development phase. This is where our ideas start taking shape in code, and our bot begins to come to life.

Setting the stage

Before we could start coding, we needed to ensure our development environment was primed for success. Here’s a quick reminder of how we set things up:

 

The core of the project is built on Spring Boot, a framework that simplifies the creation of web applications in Java. The project also incorporates Spring Web for building RESTful APIs, along with several API clients for interacting with external services like Slack, OpenAI’s ChatGPT, and Google Gemini.
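For context, here is a minimal sketch of what the application entry point looks like in a standard Spring Boot project; the class and package names are illustrative rather than taken from our codebase:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Entry point of the bot; Spring Boot starts the embedded web server and
// scans the packages below this class for controllers, services, and configs.
@SpringBootApplication
public class SlackAiBotApplication {

    public static void main(String[] args) {
        SpringApplication.run(SlackAiBotApplication.class, args);
    }
}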

Core development

We started by setting up the project structure, ensuring that each component was well-organized and easy to navigate. The main Java directory was divided into several packages: configuration, models, services, controllers, and utilities. This organization was crucial for maintaining clarity and efficiency throughout the development process.
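To give a sense of the layout, a hypothetical directory tree could look like this (the base package name is illustrative):

src/main/java/com/setronica/slackbot/
    configuration/   SlackConfig, ChatGPTConfig, GeminiConfig
    models/          SlackMessage, ChatGPTResponse, GeminiResponse
    services/        SlackService, ChatGPTService, GeminiService
    controllers/     SlackController, ChatGPTController, GeminiController
    utilities/       MessageFormatter, ResponseHandler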

 

The configuration package was the first step, where we set up connections to various APIs. This involved creating SlackConfig, ChatGPTConfig, and GeminiConfig files, each containing the necessary credentials and URLs for their respective services. These configurations were essential for initializing API clients within the services, ensuring seamless communication with external platforms.
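As an illustration, one of these configuration classes could look roughly like the sketch below; the property names and the choice of RestTemplate are assumptions, not a copy of our actual config:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

// Binds the ChatGPT credentials and base URL from application.properties
// (property names are illustrative) and exposes an HTTP client bean
// that the service layer can use to call the API.
@Configuration
public class ChatGPTConfig {

    @Value("${chatgpt.api.key}")
    private String apiKey;

    @Value("${chatgpt.api.url}")
    private String apiUrl;

    @Bean
    public RestTemplate chatGptRestTemplate(RestTemplateBuilder builder) {
        return builder
                .rootUri(apiUrl)
                .defaultHeader("Authorization", "Bearer " + apiKey)
                .build();
    }
}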

 

Next, we focused on the models, which represented the data structures used in the application. By creating classes like SlackMessage, ChatGPTResponse, and GeminiResponse, we ensured that data from Slack, ChatGPT, and Google Gemini APIs could be handled effectively.
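A minimal sketch of two such models, assuming a recent Java version where records are available; the fields shown are illustrative, not the full Slack or OpenAI payloads:

// Incoming Slack message, reduced to the fields the bot actually needs.
public record SlackMessage(String channel, String user, String text) { }

// Text extracted from a ChatGPT completion, plus the model that produced it.
public record ChatGPTResponse(String content, String model) { }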

 

The heart of the project lay in the service layer, where we implemented the core logic. SlackService was designed to handle sending and receiving messages from Slack, while leveraging ChatGPTService and GeminiService to process these messages through external APIs. Each service was responsible for sending requests, receiving responses, and managing any errors that occurred during these interactions.
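A simplified sketch of how SlackService might delegate to the AI services; the ask() method and the provider switch are assumptions made for illustration:

import org.springframework.stereotype.Service;

// Routes an incoming Slack message to the selected AI backend and returns
// the reply text to post back to the channel.
@Service
public class SlackService {

    private final ChatGPTService chatGptService;
    private final GeminiService geminiService;

    public SlackService(ChatGPTService chatGptService, GeminiService geminiService) {
        this.chatGptService = chatGptService;
        this.geminiService = geminiService;
    }

    public String handleMessage(SlackMessage message, String provider) {
        try {
            return "gemini".equalsIgnoreCase(provider)
                    ? geminiService.ask(message.text())
                    : chatGptService.ask(message.text());
        } catch (Exception e) {
            // The individual services handle their own API errors;
            // this is a last-resort fallback for anything unexpected.
            return "Sorry, something went wrong while contacting the AI service.";
        }
    }
}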

Controllers acted as intermediaries between user inputs and the services. We developed SlackController to process incoming Slack messages and coordinate with SlackService for managing responses. Similarly, ChatGPTController and GeminiController were responsible for handling requests to their respective APIs.
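For example, SlackController could expose an endpoint along these lines; the path and the provider parameter are illustrative:

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Accepts Slack messages over HTTP and delegates the work to SlackService.
@RestController
public class SlackController {

    private final SlackService slackService;

    public SlackController(SlackService slackService) {
        this.slackService = slackService;
    }

    @PostMapping("/slack/messages")
    public String onMessage(@RequestBody SlackMessage message,
                            @RequestParam(defaultValue = "chatgpt") String provider) {
        return slackService.handleMessage(message, provider);
    }
}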

 

Finally, we created utility classes like MessageFormatter and ResponseHandler. These utilities were crucial for formatting API responses for Slack and managing error logging, ensuring that the bot operated smoothly and efficiently.
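As a small illustration, a formatter along these lines might adapt AI output to Slack’s mrkdwn syntax; the exact rules shown are assumptions:

// Prepares an AI response for posting to Slack.
public final class MessageFormatter {

    private MessageFormatter() { }

    public static String toSlackMarkdown(String text) {
        if (text == null) {
            return "";
        }
        // Slack uses single asterisks for bold, while the APIs often return
        // Markdown with double asterisks.
        return text.trim().replace("**", "*");
    }
}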

 

So, this is an example of a conversation with our Slack AI bot:

[Screenshot: AI bot conversation]

Testing

After internal testing, we released the bot to a small group of volunteers, ranging from hardcore developers to content managers. The goal was to check how the bot worked for different purposes: writing code, composing SEO titles, or simply translating HR documents.

 

The overall feeling was positive:

 

“It works, and it is convenient to integrate two systems in one interface. It has given us more inclusivity in using AI systems, as it is a unified and truly cost-effective mass solution with two leading competitor bots. Some functionality needs to be added and errors corrected, and it will be perfect.”

 

Speaking of errors, we got a bunch in the first round of testing:

"When you choose ChatGPT, it doesn’t translate your text correctly. It stops after the first line and refuses to continue the rest of the paragraphs if they were written with the ‘tab’ button."

"Bot answers are scarce compared to the web version of ChatGPT."

"You need to choose between ChatGPT and Gemini every time you send a message. Plus, the bot doesn’t mark in the answers which AI it used."

"Gemini adds a lot more text into the translation than there was in the original version."

"Gemini is too ‘creative’ when it has to be precise."

But these were good errors. They meant we were on the right track and had something cool in the making. All we needed to do was fix the bugs and add more features.

What’s next?

In the final part of our series, we’ll explore the implementation phase, focusing on bug fixes, scaling the bot for larger teams, and introducing new features. Stay tuned to learn how we’re taking our bot from a local development environment to a robust, production-ready tool!

Let’s start building something great together!

Contact us today to discuss your project and see how we can help bring your vision to life. To learn about our team and expertise, visit our ‘About Us’ webpage.



