Manasvini Krishna, Founder, Boss as a Service

Authored By

ITAdvice.io

This interview is with Manasvini Krishna, Founder at Boss as a Service.


Can you introduce yourself and share your background in AI accountability? What sparked your interest in this field?

I am essentially a lawyer by education and a coder and entrepreneur by passion. I've built a lot of tech tools for work optimization and productivity, with my primary platform being Boss as a Service. Just to be clear, this is a service that provides human accountability through real bosses. But with AI taking over all fields, including productivity, I've found myself thinking more and more about AI-powered accountability systems.

In your experience, what are the most pressing challenges in ensuring AI accountability, particularly in business applications? Can you provide a specific example you've encountered?

AI is still quite nascent, as is evident from the many gaffes and errors that have come up in its use. It is a good fit for business accountability because it is scalable: you could potentially sign up large teams to one AI accountability system. But a couple of problems may creep up: people finding loopholes and using them to get away with things, and the AI system itself failing to account for pivots and shifts in circumstances.

You've mentioned that AI is starting to take over the accountability field. How do you balance leveraging AI tools while maintaining the crucial human element in accountability processes?

AI tools should be used as a supplement to the accountability system, overseen by humans. AI can be used to break down goals, set reminders, and track and review progress. But whatever insights it provides must be discussed by humans to ensure the full context is covered.
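To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of the kind of structure such a supplement might produce: an assistant breaks a goal into sub-tasks with deadlines, and a plain-text review summary is generated for a human check-in. The class and field names are illustrative assumptions, not part of any real Boss as a Service system.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: an AI assistant might break a goal into
# sub-tasks like these; a human still reads and discusses the summary.

@dataclass
class SubTask:
    name: str
    due: date
    done: bool = False

@dataclass
class Goal:
    title: str
    subtasks: list = field(default_factory=list)

    def progress(self) -> float:
        """Fraction of sub-tasks completed (0.0 if none are defined)."""
        if not self.subtasks:
            return 0.0
        return sum(t.done for t in self.subtasks) / len(self.subtasks)

    def review(self) -> str:
        """Plain-text summary intended for a human accountability check-in."""
        pct = round(100 * self.progress())
        overdue = [t.name for t in self.subtasks
                   if not t.done and t.due < date.today()]
        lines = [f"{self.title}: {pct}% complete"]
        if overdue:
            lines.append("Overdue: " + ", ".join(overdue))
        return "\n".join(lines)
```

The point of `review()` returning plain text rather than acting on its own is exactly the balance described above: the tool tracks and reports, while the judgment about context and next steps stays with a person.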

Given your expertise in recognizing AI-generated content, what practical advice would you give to IT professionals to enhance their ability to distinguish between human-created and AI-generated work?

Right now, AI-generated content is quite generic in quality. For instance, if you ask AI to write an article for you, it will likely use a lot of clinical language that makes it obvious it was generated by a machine. So a quality check would be the first step. For more sophisticated content that is harder to spot, putting safeguards in place and devoting time to checking the content's metadata will help.

How do you navigate the rapidly changing landscape of AI technologies and regulations in your work? Can you describe a situation where you had to adapt your approach due to new developments?

When AI first came to the fore, I was excited to see how it would help in productivity applications. But its use is still quite contentious, and not everyone is comfortable using it, especially when it comes to sharing data. I think regulations are also still working through the complex implications of AI use. So for now, I recommend using it for the basics: breaking down big goals and helping devise timelines and productivity strategies.