How to use AI and machine learning to accelerate tasks and safely get work done

You’ve likely seen the large influx of new AI/ML tools, but how do you use them effectively and safely? This post aims to help you understand the benefits and risks of these tools, so you can navigate the landscape with confidence.

Possibilities

Probably the most logical question to ask first is: “why even bother?”

And while I agree that a magical one-size-fits-all, fixes-all solution does not exist, there are definite benefits to using these tools in your daily work and even in spare-time projects.

Do keep in mind that I’m listing the examples below without their caveats; those are covered in the next section.

Here are a few examples:

  • AI-powered grammar and spellcheckers go far beyond just those two things. Do you fear your message may be too on the nose? Offensive to certain audiences? Too long-winded and hard to follow? Do you want to reuse an older text in a more professional setting? AI/ML tools can help you tackle all of these issues and more, simply by offering examples, alternatives, and even additions; the final choice remains yours, so you never lose control over your message.
  • AI/ML-powered image generation tools can help you visualize things and work out details before going to an artist. They don’t just help you give shape to what you’re trying to make; they also help the artist find what you’re actually looking for, which is often a very laborious step in commissioning pictures.
  • AI/ML-powered question answering tools can help you formulate your question, fill gaps in your knowledge, and find creative answers to problems that you might not have considered yet.
  • AI/ML-powered (code/text) summarization tools can help you quickly grasp the gist of a large text, or the function of a piece of code, without having to read through all of it yourself (a short sketch follows this list).
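
For instance, here is a minimal summarization sketch in Python using the official openai package. The model name and prompt wording are assumptions for illustration, not recommendations; substitute whichever provider and model your organization has approved.

```python
# Minimal summarization sketch using the official "openai" Python package.
# Assumes OPENAI_API_KEY is set in the environment; the model name below
# is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """Ask the model for a short, plain-language summary of `text`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use one you have access to
        messages=[
            {"role": "system", "content": "Summarize the user's text in three sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

with open("long_report.txt") as f:  # hypothetical input file
    print(summarize(f.read()))
```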

Obvious problems

There are, of course, some obvious problems with these tools, especially in these early days of a technology that is arguably still in its infancy.

Here are some of the obvious problems:

  • They may not catch all problems (grammar/spellcheckers, but other tools too). The tools are slowly getting better, but it’s important to remember that AI/ML-based programs do not follow fixed, verifiable rules. They are neural networks, loosely modeled on those in our brains, and they can only correlate information, sometimes with very high accuracy, but never with a guarantee of being correct 100% of the time; there is no exact algorithm behind them from which we can empirically deduce cause and effect.
  • AI/ML-powered generative tools base their creations on datasets. There is a lot of debate about the copyrighted materials used in some of these datasets (I would personally urge you not to use those), and also about the general fairness of using data crawled from websites that allowed search-engine crawling in their robots.txt, but may object to their data being used to train generative AI/ML tools.
  • Image generation tools may not always produce the desired result, especially if their dataset simply doesn’t contain examples of the things you are looking for.
  • As mentioned before, answers provided by code/text LLMs are not guaranteed to be correct: they do not reason, they only correlate information. More research is being done on ‘grounding’ these LLMs in reality, but it’s important to remember that, in the end, these tools are just neural networks that statistically correlate information (a verification sketch follows this list).
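
A practical habit that follows from this: treat model output as a suggestion to verify, not as an answer. Below is a toy Python sketch of that habit; the model-suggested function and the tests are invented for illustration, and the point is simply that the checks come from you, not from the model.

```python
# Toy "trust but verify" sketch: only accept model-suggested code after it
# passes checks *you* wrote from the actual requirements.
# `model_suggested_source` stands in for text returned by an LLM.
model_suggested_source = """
def slugify(title):
    return title.strip().lower().replace(" ", "-")
"""

namespace = {}
exec(model_suggested_source, namespace)  # only run untrusted code in a sandbox/CI job
slugify = namespace["slugify"]

# Tests written by a human, from the spec, not generated by the model.
assert slugify("Hello World") == "hello-world"
assert slugify("  Padded  ") == "padded"
print("Suggestion passed the checks; still review it by hand before merging.")
```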

Non-obvious problems

The problems listed above are not the only ones, however, and while you may have guessed most of the above already, here are some examples of things that a lot of people accidentally get wrong, even the developers of these tools!

  • Data Leakage – One of the most-forgotten aspects of interacting with LLMs is that the input you give them may be used to train the LLM further. If you do not see any mention of logging being explicitly OFF, it’s safest to assume it is on. Any code or text with facts that you input may end up in the dataset, which can easily lead to leaking confidential information, code, or personal data (a minimal anonymization sketch follows this list).
  • False assumptions – Unverified but approximately correct information can easily turn into misinformation when presented as truth, so it’s really important to mark these kinds of answers as AI-generated, to give people the full context and the ability to understand that the information may need extra verification.
  • Not Up To Spec – Even when the answer is entirely correct, or the code you’ve been given does the exact job you need done, it may simply not meet the standards for security, safety, or other aspects that your organization has set.
  • Bias – Even when the information is factually correct and up to spec, it may still be biased towards certain things, which could cause harm in cases where non-average people or contexts interact with the AI or its creations.
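
To reduce the data-leakage risk above, scrub obvious personal data before any text leaves your machine. The Python sketch below is intentionally naive: the regexes are illustrative assumptions, and a real deployment should pair a vetted PII-detection library with human review.

```python
# Naive PII-scrubbing sketch: replace obvious patterns with placeholder
# tokens before sending text to any external tool. These patterns are
# illustrative only and will not catch names, addresses, secrets, etc.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),  # runs before PHONE, which could swallow IPs
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace each match with a placeholder token like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@example.com or +31 6 1234 5678 about host 10.0.0.12."
print(scrub(prompt))  # -> Contact [EMAIL] or [PHONE] about host [IPV4].
```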

Okay, so what CAN I do?

The list of problems shown above may look daunting and make you wonder if AI is ‘even worth it’, but the truth is that the majority of these problems do not just apply to AI/ML; they also apply to humans and your interactions with those around you.

As long as you remember to anonymize the data you input, or make sure you use tools that explicitly state they do not log or train on your input, it’s quite fine to use these tools in basically any context, even with the most confidential or personal information, just as you’d otherwise use computers and software tools to achieve your goals.

Tools I personally recommend:

  • ChatGPT, Bard, Claude, and so forth are all fine to use for non-personal and non-confidential data, or anonymized data.
  • Often, in the developer playgrounds of these tools (such as the OpenAI Developer Platform pages), you can query the model without it being logged or used in any way. This is also often the case with APIs in general, but it’s important to double-check.
  • Duet AI in Google Docs does not log anything you write or tell it.
  • Anything you host on your own servers, including open-source GPT-style models, HuggingFace’s Models and Datasets, LibreTranslate, and much more (a local inference sketch follows this list).
  • This also goes for many tools you host yourself on e.g. Azure, AWS, Google Cloud, and so forth.
  • Ask around in your organization to find out whether there are internal-only tools. I know that Google, Nvidia, and Microsoft have such LLMs, which you can query with any question, and they’ll give results drawn from the entire internal repositories of documentation, code, question/answer pages, and so forth.
  • Smaller organizations can also set this up fairly easily, as you only really need a beefy GPU/TPU-based server, one of the open-source models mentioned further up, and access to those internal datasets for training. Since these tools are already trained on confidential information, they can usually be given most if not all types of confidential information without having to worry about it ending up in the wrong dataset (that’s the intention, after all).
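
As an illustration of the self-hosted route, here is a minimal local-inference sketch using Hugging Face’s transformers library: the weights are downloaded once and your text never leaves your own machine. The specific model is just an example, and very long inputs would need chunking.

```python
# Minimal local summarization with Hugging Face "transformers".
# Nothing is sent to a third party; the model runs on your own hardware.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="sshleifer/distilbart-cnn-12-6",  # example model; pick one your hardware can handle
)

with open("internal_design_doc.txt") as f:  # hypothetical confidential input
    article = f.read()

# Long documents exceed the model's context window and would need chunking.
result = summarizer(article, max_length=120, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```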

Practical examples of things you can do right now:

In addition to the things mentioned in the possibilities section, here are a few real-world examples of what you can do:
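
One such example: if you run LibreTranslate (recommended above) on your own server, you can translate text without handing it to a third party. A minimal sketch, assuming a default instance on localhost:

```python
# Translate through a self-hosted LibreTranslate instance, so the text
# stays inside your own infrastructure. The URL is a placeholder for
# wherever you deployed it.
import requests

response = requests.post(
    "http://localhost:5000/translate",
    json={"q": "Hallo wereld", "source": "nl", "target": "en", "format": "text"},
    timeout=30,
)
response.raise_for_status()
print(response.json()["translatedText"])  # -> "Hello world"
```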

List of tools and repos worth checking out

I am of course not able to list every tool, API, software project, or method out there, but if you have suggestions, feel free to let me know at https://mastodon.derg.nz/@anthropy, and I will try to add them to this page.

Non-exhaustive list of self-hosted code completion tools:

Any suggestions are welcome!
