
Learnings from Building Agentic Workflows: Are “best practices” even possible?

  • Writer: Mike Neuenschwander
  • Mar 14
  • 5 min read

Updated: Mar 15

I spent most of last week building several agentic workflows, because, why not? And even though—in the thick of it all—creating these workflows seemed painstaking and time-consuming, by the time I was done, I realized it took me only a few days to get quite a lot done. So, the first learning is simply this:


Working through the minutiae of agentic workflows is a small upfront price to pay for the long-term benefits

Given the unpredictable nature of LLMs, however, it’s not really proper to call anything about implementing agentic workflows a “best practice”—even though it’s easy to find lots of advice out there. Still, most of this free advice is high level and exhibits what we call “a firm grasp of the obvious.” Here, I hope to provide some ideas on how to get started with your own workflows and provide you with enough confidence to plow through some of the tedious moments that are certain to come.


Work Patterns for Creating Workflows Part I: Getting Started


Building agentic workflows requires a much different mindset from building traditional workflows. The unstructured and unpredictable nature of LLM responses means that you’ll spend a lot of time experimenting, formatting, and testing at every step of the way. The important thing is to accept this new normal: learn to enjoy discovering the nuances of various models and learn to treat these agents as helpful, yet eccentric friends. Plan to spend a lot of time getting to know them.


Friendly, yet quirky assistants helping to build workflows

Note that, since this is my first post on the subject of learning from building, I’ll provide just the basics of getting started with building agentic workflows. I plan to make this a regular part of our coverage, so be sure to subscribe to our Substack to stay informed on the full range of practices we’ll cover.


For this post, I’ll discuss the tools to get started and how workflow prompting differs from web-based prompts using frontier models such as ChatGPT, Gemini, Grok, and (a personal favorite) Venice.ai.


1. Start by going through the entire workflow manually


Although it’s already aphoristic that “you can’t automate what you don’t understand,” it still can’t be overstated. Begin by defining what you hope to accomplish, and then work through every single step personally. For example, if you need to prompt ChatGPT for something, then do so. If you need to put the response in a Google Sheet, then do that. If you need to generate and run some Python code, then do that.


And here’s the important part: document every step you take. Your documentation becomes the architecture of your workflow.
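To make that concrete, here’s a minimal sketch of what those notes can become: each documented step, captured as data, maps to one future module in the workflow. All the step names and fields here are hypothetical, stand-ins for whatever you actually did.

```python
# A sketch of a manual walkthrough captured as structured notes.
# Every step, input, and output below is a hypothetical placeholder;
# the point is that each documented step becomes one workflow module.
manual_walkthrough = [
    {"step": 1, "action": "Prompt ChatGPT for a summary of the source doc",
     "input": "raw article text", "output": "3-paragraph summary"},
    {"step": 2, "action": "Paste the summary into a Google Sheet",
     "input": "summary text", "output": "row in 'Summaries' sheet"},
    {"step": 3, "action": "Generate and run Python to chart the results",
     "input": "sheet rows", "output": "chart.png"},
]

# Print the outline that will become the workflow's architecture.
for s in manual_walkthrough:
    print(f"{s['step']}. {s['action']}  ({s['input']} -> {s['output']})")
```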


2. Prototype in a no-code or low-code environment


Because my objective was to validate some ideas with prototypes, I used make.com, a no-code platform for figuring out workflows quickly and iteratively. Another useful tool is n8n, which provides an even broader range of tools for enterprise use, including the ability to include code in your workflows. Even when working independently, such tools let you build out a concept rapidly, because they offer a one-stop shop for building, hosting, and operating with minimal setup.


3. Try, try, try …


The only way to know how a model will behave is to try it … A LOT. This means you’ll burn through a lot of data, tokens, and output files, but it’s a necessary cost of learning how the model reacts to your prompts and inputs.
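As a rough illustration, here’s what a repeat-testing harness might look like in Python, assuming the OpenAI SDK and an API key in the environment; the model name, prompt, and test inputs are placeholders for your own.

```python
# A sketch of a repeat-testing harness, assuming the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
test_inputs = ["input A", "input B", "input C"]  # substitute real samples

for i, text in enumerate(test_inputs):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # a cheap model for iteration (see item 4)
        messages=[{"role": "user", "content": f"Summarize as CSV: {text}"}],
    )
    # Save every run to disk so behavior can be compared across trials.
    with open(f"run_{i}.txt", "w") as f:
        f.write(resp.choices[0].message.content)
```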


4. Start with the cheap models


The latest models DO produce better responses, but there are so many integration details to work out that response quality doesn’t matter until you’ve passed all the integration tests.
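One way to make the cheap-to-expensive switch painless is to keep the model name in a single place. A minimal sketch, again assuming an OpenAI-style client; the model names are illustrative:

```python
# A sketch: parameterize the model so integration testing runs on a
# cheap model and only the final pass uses a stronger one.
import os

# Flip via an environment variable instead of editing every module.
MODEL = os.environ.get("WORKFLOW_MODEL", "gpt-4o-mini")

def call_llm(client, prompt: str) -> str:
    # client is an OpenAI-style client, as in the previous sketch.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```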


5. System prompts help


The system prompt lets you set the role the LLM will play and provide enough context to simplify the main prompt. It also initializes the parameters needed for complex tasks. Develop solid “preamble” system prompts that are generic enough to be reused throughout a workflow and across workflows; doing so saves time and increases consistency.


In make.com, note that “Text Content” fields support only a limited number of characters (you won’t find this limitation in the documentation, but Make actually truncated my prompts on several occasions without informing me). Using multiple Messages and Text Content entries—including the System Prompt—is therefore a great way to break lengthy prompts into discrete functions while increasing the accuracy of the output.
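Expressed in API terms rather than Make’s UI (in Make you’d put each chunk in its own Message or Text Content field), the pattern might look like this sketch; all the prompt text is illustrative:

```python
# A sketch of a reusable system-prompt preamble plus a long prompt
# split into several shorter messages. All prompt text is illustrative.
SYSTEM_PREAMBLE = (
    "You are a data-formatting assistant. "
    "Always respond with raw CSV only: no prose, no code fences."
)

messages = [
    {"role": "system", "content": SYSTEM_PREAMBLE},
    # Splitting instructions across messages keeps each field under the
    # character limit and makes the pieces reusable across workflows.
    {"role": "user", "content": "Task: extract company names and URLs."},
    {"role": "user", "content": "Source text: ..."},
]
```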


6. Workflow prompts must be much more specific than chat prompts


When you run through your workflow manually (per item #1 on this list), take note of the steps you take to turn one format into another. These conversions must be replicated deterministically in an agentic workflow, and that’s not at all simple.

Particularly when it comes to formatting, the prompt needs to instruct the model which format is desired (csv, json, markdown, etc.) and what NOT to do. These instructions may need to be repeated two or three times and even highlighted, for example:

**DO NOT include leading or trailing apostrophes or quotation marks or ' in your response**
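Because even emphatic instructions get ignored occasionally, it’s worth sanitizing the output defensively as well. A minimal Python sketch of stripping stray quotes and code fences before the text moves downstream:

```python
# A defensive post-processing sketch: even with repeated "DO NOT"
# instructions, models sometimes wrap output in quotes or code fences,
# so strip them before passing the text to the next module.
def clean_response(text: str) -> str:
    text = text.strip()
    # Remove a wrapping markdown code fence if present.
    if text.startswith("```"):
        lines = text.splitlines()
        if lines and lines[0].startswith("```"):
            lines = lines[1:]
        if lines and lines[-1].strip() == "```":
            lines = lines[:-1]
        text = "\n".join(lines)
    # Remove leading/trailing apostrophes, quotation marks, or backticks.
    return text.strip().strip("'\"`")

print(clean_response("```\n'name,url'\n```"))  # -> name,url
```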

7. Examples are a must


Providing detailed examples vastly improves the consistency of the output. For example, if you want a csv-formatted output, include an example with the same heading row you’re looking for and sample data in at least one row.
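Here’s a sketch of such a prompt, embedding the exact header row plus one row of sample data; the column names and task are illustrative:

```python
# A sketch of a prompt with an embedded example: the heading row you
# want plus one sample data row. Columns and task are illustrative.
prompt = """Extract the companies mentioned in the text below as CSV.

Use exactly this format (header plus one data row shown as an example):

company,website,employee_count
Acme Corp,https://acme.example,250

Text:
{source_text}
"""
print(prompt.format(source_text="..."))
```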


8. Read your prompts out loud


First save the prompt, so you’re certain you’re looking at the active version. Then read it out loud. There are precious few IDEs or editors for prompts at the moment, so it’s easy to make typos and stupid mistakes.


9. Use a GPT for help


In-line help is useless, and API references are like looking up single words in a dictionary. Chatbots from frontier models are great at providing help with the full context of your workflows.


10. Use filters, error handling, and reviews


Use the error-handling features your tools provide. Some errors can be ignored so that the rest of the workflow can continue. Also, apply filters to any content generated by an LLM. You can either include a review in the original prompt (by prompting the model to check its work before responding) or add an extra LLM module to review the output and send the workflow back if the response isn’t well-formed.
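Here’s a rough sketch of the second approach, a review-and-retry loop that validates the response as JSON and sends it back with an error hint; `call_model` is a placeholder for however your workflow invokes the LLM:

```python
# A sketch of an LLM-output review loop: validate the response as JSON
# and retry with an error hint if it doesn't parse.
import json

def call_model(prompt: str) -> str:
    raise NotImplementedError  # placeholder: your LLM call goes here

def get_valid_json(prompt: str, max_retries: int = 3) -> dict:
    for attempt in range(max_retries):
        raw = call_model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as e:
            # Send the workflow "back" with context about what failed.
            prompt = (f"{prompt}\n\nYour last reply was invalid JSON "
                      f"({e}). Respond with valid JSON only.")
    raise ValueError("No valid JSON after retries")
```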


In my case, I found it easier to copy the response to a Google Doc or a Google Sheet during the testing phase, because I could track outputs from dozens of runs, test the formatting with various checkers (such as csv, json, and markdown validators), and use proper outputs as examples in my prompts. Once I reached a good state, I simply stopped writing output to these files.
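A minimal sketch of that testing-phase bookkeeping in Python: write every run to a timestamped file and validate its format, so good outputs can be promoted into prompt examples. The paths and format names are illustrative.

```python
# A sketch: save each run to a timestamped file, then validate it.
import csv
import io
import json
import os
from datetime import datetime

def log_and_check(output: str, fmt: str = "json") -> bool:
    """Record one run for later comparison and report whether it parses."""
    os.makedirs("runs", exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    with open(f"runs/{stamp}.{fmt}", "w") as f:
        f.write(output)
    try:
        if fmt == "json":
            json.loads(output)
        elif fmt == "csv":
            list(csv.reader(io.StringIO(output)))
        return True
    except (json.JSONDecodeError, csv.Error):
        return False
```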


11. Prompt management


Tools like Make require a premium subscription for Custom Variables. At scale, this would be worth it, if only to establish a reusable prompt library. Visibility into prompts isn’t easy, and a single word can make a ton of difference. For example, if you need to change the output from json to csv in a dozen prompts, and each prompt uses the term “json” several times, it’s very tricky to be sure you’ve caught every instance.
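In code terms, the equivalent of a reusable prompt library backed by a shared variable might look like this sketch, where the output format lives in one place so a json-to-csv switch is a single edit; the names and prompt text are illustrative:

```python
# A sketch of a lightweight prompt library: shared values like the
# output format live in one variable, so switching json -> csv is a
# single edit instead of a hunt through a dozen prompts.
OUTPUT_FORMAT = "json"  # change once; every prompt below picks it up

PROMPTS = {
    "extract": (f"Extract the entities as {OUTPUT_FORMAT}. "
                f"Respond with {OUTPUT_FORMAT} only."),
    "summarize": (f"Summarize the text, then output the key points "
                  f"as {OUTPUT_FORMAT}."),
}
```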


To be Continued …


My main goal is to motivate you to get started with building workflows. There’s no real “how-to” guide of any use; it’s mostly an undiscovered country that will test your curiosity and inquisitiveness. But your experiences will make you invaluable to your organization and will likely win you lots of friends as well.


As mentioned earlier, there’s a lot more to know about agentic workflows, particularly in handling output formats and switching between structured and unstructured data. So, to stay informed, be sure to subscribe to our Substack!
