TL;DR:
I turned my iOS Share Sheet into a one-tap content brief generator. Anything I save from LinkedIn or Substack becomes a clean post summary, highlights, comment insights, and thought-provoking questions—auto-sent to my Buffer Ideas backlog. No new habits, just smarter capture.
Why I Built This
I wanted to get consistent on LinkedIn again without adding friction. Buffer's Send to Ideas works great, but the raw inputs felt flat: just a link and a line or two. I needed context-rich ideas—something I could react to in minutes, not hours.
So I built a micro-pipeline that:
- preserves my exact mobile workflow (Share → Shortcut)
- cleans noisy scraped content (no CTAs, auth walls, or counters)
- extracts the original post, separates comments, and enriches with:
  - a tight summary
  - highlights
  - insights from comments (themes/sentiment)
  - 5–15 questions to spark a POV post
- pushes everything into Buffer Ideas automatically
What You'll Need
- Apple Shortcuts (iOS/iPadOS) — to grab date/content/URL via Share Sheet
- Google Sheets — 2 tabs:
  - Dump: date | title | url
    (“title” currently holds the scraped content blob)
  - FFT: date | url | title | summary | clean content | content highlights | clean comments | comment insights | thought-provoking questions
- Zapier — to transform & deliver to Buffer Ideas
- An LLM provider in Zapier (e.g., ChatGPT/OpenAI with gpt-4o-mini)
Optional: Generate an RSS feed from your FFT sheet so Buffer's AI can learn your topics.
Architecture at a Glance
Capture: Share → Food for Thought Shortcut → Dump row
Enrich: Zap picks up new row, cleans text with Regex, makes one LLM call to produce strict JSON
Write: FFT row is created with enriched fields
Distribute: Zap #2 pushes new FFT rows to Buffer Ideas; FFT sheet also powers RSS for smarter suggestions
Option A (3 Steps): The Fastest Setup
If your Zapier account includes the ChatGPT “Extract Structured Data” action, you can keep it minimal:
Trigger: Google Sheets → New Spreadsheet Row (Dump)
ChatGPT → Extract Structured Data
- Inputs: date, url, raw_content
- Prompt: (see the Unified Prompt below)
- Output fields: date, url, title, summary, clean_content, content_highlights, clean_comments, comment_insights, thought_provoking_questions
Google Sheets: Create Spreadsheet Row (FFT)
This skips JSON parsing entirely because the action gives you mappable fields.
Option B (Pro): Bulletproof & Cheap at Scale
If your account doesn't have “Extract Structured Data,” or you want tougher pre-cleaning, use this. It's still mostly no-code.
Zap Steps
Trigger: Google Sheets → New Spreadsheet Row (Dump)
Formatter → Text → Replace (Normalize newlines)
Find: \r\n? → Replace: \n
Formatter → Text → Replace (Collapse 3+ blanks)
Find: \n{3,} → Replace: \n\n
Formatter → Text → Replace (Strip UI furniture)
Use RegEx: ON
Find:
(?im)^(?:skip to main content|linkedin|join now|sign in|sign up|log in|open app|read in the substack app|read in app|get the app|view profile|more from this author|more from|explore topics|welcome back|email or phone|password|forgot password\?|user agreement|privacy policy|cookie policy|guest controls|community guidelines|your california privacy choices|brand policy|copyright policy|language|start writing|previous|next|discussion about this post|comments?(?: restacks?)?|restacks?|likes?|reply|share|see all|see more comments|see more|show more|read more|to view or add a comment, sign in|create your free account|new to linkedin\?|continue to join|continue to sign in|subscribe(?: now)?|ready for more\?) *$
Formatter → Text → Replace (Remove counters/badges)
Use RegEx: ON
Find:
(?im)^(?: *\d[\d,. ]*(?:likes?|restacks?|comments?|reactions?)\.? *| *like *\d+ *| *\d{1,3}(?:,\d{3})* *)$
Formatter → Text → Truncate (token control)
Max length: 7000
OpenAI (ChatGPT) → Send Prompt
Model: gpt-4o-mini
Prompt: (use the Unified Prompt below; map {{date}}, {{url}}, and this step's input as {{cleaned_text}})
Code by Zapier (JavaScript) — Parse JSON & fix \n
// Coerce any value to a string
function toStr(v){return typeof v==='string'?v:(v==null?'':String(v));}
// Convert escaped newlines to real ones (handle \r\n before bare \n)
function unescapeNewlines(s){return toStr(s).replace(/\\r\\n?/g,'\n').replace(/\\n/g,'\n');}
// Stay under Google Sheets' 50,000-character cell limit
function clip(s,m=48000){s=toStr(s);return s.length>m?s.slice(0,m):s;}
let obj;
try{obj=JSON.parse(inputData.json||'');}catch(e){throw new Error('Invalid JSON: '+e.message);}
return {
  date: toStr(obj.date||inputData.fallback_date||''),
  url: toStr(obj.url||inputData.fallback_url||''),
  title: clip(obj.title||''),
  summary: clip(unescapeNewlines(obj.summary||'')),
  clean_content: clip(unescapeNewlines(obj.clean_content||'')),
  content_highlights: clip(unescapeNewlines(obj.content_highlights||'')),
  clean_comments: clip(unescapeNewlines(obj.clean_comments||'')),
  comment_insights: clip(unescapeNewlines(obj.comment_insights||'')),
  thought_provoking_questions: clip(unescapeNewlines(obj.thought_provoking_questions||''))
};
Input Data: json ← ChatGPT output; fallback_date ← Dump.date; fallback_url ← Dump.url
Google Sheets → Create Row (FFT) — map each field 1:1
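If you'd rather consolidate the Formatter steps into a single Code by Zapier step, here's a sketch. The regexes mirror the Formatter steps above, but the term lists are abbreviated for readability; extend them to match the full patterns.

```javascript
// All pre-cleaning in one pass: normalize newlines, strip platform UI
// lines, drop counters, collapse blank runs, then truncate.
function preClean(text, maxLen = 7000) {
  let t = String(text || '')
    // normalize CRLF/CR to LF
    .replace(/\r\n?/g, '\n')
    // strip common platform UI lines (abbreviated list; extend as needed)
    .replace(/^ *(?:sign in|sign up|log in|join now|see more|read more|show more|subscribe(?: now)?|likes?|comments?|reply|share) *$/gim, '')
    // drop reaction/comment counters such as "12 likes" or "1,234"
    .replace(/^ *\d[\d,. ]*(?:likes?|comments?|reactions?|restacks?)? *$/gim, '')
    // collapse runs of 3+ newlines left behind by the stripping
    .replace(/\n{3,}/g, '\n\n');
  // truncate for token control
  return t.length > maxLen ? t.slice(0, maxLen) : t;
}
```

Map the Dump row's content in via Input Data, the same way as the parse step. Note the counter regex also removes lines that are purely numbers, which is usually what you want for scraped feeds.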
Unified Prompt
Works whether you use Option A or Option B.
You are an expert content extraction and summarization assistant.
RETURN RULES
- Return ONLY valid JSON. No markdown, no code fences, no commentary.
- All fields must be strings. If information is missing, return "".
- Preserve meaningful emojis and original language.
SCHEMA (strings only)
{
"date": "",
"url": "",
"title": "",
"summary": "",
"clean_content": "",
"content_highlights": "• bullet 1\n• bullet 2\n• ...",
"clean_comments": "[author]\n[headline]\n[comment]\n\n[author]\n[headline]\n[comment]",
"comment_insights": "• insight 1\n• insight 2\n• ...",
"thought_provoking_questions": "Question 1\nQuestion 2\n..."
}
TASK
You will receive pre-cleaned scraped text from a LinkedIn/Substack post that may still mix the original post with comments. Produce the JSON above with:
- "date": echo the provided date.
- "url": echo the provided URL.
- "title": extract if present; otherwise generate a concise factual title.
- "summary": 120–180 words on the ORIGINAL POST ONLY (no comments).
- "clean_content": ONLY the original post text, with remaining platform furniture removed (auth walls, CTAs, footers, reaction widgets, counters, "see more", etc.). No comments here.
- "content_highlights": 5–10 bullets, each starting with "• ".
- "clean_comments": ONLY the comments, formatted EXACTLY as blocks of three lines per comment:
[author]\n[headline]\n[comment]
Separate comments with a single blank line (\n\n). If author or headline is unknown, keep the line but leave it empty.
- "comment_insights": 5–8 bullets (each starting with "• ") summarizing themes, sentiment, notable disagreements from comments.
- "thought_provoking_questions": 5–15 questions I (a senior growth marketer in AI/automation for consumer tech & SaaS) could answer as a POV reaction; one question per line; no numbering.
INPUTS
- date: {{date}}
- url: {{url}}
- text: {{cleaned_text}}
Output ONLY the JSON object.
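If you want to guard the Zap against malformed model output before writing the FFT row, a small validation sketch (the field names match the schema above; `validateBrief` and `SCHEMA_KEYS` are names I'm introducing here, not part of any Zapier API):

```javascript
const SCHEMA_KEYS = [
  'date', 'url', 'title', 'summary', 'clean_content',
  'content_highlights', 'clean_comments', 'comment_insights',
  'thought_provoking_questions'
];

// Returns the parsed object, or throws with a specific reason,
// so a failed Zap run tells you exactly what the model got wrong.
function validateBrief(jsonText) {
  const obj = JSON.parse(jsonText); // throws on invalid JSON
  for (const key of SCHEMA_KEYS) {
    if (!(key in obj)) throw new Error('Missing field: ' + key);
    if (typeof obj[key] !== 'string') throw new Error('Field not a string: ' + key);
  }
  return obj;
}
```

Dropping this ahead of the Sheets write turns a silently wrong row into a visible Zap error you can retry.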
Turning FFT into an RSS Feed (for smarter Buffer suggestions)
In Google Sheets, create a public publish-to-web CSV for the FFT tab or use a simple Apps Script to render RSS XML.
Add columns if you want a neat <title> and <description> (e.g., map title and summary).
Plug the feed into Buffer so its AI nudges reflect what you've been saving.
Simple Apps Script RSS (minimal example):
function doGet(){
const ss = SpreadsheetApp.openById('YOUR_SHEET_ID');
const sh = ss.getSheetByName('FFT');
const values = sh.getDataRange().getValues();
const [headers, ...rows] = values;
const idx = Object.fromEntries(headers.map((h,i)=>[h,i]));
const items = rows.map(r=>({
title: r[idx['title']]||'',
link: r[idx['url']]||'',
description: r[idx['summary']]||'',
pubDate: new Date(r[idx['date']]||new Date()).toUTCString()
}));
const xml = '<?xml version="1.0" encoding="UTF-8"?>\n<rss version="2.0"><channel>\n<title>FFT Ideas</title>\n<link>https://yourdomain.com</link>\n<description>Food for Thought feed</description>\n' + items.map(i=>'<item><title>'+escapeXml(i.title)+'</title><link>'+escapeXml(i.link)+'</link><description>'+escapeXml(i.description)+'</description><pubDate>'+i.pubDate+'</pubDate></item>').join('') + '\n</channel></rss>';
return ContentService.createTextOutput(xml).setMimeType(ContentService.MimeType.XML);
}
function escapeXml(s){return String(s).replace(/[<>&]/g,c=>({'<':'&lt;','>':'&gt;','&':'&amp;'}[c]));}
Deploy as a web app (Anyone with the link). Use the URL as your RSS endpoint.
Costs, Quality, and Reliability
One LLM call per idea keeps costs low; gpt-4o-mini is inexpensive and solid.
Pre-cleaning reduces tokens and hallucinations.
Add a de-dup step (lookup by url) if you clip the same post twice.
Fail-safe: if JSON parsing fails, retry once with the same prompt plus a final line: “If you output anything except valid JSON, fix it and return only valid JSON.”
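For the de-dup lookup, it helps to canonicalize URLs first, since LinkedIn and Substack share links often carry tracking parameters. A sketch using Node's built-in URL API (the parameter list is a best guess; adjust it to what you actually see in your Dump rows):

```javascript
// Canonicalize a URL for de-duplication: drop common tracking query
// params, clear the fragment, and strip trailing slashes from the path.
function canonicalUrl(raw) {
  const u = new URL(raw);
  const tracking = ['utm_source', 'utm_medium', 'utm_campaign',
                    'utm_content', 'utm_term', 'rcm', 'trk'];
  tracking.forEach(p => u.searchParams.delete(p));
  u.hash = '';
  u.pathname = u.pathname.replace(/\/+$/, '') || '/';
  return u.toString();
}
```

Store the canonical form in its own column and point the lookup step at that, so the same post clipped twice resolves to one row.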
Troubleshooting
- Model invents names/headlines → Make sure you keep: “If information is missing, return "".”
- Newline rendering in Sheets → Wrap text (Format → Text wrapping → Wrap). If needed, convert literal \n to real newlines in the Code step.
- Weird platform text sneaks in → Drop more terms into the UI regex.
- Zapier “Parse JSON” missing → Use the Code-by-Zapier snippet above (recommended anyway).
What's Next
- Add a secondary Zap that detects certain keywords in content_highlights and auto-creates a Notion/Linear task.
- Build campaign tags (columns) to group ideas by topic.
- Gate higher-cost models (e.g., “Deep Dive” button) only when you choose to expand an idea.
Final Thought
I didn't overhaul my process. I kept my muscle memory intact—Share → Shortcut—and made what lands in Buffer Ideas richer. That tiny upgrade turned random links into launch-ready prompts.
If you want my exact Zap blueprint and prompt, copy from this post and adapt—or ping me and I'll publish a starter kit.
Buffer Food for Thought Automation in Action
Want to build your own automation stack?
I'm always happy to share more details about my automation workflows and help others build their own content creation systems.
Get in Touch