My news in my terminal: Building a Feedly MCP tool for Gemini CLI
Over time, I’ve been using Feedly more and more. Over the weekend I went through my email inbox and forwarded about a dozen newsletters into Feedly, cleaning up my inbox while still having access to the content. In the past, I have used their API to generate an audiobook so that I can stay on top of what’s happening. Ultimately, I like how Feedly gives me control over the news.
More recently, I’ve been working on building MCP tools. Building custom tools for LLMs, along with new command-line tools that can query local servers, is what starts to make the agentic future feel real.
Naive API calls
My first approach to building this was to hardcode my access token in the code and build a straightforward API call to fetch all my unread articles. Feedly’s API gives back a JSON response. I wondered if I could just dump that result into Gemini and have it sort it all out.
import requests


def feedly_api_call(access_token: str, path: str, method: str = 'GET', data: dict = None):
"""
Makes an API call to the Feedly cloud API.
Args:
access_token (str): The user's Feedly access token.
path (str): The API endpoint path.
method (str): The HTTP method (e.g., 'GET', 'POST').
data (dict): The data to send in the request body.
Returns:
dict: The JSON response from the API.
Raises:
Exception: If the request fails due to network issues or API errors.
"""
url = f'https://cloud.feedly.com/v3/{path}'
headers = {
'Authorization': f'Bearer {access_token}',
'Accept': 'application/json'
}
try:
response = requests.request(method, url, headers=headers, json=data)
response.raise_for_status() # This will raise an HTTPError for bad responses (4xx or 5xx)
print(response.json())
return response.json()
except requests.exceptions.RequestException as e:
# Re-raise the exception with a more descriptive message
raise Exception(f'Request to {url} failed: {e}') from e
My MCP tool itself is fairly simple: it just calls this API with a continuation token until I have everything. By default, the API returns 250 items, and it’s possible I could have many more than that. While a few hundred articles might end up being a few megabytes of data, I also need to keep in mind the token limit that LLMs have. A limit of one million tokens, assuming one token is about four characters, means I can’t actually return more than 4 megabytes, and probably less.
all_articles = []
continuation = None
print('Beginning to download articles...')
while True:
query = f'&continuation={continuation}' if continuation else ''
try:
res = feedly_api_call(
access_token,
f'streams/contents?streamId=user/{user_id}/category/global.all&unreadOnly=true&count=250{query}'
)
items = res.get('items', [])
if not items:
print('No items found or end of stream.')
break
all_articles.extend(items)
continuation = res.get('continuation')
if continuation is None:
break # Exit loop when there's no more continuation token
except Exception as e:
error_message = str(e)
if '401' in error_message:
print('Error: Access token expired. Please request a new one.')
elif '429' in error_message:
print('Error: API rate limit reached.')
else:
print(f'An unexpected error occurred: {error_message}')
break
So my first approach was to parse out just the article titles and return a small array.
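That early version looked something like this (a minimal sketch of the idea rather than the exact code):
# Early version: return only the titles to keep the payload small.
headlines = [article.get('title', '') for article in all_articles]
return {"headlines": headlines}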
This actually works fairly well. Just asking for a summary, Gemini was able to sort through all the categories without being explicitly told what each one was.
Based on this success, I decided to just return the entire JSON object from Feedly with all the extra fields and the full article content and see how well it could handle it.
To my surprise, it did a good job of making sense of all the data. Queries about a slice of this data allowed it to understand and summarize article content like that upcoming developer event.
Take note, though, of the context indicator in the bottom-right. This query alone took up the majority of my Gemini session. Even with only 80 unread articles at this moment, the sheer volume of data could easily be overwhelming.
I could also ask very specific questions that let it parse specific article text in order to provide an answer. That’s very cool.
It can also answer questions that pull in results from several news articles and blend them together.
Saving Context
Running a few more experiments a few days later, when I had closer to 150 unread articles, I ran into the token limit. You can see the JSON response slightly above, with fields like mentions and salienceLevel. These are part of Feedly’s large JSON response, which includes a lot of fields that are not necessary for my project.
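For a sense of what gets kept versus dropped, a single entry looks roughly like this once trimmed down (illustrative values only; the real response carries many more fields):
{
  "title": "Example article title",
  "author": "Jane Doe",
  "origin": {
    "title": "Example Publisher",
    "htmlUrl": "https://example.com"
  },
  "summary": {
    "content": "<p>Article body as HTML…</p>"
  }
}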
So I went back to my code and pruned the API response down to just a handful of key fields.
@mcp.tool()
def fetch_feedly_articles():
"""
Fetches news articles from Feedly, stores them, and returns an array of
their titles, content, author, and publisher.
If you want to fetch news articles from a particular publisher, like
El Economista, you can fetch all of the news articles and then pick out
specific articles from that.
Args: None, they are hard-coded for this demo.
Returns:
A JSON object containing an array of headlines.
"""
all_articles = []
continuation = None
print('Beginning to download articles...')
while True:
query = f'&continuation={continuation}' if continuation else ''
try:
res = feedly_api_call(
access_token,
f'streams/contents?streamId=user/{user_id}/category/global.all&unreadOnly=true&count=250{query}'
)
items = res.get('items', [])
if not items:
print('No items found or end of stream.')
break
all_articles.extend(items)
continuation = res.get('continuation')
if continuation is None:
break # Exit loop when there's no more continuation token
except Exception as e:
error_message = str(e)
if '401' in error_message:
print('Error: Access token expired. Please request a new one.')
elif '429' in error_message:
print('Error: API rate limit reached.')
else:
print(f'An unexpected error occurred: {error_message}')
break
print(f'Loaded {len(all_articles)} items.')
context_articles = []
for article in all_articles:
context_articles.append({
"content": get_content(article),
"origin": article.get('origin'),
"title": article.get('title'),
"author": article.get('author')
})
print(context_articles)
return {
"articles": context_articles,
}
Now, I use less than half the total context without materially impacting the quality of the model responses.
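If I wanted to be more defensive about context, I could also check the payload size before returning it. Here’s a rough sketch, assuming about four characters per token and leaving plenty of headroom under the one-million-token window (the cap is an arbitrary number I picked):
import json

MAX_PAYLOAD_CHARS = 2_000_000  # ~500k tokens at ~4 characters per token
payload = {"articles": context_articles}
# Drop articles from the end of the list until the serialized payload fits.
while len(json.dumps(payload)) > MAX_PAYLOAD_CHARS and context_articles:
    context_articles.pop()
    payload = {"articles": context_articles}
return payload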
Since the Gemini CLI is able to link multiple functions or MCP tools in a single prompt, I can do neat secondary actions like translate articles in the terminal itself.
Authorization
Up to this point, my user_id and access_token were just constants being stored in my Python file. That’s not good. I’d like to have a way to store these fields in some sort of environment that gets loaded when Gemini calls my tool.
I actually can add HTTP headers in my settings.json file as part of my MCP configuration:
"headers": {
"foo": "bar"
}
The FastMCP library has additional tools to fetch HTTP headers inline:
from fastmcp.server.dependencies import get_http_headers
@mcp.tool()
def fetch_feedly_articles():
"""
...
"""
# https://gofastmcp.com/servers/context#http-headers
headers = get_http_headers()
print(headers)
Then, when I call my tool:
{'foo': 'bar', 'content-type': 'application/json', 'accept-language': '*', 'sec-fetch-mode': 'cors', 'user-agent': 'node', 'accept-encoding': 'gzip, deflate'}
So I added my user ID and access token into my settings and am able to obtain these values when my tool is called:
headers = get_http_headers()
print(headers)
# Use .get() so a missing header doesn't raise a KeyError
access_token = headers.get('access_token')
user_id = headers.get('user_id')
if not access_token or not user_id:
    return {"error": "Feedly access token or user ID not found in request headers."}
Taking Actions
Right now my tool just fetches all the news and that’s it. In the future, I could explore using more of Feedly’s API to perform actions. I could ask for a summary of an article and have my tool then mark it as read. I could mark articles to read later. I could have it look at every article about space and add it to my space news board.
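Marking an article as read, for example, would be a small extension of the existing helper. Here’s a sketch using Feedly’s markers endpoint, which I haven’t actually wired up yet:
def mark_as_read(access_token: str, entry_id: str):
    # Sketch only: send a batch of entry IDs to Feedly's markers endpoint.
    return feedly_api_call(
        access_token,
        'markers',
        method='POST',
        data={'action': 'markAsRead', 'type': 'entries', 'entryIds': [entry_id]}
    )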
Those are things I can explore in the future, when I have a little more trust in using agents to actually take actions. Until then, I am glad to have yet another location to catch up on news.
PS, here’s the get_content function:
import re


def get_content(article: dict):
"""
Extracts and cleans content from a Feedly article dictionary.
Args:
article (dict): A dictionary representing a Feedly article.
Returns:
str: The cleaned article content.
"""
article_content = (
article.get('content', {}).get('content')
or article.get('summary', {}).get('content')
or article.get('fullContent')
or ''
)
# Remove <img> tags from the content (str.replace does not support patterns, so use a regex)
return re.sub(r'<img[^>]*>', '', article_content)