A CLI tool to convert Supernote `.note` files, Atelier `.spd` files, PDFs, and images to text using any LLM supported by the LLM library.
- Converts the source file to PNG images.
- Sends the images to the LLM to convert them to text (markdown is the default format, but this is customizable).
Sample output: 20240712_151149.md
The default LLM prompt (with gpt-4o-mini) is configured to convert to markdown:
- Supports markdown in `.note` files (`#tags`, `## Headers`, `[[Links]]`, etc)
- Supports basic formatting (lists, tables, etc)
- Converts images of diagrams to mermaid.
- Handles math equations using `$` and `$$` LaTeX math blocks.
```sh
pip install sn2md
```

Set up your `OPENAI_API_KEY` environment variable.
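For example, in your shell:

```sh
export OPENAI_API_KEY=yourkey
```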
To import a single Supernote `.note` file, use the `file` command:
```sh
# import one .note file (or Atelier .spd, PDF, or image):
sn2md file <path_to_file>

# import a directory of .note files (or Atelier .spd files, PDFs, or images):
sn2md directory <path_to_directory>
```
Notes:
- If the source file has not changed, repeated runs of commands will print a warning and exit. You can force a re-run with the `--force` flag.
- If the source file has not changed but the output file has (perhaps because you modified it manually by adding your own notes), repeated runs will likewise print a warning and exit. The `--force` flag overrides this as well.
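For example, to force a single file to be converted again (flag placement after the subcommand is assumed here):

```sh
sn2md file --force <path_to_file>
```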
A configuration file can be used to override the program defaults. The default location is platform specific (eg, `~/Library/Application Support/sn2md.toml` on OSX, `~/.config/sn2md.toml` on Linux, etc).
Values that you can configure:

- `template`: The output template used to generate markdown.
- `output_filename_template`: The filename that is generated. Basic template variables are available (default: `{{file_basename}}.md`).
- `output_path_template`: The directory that is created to store output. Basic template variables are available (default: `{{file_basename}}`).
- `prompt`: The prompt sent to the LLM. Requires a `{context}` placeholder to help the AI understand the context of the previous page.
- `title_prompt`: The prompt sent to the OpenAI API to decode any titles (H1-H4 supernote highlights).
- `model`: The model to use (default: `gpt-4o-mini`). Supports OpenAI out of the box, but additional providers can be configured (see below).
- `api_key`: Your service provider's API key (defaults to the environment variable required by the model you've provided; for instance, `$OPENAI_API_KEY` for OpenAI models).
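For instance, a minimal `sn2md.toml` might override only the output layout (a sketch; the nested path value is just an illustration):

```toml
model = "gpt-4o-mini"
# Place output under notes/<file_basename> instead of the default <file_basename>:
output_path_template = "notes/{{file_basename}}"
output_filename_template = "{{file_basename}}.md"
```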
Example instructing the AI to convert text to pirate speak:
model = "gemini-1.5-pro-latest"
prompt = """###
Context (what the last couple lines of the previous page were converted to markdown):
{context}
###
Convert the following image to markdown:
- Don't convert diagrams or images. Just output "<IMAGE>" on a newline.
- Paraphrase all the text in pirate speak.
"""
template = """
# Pirate Speak
{{llm_output}}
"""
The default prompt sent to the LLM is:
```
###
Context (the last few lines of markdown from the previous page):
{context}
###
Convert the image to markdown:
- If there is a simple diagram that the mermaid syntax can achieve, create a mermaid codeblock of it.
- When it is unclear what an image is, don't output anything for it.
- Use $$, $ latex math blocks for math equations.
- Support Obsidian syntaxes and dataview "field:: value" syntax.
- Do not wrap text in codeblocks.
```
This can be overridden in the configuration file. For example, to have underlined text converted to an Obsidian internal link you could append `- Convert any underlined words to internal wiki links (double brackets).`
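Concretely, the override in the configuration file could repeat the default prompt with the new rule appended (the `{context}` placeholder is still required):

```toml
prompt = """###
Context (the last few lines of markdown from the previous page):
{context}
###
Convert the image to markdown:
- If there is a simple diagram that the mermaid syntax can achieve, create a mermaid codeblock of it.
- When it is unclear what an image is, don't output anything for it.
- Use $$, $ latex math blocks for math equations.
- Support Obsidian syntaxes and dataview "field:: value" syntax.
- Do not wrap text in codeblocks.
- Convert any underlined words to internal wiki links (double brackets).
"""
```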
You can provide your own Jinja template if you prefer to customize the output. The default template is:
```
---
created: {{year_month_day}}
tags: supernote
---
{{llm_output}}

# Images
{% for image in images %}
- ![{{ image.name }}]({{ image.rel_path }})
{%- endfor %}

{% if keywords %}
# Keywords
{% for keyword in keywords %}
- Page {{ keyword.page_number }}: {{ keyword.content }}
{%- endfor %}
{%- endif %}

{% if links %}
# Links
{% for link in links %}
- Page {{ link.page_number }}: {{ link.type }} {{ link.inout }} [[{{ link.name | replace('.note', '')}}]]
{%- endfor %}
{%- endif %}

{% if titles %}
# Titles
{% for title in titles %}
- Page {{ title.page_number }}: Level {{ title.level }} "{{ title.content }}"
{%- endfor %}
{%- endif %}
```
Several variables are available to the template.
Basic data about the source file (`.note`, etc):

- `file_name`: The file name (including its extension).
- `file_basename`: The file name without its extension.
- `year_month_day`: The date the source file was created (eg, 2024-05-12).
- `ctime`: A python datetime object of the file creation time. You can use this to make your own formats (eg, `{{ ctime.strftime('%B %d') }}` for `November 15`). See strftime docs for formatting details.
- `mtime`: A python datetime object of the file's last modification time.
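These variables also work in the output templates from the configuration file. For example, a sketch that date-prefixes the generated filename using `ctime` (assuming it is among the "basic template variables" noted above):

```toml
output_filename_template = "{{ ctime.strftime('%Y-%m-%d') }}-{{ file_basename }}.md"
```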
Data extracted when converting the source file:
- `llm_output`: The content of the source file (the deprecated `markdown` field is still available as well).
- `images`: an array of image objects with the following properties:
  - `name`: The name of the image file.
  - `rel_path`: The relative path to the image file from where the command was run.
  - `abs_path`: The absolute path to the image file.
Data available in `.note` source files:

- `links`: an array of links in or out of a `.note` file with the following properties:
  - `page_number`: The page number the link is on.
  - `type`: The link type (page, file, web).
  - `name`: The basename of the link (url, page, web).
  - `device_path`: The full path of the link.
  - `inout`: The direction of the link (in, out).
- `keywords`: an array of keywords in a `.note` file with the following properties:
  - `page_number`: The page number the keyword is on.
  - `content`: The content of the keyword.
- `titles`: an array of titles in a `.note` file with the following properties:
  - `page_number`: The page number the title is on.
  - `level`: The level of the title (1-4).
  - `content`: The content of the title. If the area of the title appears to be text, the text; otherwise a description of it.
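For example, a custom `template` could promote detected titles to real markdown headings instead of listing them (a sketch, using standard Jinja string repetition):

```
{% for title in titles %}
{{ '#' * title.level }} {{ title.content }}
{%- endfor %}
```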
This tool uses llm, which supports many services. You can use any of these models by specifying the model, as long as it is a multi-modal model that supports visual inputs (such as gpt-4o-mini, llama3.2-vision, etc).
Here are a couple of examples of using this tool with other models.
To use Gemini:
- Get a Gemini API key. Set this as the `api_key` in the configuration file, or as the `LLM_GEMINI_KEY` environment variable.
- Install the `llm-gemini` plugin.
- Specify the model in the configuration file as `model`, or use the `--model` CLI flag.
```sh
export LLM_GEMINI_KEY=yourkey
llm install llm-gemini
sn2md -m gemini-1.5-pro-latest file <path_to_file>
```
Notes: The default prompt appears to work well with Gemini. Your mileage may vary!
You can run your own local LLM models using Ollama (or other supported local methods), using an LLM that supports visual inputs:
- Install Ollama, and install a model that supports visual inputs.
- Install the `llm-ollama` plugin.
- Specify the model in the configuration file as `model`, or use the `--model` CLI flag.
```sh
# Run ollama in one terminal:
ollama serve

# In another terminal, install a model, and plugin support:
ollama pull llama3.2-vision:11b
llm install llm-ollama
sn2md -m llama3.2-vision:11b file <path_to_file>
```
Notes: The default prompt does NOT work well with `llama3.2-vision:11b`. You will need to provide a custom prompt in the configuration file. Basic testing showed this configuration provided basic OCR capabilities (probably not mermaid, or other markdown features!):
model = "llama3.2-vision:11b"
prompt = """###
Context (the last few lines of markdown from the previous page):
{context}
###
You are an OCR program. Extract text from the image and format as paragraphs of plain markdown text.
"""
Please let me know if you find better prompts!
You can output other formats besides markdown. Contributed examples of configuration files are listed below.
Thanks to @redsorbet, who contributed this Org-mode configuration: org.toml.
A simple Supernote-to-HTML configuration: html.toml (using Tailwind for image styling).
Contributions are welcome. Please open an issue or submit a pull request.
```sh
git clone https://github.com/dsummersl/sn2md.git
cd sn2md
poetry install
pytest
```
This project is licensed under the AGPL License. See the LICENSE file for details.
- Supernote for their amazing note-taking devices.
- supernote-tool library for .note file parsing.
- Atelier-parser for how .spd files are generated/parsed.
- llm for LLM access.