Extract structured, domain‑specific summaries from a single sentence or title.
The package turns unstructured text (e.g., “How Scams Worked In The 1800s (2015)”) into a concise, standardized output that lists key historical scams, their mechanisms, and societal impact—all without any additional manual parsing.
- Uses `llmatch-messages` to enforce an output regex and recover only the relevant information.
- Defaults to the free tier of ChatLLM7 via the `langchain_llm7` wrapper.
- Works with any LangChain LLM instance – OpenAI, Anthropic, Google Generative AI, or custom models.
- Returns a `List[str]` – one string per extracted entity (scam, scheme, etc.).
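The regex-enforced recovery step can be illustrated with plain `re` — the reply text and pattern below are hypothetical stand-ins, not the pattern `llmatch-messages` actually ships with:

```python
import re

# Hypothetical raw LLM reply; the package's real prompt/format may differ.
raw_reply = (
    "Here are the results:\n"
    "1. Confidence trick (1800s): gaining a victim's trust before defrauding them\n"
    "2. Gold brick scam (1880s): selling gilded lead bars as solid gold\n"
)

# Accept only 'Name (era): description' lines, discarding any surrounding
# chatter the model might add around the list.
pattern = re.compile(r"^\d+\.\s*(.+?\(\d{4}s\):\s*.+)$", re.MULTILINE)
entries = pattern.findall(raw_reply)
print(entries)  # one string per extracted scam
```

Constraining the match this way is what lets the package return a clean `List[str]` without manual post-processing.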
```bash
pip install historical_scam_summary
```

```python
from historical_scam_summary import historical_scam_summary

# Minimal usage – relies on ChatLLM7 (free tier)
response = historical_scam_summary(user_input="How Scams Worked In The 1800s (2015)")
print(response)  # e.g. ["Confidence trick (1800s): ...", "..."]
```

If you prefer another provider, instantiate the desired LangChain model and pass it to the function:
```python
from langchain_openai import ChatOpenAI
from historical_scam_summary import historical_scam_summary

llm = ChatOpenAI()  # <-- provide your own API key via env var or param
response = historical_scam_summary(user_input="Masonic scams of 1920s", llm=llm)
```

```python
from langchain_anthropic import ChatAnthropic
from historical_scam_summary import historical_scam_summary

llm = ChatAnthropic()
response = historical_scam_summary(user_input="Pyramid schemes in the 1980s", llm=llm)
```

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from historical_scam_summary import historical_scam_summary

llm = ChatGoogleGenerativeAI()  # set `api_key` via environment
response = historical_scam_summary(user_input="Early Ponzi schemes", llm=llm)
```

| Parameter | Type | Description |
|---|---|---|
| `user_input` | `str` | The raw text to process (title, sentence, etc.). |
| `llm` | `Optional[BaseChatModel]` | LangChain LLM instance to use. If omitted, the default ChatLLM7 is instantiated. |
| `api_key` | `Optional[str]` | API key for ChatLLM7. Either pass it directly or set the environment variable `LLM7_API_KEY`. |
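The documented precedence (explicit `api_key` argument first, then the `LLM7_API_KEY` environment variable) could be implemented along these lines — a hypothetical sketch, not the package's actual internals:

```python
import os

def resolve_llm7_key(api_key=None):
    """Return the explicit key if given, else fall back to LLM7_API_KEY.

    Hypothetical helper mirroring the documented precedence. Returns None
    when neither source is set, which leaves the free tier in effect.
    """
    return api_key or os.environ.get("LLM7_API_KEY")

os.environ["LLM7_API_KEY"] = "env-key"
print(resolve_llm7_key())            # falls back to the environment variable
print(resolve_llm7_key("explicit"))  # explicit argument wins
```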
Note
The default free tier of ChatLLM7 is sufficient for most use cases. If you require higher rate limits, supply your own key.
Register for a free key at https://token.llm7.io/.
You can then provide it via:

```bash
export LLM7_API_KEY="YOUR_KEY"
```

or directly in code:

```python
historical_scam_summary(user_input="...", api_key="YOUR_KEY")
```

- Open an issue or submit a pull request via the GitHub repo: https://github.com/chigwell/historical_scam_summary
- Contributions are welcome! Please follow the standard PR process.
Eugene Evstafev
hi@eugene.plus
GitHub: chigwell
MIT © 2025 Eugene Evstafev