historical-scam-summary


Extract structured, domain‑specific summaries from a single sentence or title.
The package turns unstructured text (e.g., “How Scams Worked In The 1800s (2015)”) into a concise, standardized output that lists key historical scams, their mechanisms, and their societal impact, with no additional manual parsing required.


Features

  • Uses llmatch-messages to enforce an output regex and recover only the relevant information.
  • Defaults to the free tier of ChatLLM7 via the langchain_llm7 wrapper.
  • Works with any LangChain LLM instance – OpenAI, Anthropic, Google Generative AI, or custom models.
  • Returns a List[str] – one string per extracted entity (scam, scheme, etc.).
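
To illustrate the regex-enforcement idea behind llmatch-messages: the LLM's raw reply is matched against a pattern and only the captured groups are kept, which is what yields a clean List[str]. This is a minimal standalone sketch of that technique; the tag format and the sample reply are hypothetical, not the package's actual internals.

```python
import re

# Hypothetical raw LLM reply, with each extracted entity wrapped in tags
reply = "<entity>Spanish Prisoner scam</entity><entity>Green goods scam</entity>"

# Non-greedy capture: pull out just the text inside each <entity> tag
pattern = r"<entity>(.*?)</entity>"

entities = re.findall(pattern, reply)
print(entities)  # ['Spanish Prisoner scam', 'Green goods scam']
```

If the reply does not match the pattern, llmatch-messages can retry the request, which is how the package recovers only the relevant information.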

Installation

pip install historical_scam_summary

Quick Start

from historical_scam_summary import historical_scam_summary

# Minimal usage – relies on ChatLLM7 (free tier)
response = historical_scam_summary(user_input="How Scams Worked In The 1800s (2015)")
print(response)   # e.g. ["Confidence trick (1800s): ...", "..."]

Custom LLM

If you prefer another provider, instantiate the desired LangChain model and pass it to the function:

OpenAI

from langchain_openai import ChatOpenAI
from historical_scam_summary import historical_scam_summary

llm = ChatOpenAI()          # <-- provide your own API key via env var or param
response = historical_scam_summary(user_input="Masonic scams of 1920s", llm=llm)

Anthropic

from langchain_anthropic import ChatAnthropic
from historical_scam_summary import historical_scam_summary

llm = ChatAnthropic()
response = historical_scam_summary(user_input="Pyramid schemes in the 1980s", llm=llm)

Google Generative AI

from langchain_google_genai import ChatGoogleGenerativeAI
from historical_scam_summary import historical_scam_summary

llm = ChatGoogleGenerativeAI()   # set `api_key` via environment
response = historical_scam_summary(user_input="Early Ponzi schemes", llm=llm)

Configuration

Parameter   Type                       Description
user_input  str                        The raw text to process (title, sentence, etc.).
llm         Optional[BaseChatModel]    LangChain LLM instance to use. If omitted, the default ChatLLM7 is instantiated.
api_key     Optional[str]              API key for ChatLLM7. Either pass it directly or set the environment variable LLM7_API_KEY.
Note
The default free tier of ChatLLM7 is sufficient for most use cases. If you require higher rate limits, supply your own key.


Getting an LLM7 API Key

Register for a free key at https://token.llm7.io/.
You can then provide it via:

export LLM7_API_KEY="YOUR_KEY"

or directly in code:

historical_scam_summary(user_input="...", api_key="YOUR_KEY")

Issues & Contribution

Bug reports and pull requests are welcome on the GitHub repository (chigwell/historical-scam-summary).


Author

Eugene Evstafev
hi@euegne.plus
GitHub: chigwell


License

MIT © 2025 Eugene Evstafev

