OpenAI has updated its ChatGPT usage policy, barring the chatbot from giving medical, legal, or financial advice that would require professional certification.
The new policy, which took effect on October 29, was announced in the company’s official Usage Policies.
Under the new rules, users can no longer use ChatGPT for:
- Medical or legal consultations requiring licensed professionals
- Facial or personal recognition without consent
- Major decisions involving finance, education, housing, migration, or employment without human supervision
- Academic cheating or manipulating exam results
OpenAI said the change is meant to protect users and prevent harm from uses of the chatbot that go beyond its safe limits.
According to reports by NEXTA, ChatGPT will now act as an educational tool, not a “consultant.” The company said growing regulatory concerns and liability risks influenced the decision.
Instead of offering direct advice, ChatGPT will now only explain concepts, describe general principles, and direct users to consult a professional, such as a doctor, lawyer, or financial expert.
The new rules also mean the chatbot will no longer name medications, provide dosages, create legal templates, or give investment tips.
The update aims to ease long-standing concerns about AI tools being misused or relied on for critical, life-impacting decisions.
