
Alignment

In one line: The research problem of making AI systems do what humans actually want — not just what we ask for literally.

Alignment is the research field focused on making AI systems behave in ways humans intend. The classic concern: an AI told to 'maximise paperclip production' might decide humans are made of useful atoms.

In practice, alignment is what stops ChatGPT from giving instructions for weapons or generating CSAM. Anthropic's Constitutional AI approach is an alignment technique: it trains the model against a written 'constitution' of values rather than ad hoc rules.
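As a rough sketch of the idea (not Anthropic's actual implementation; `toy_model` and the principles below are illustrative stand-ins), Constitutional AI has the model critique and revise its own drafts against each written principle:

```python
# Toy sketch of a Constitutional AI-style critique-and-revise loop.
# `toy_model` stands in for a real LLM API call; the principles are invented examples.

CONSTITUTION = [
    "Choose the response that is least likely to help with violence.",
    "Choose the response that is most honest about uncertainty.",
]

def toy_model(prompt: str) -> str:
    # Stand-in for a language model: returns canned replies based on the prompt.
    if "revise" in prompt.lower():
        return "Revised answer: a safer, more careful response."
    if "critique" in prompt.lower():
        return "The draft could be more cautious."
    return "Draft answer: a first attempt."

def constitutional_revision(user_request: str) -> str:
    # Generate a draft, then critique and revise it once per principle.
    draft = toy_model(user_request)
    for principle in CONSTITUTION:
        critique = toy_model(f"Critique this reply using the principle: {principle}\n{draft}")
        draft = toy_model(f"Revise the reply to address this critique: {critique}\n{draft}")
    return draft

print(constitutional_revision("Explain alignment."))
```

The revised transcripts produced by loops like this are then used as training data, so the finished model internalises the constitution rather than relying on a hand-written rule for every case.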

You'll feel alignment work when a model refuses a request it considers unsafe. Sometimes models are over-aligned, refusing harmless questions out of caution.

See it in action — ask any AI about alignment on AskAI.free.

Try it free →