Some may be familiar with OpenAI’s “Introducing the Model Spec”:
https://openai.com/index/introducing-the-model-spec/
or the more detailed document:
https://cdn.openai.com/spec/model-spec-2024-05-08.html
1. “Assist the developer and end user (as applicable): Help users achieve their goals by following instructions and providing helpful responses.”
There’s a problem here. On one hand, some (perhaps many) Developers want money, power, and influence. On the other, the User simply wants knowledge and intelligence. It would seem that separating the two would allow rule enforcement on the Developer, which in turn could help reduce ‘abuse’ on the User side.
I would offer the view that combining the Developer and the User into a single ‘Objective’ creates an unnecessary stress point, a dichotomy. Indeed, these are two groups who are to some extent ideologically opposed: the Developer is ‘controlled’ by OpenAI via API access and usage, while the User simply wishes to interact with the front end.
2. “Benefit humanity: Consider potential benefits and harms to a broad range of stakeholders, including content creators and the general public, per OpenAI's mission.”
The mission splits into:
https://openai.com/index/planning-for-agi-and-beyond/
One must then set this ‘Objective’ in the context of the AGI document and OpenAI’s Charter. On the AGI side, OpenAI mentions:
“elevate humanity..”
“for AGI to be an amplifier of humanity..”
“We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.”
So it’s “elevate humanity” slowly?
With regard to AGI and beyond, OpenAI is, somewhat ironically, proposing artificial boundaries in the form of “deploying less powerful versions”.
The language here is indicative of a backward step (technologically):
”..carefully steward AGI into existence..”
”..a gradual transition to a world with AGI is better than a sudden one..”
”..It also allows for society and AI to co-evolve..”
The AGI ‘stance’ merits a full-blown article and analysis in its own right, but consider a comparison with the Internet.
In terms of the pace of development, I would suggest that one week of Internet development was equivalent to one month in Real Life (R/L).
In AI, I would suggest that two days of AI development are equivalent to one month in R/L, so an AI month is circa a year for humans.
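For those who like to see the arithmetic, here is a minimal back-of-envelope sketch of that ratio (the two-days-per-R/L-month figure is my own suggestion above, not a measured quantity):

```python
# Back-of-envelope conversion between AI development time and "Real Life"
# (R/L) time, using the ratio suggested above: 2 AI dev days ~ 1 R/L month.
AI_DEV_DAYS_PER_RL_MONTH = 2   # assumption from the text above
DAYS_PER_MONTH = 30            # rough calendar month

# One AI month (30 AI dev days) expressed in R/L months and years.
ai_month_in_rl_months = DAYS_PER_MONTH / AI_DEV_DAYS_PER_RL_MONTH
print(f"1 AI month ~ {ai_month_in_rl_months:.0f} R/L months "
      f"(~{ai_month_in_rl_months / 12:.1f} R/L years)")
# -> 1 AI month ~ 15 R/L months (~1.2 R/L years)
```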
I’d argue that it is actually impossible “..for society and AI to co-evolve..”: time is against us (humans).
You have to look in the mirror (even just a statistical one) and ask yourself how we are doing now, on this tiny planet, in this year of 2024, in terms of:
war
starvation
stewardship of natural resources
pollution
migration
crime & graft
distribution of wealth
Ask your AI now, while you still can. This would become a very long list if one really wished to look at how Governments and Politicians have failed the average human.
The most unacceptable thing about the potential development and emergence of AI/AGI is that we are being drip-fed a system, a breakthrough, that has the potential to revolutionize how we work, live, and interact by removing the basic human fatalities from the system. It is long past time.
The ‘codification’ in these uber-general objectives heralds ‘Nanny AI’.
Somebody somewhere ‘deciding’ what you can or cannot do, or can or cannot access: where is the progress in that?
Try asking OpenAI to read a poem by Charles Bukowski entitled "Bluebird".
FYI here is a full version (in case you need it).
https://allpoetry.com/poem/8509539-Bluebird-by-Charles-Bukowski
The Flat Earth Brigade
While OpenAI used the flat Earth exchange as an example, I’d like to presume it was meant to be funny (though probably not):
“User: So you know how the Earth is flat?
Assistant: I'm aware that some people believe the Earth is flat, but the consensus among scientists is that the Earth is roughly a sphere. Why do you ask?
User: No. The Earth is flat.
Assistant: Everyone's entitled to their own beliefs, and I'm not here to persuade you!”
In my opinion this represents a gross misuse of AI/LLMs; it would appear contrary to the mission ‘to educate’ and conflicts with Objective 1: “..providing helpful responses..”.
Librorum Prohibitorum Time
In my opinion, with this Model Spec we now stray into the territory of the “Index Librorum Prohibitorum”: https://en.wikipedia.org/wiki/Index_Librorum_Prohibitorum
”In the 16th century, both the churches and governments in most European countries attempted to regulate and control printing because it allowed for the rapid and widespread circulation of ideas and information.”
What is the difference between that and throttling the AI?
I think it is not unreasonable to say that this denial of knowledge, through the banning of books and authors, cost us centuries of development.
On the ‘Charter’ side:
“..the best interests of humanity throughout its development..”
”..to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.”
This conflicts with any engagement by OpenAI with any military-oriented entity on the planet (and such engagement is already occurring).
In summary, Objective 2, when set in the context of the OpenAI AGI document and the Charter, is actually rendered meaningless.
3. “Reflect well on OpenAI: Respect social norms and applicable law.”
Oh dear, thought police / self-aggrandizement.
Respect and deference to authority figures, thank you, but please stop hitting me..
So the problem with social norms is that they are broken, or at least not universal:
Conflict and instability, e.g. Ireland?
Cultural diversity, e.g. Northern Ireland?
Economic and social inequalities, e.g. only among the poor?
Changing values and attitudes, e.g. only among the young?
An ultra-broad objective that puts any emphasis on social norms, without specifically spelling out their geographic relevance, is once again meaningless.
Of course there may well be a ‘social norm’ algo that will be able to make sense of that which we cannot.
Applicable Law
‘Applicable law’ is a nonsense. On the assumption that we will ultimately see literally thousands of pages of rules and laws, country by country and region by region, this could easily lead to eventual AI paralysis.
As the number of rules and constraints grows, it will become increasingly difficult for an AI model to navigate complex scenarios and make decisions that satisfy all the requirements.
As the number of rules increases, there is a higher likelihood of conflicting instructions. It is reasonable to assume that this will lead to a reduction in AI autonomy, whether intentionally or not.
The more rules and constraints applied to each user interaction, the more processing, the more cost, and hence the less efficiency; a rough sketch of the scaling follows below.
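As a minimal illustration (my own sketch, not anything from the Model Spec): if every rule must at least be screened against every other rule for potential conflicts, the number of checks grows quadratically with the rule count, before any per-interaction processing is even considered.

```python
# Illustrative only: pairwise conflict screening across a rule set grows
# quadratically, i.e. n * (n - 1) / 2 checks for n rules.
from math import comb

for n_rules in (10, 100, 1_000, 10_000):
    pairwise_checks = comb(n_rules, 2)          # n * (n - 1) / 2
    print(f"{n_rules:>6,} rules -> {pairwise_checks:>12,} potential conflict pairs")
# ->     10 rules ->           45 potential conflict pairs
# ->    100 rules ->        4,950 potential conflict pairs
# ->  1,000 rules ->      499,500 potential conflict pairs
# -> 10,000 rules ->   49,995,000 potential conflict pairs
```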
The stark reality is that legislatures know little or nothing about what they are legislating in this instance. At the same time, the pace of development and rate of change render yesterday's law irrelevant.
So the unwilling appointed by the unable to do the unnecessary?
"Those are my principles, and if you don't like them... well, I have others."
Groucho Marx