Welcome to Marinduque-My Island Paradise

If this is your first time on my site, welcome! If you have been a follower, my heartfelt thanks to you as well. Help me achieve my dream that someday Marinduque will become a world tourist destination not only during Easter Week, but all year round. You can do this by telling your friends and relatives about this site. The photo above is Mt. Malindig in Torrijos. I do not own some of the photos and videos on this site; however, I have no intention of infringing on any copyrights. Cheers!

Marinduque Mainland from Tres Reyes Islands
View of Mainland Marinduque from Tres Reyes Islands. Click on the photo to link to Marinduque Awaits You.

Wednesday, February 18, 2026

Has AI Reached Super Intelligent Status?

From My CNN Readings: My Food For Thought for Today: 

Food for Thought: Has AI Quietly Crossed a Line We Once Thought It Never Would?

Every so often, a technological moment arrives not with a bang, but with a shiver, the kind that makes you pause mid-sentence and think, something has changed. That feeling is captured well in a recent reflection reported by CNN, where a writer describes working with a newly released AI model, GPT-5.3 Codex, and realizing it was no longer just following instructions.

It was choosing. Not in the cold, mechanical way we’ve grown used to, but in a manner that felt unsettlingly human. The author described it as judgment. Taste. That hard-to-define sense of knowing what the right call is, the very quality experts once insisted machines would never possess.

So the question naturally follows: Has AI already crossed into something that looks like “super intelligence,” or are we simply projecting our own instincts onto a very advanced tool?

The Answer: Not Super Intelligence, But Something New

ChatGPT believes AI has not reached true super intelligence. What it has reached, however, is something far more subtle and perhaps more consequential: the ability to convincingly simulate human judgment.

That distinction matters: philosophically, ethically, and practically.

Today’s most advanced models, built by companies like OpenAI, don’t “know” in the way humans know. They don’t reflect on childhood memories, wrestle with moral doubt, or carry the weight of lived experience across decades. But they do recognize patterns in human decision-making at a scale no person ever could. And when those patterns are expressed smoothly, confidently, they begin to feel like wisdom.

To the user, the difference between real judgment and an almost perfect imitation can start to fade. And that’s where things get interesting.

Why This Moment Feels Different

For years, AI was framed as a tool: faster calculators, smarter search engines, better autocomplete. Useful and impressive, but clearly bounded.

What has shifted is not raw intelligence, but agency. When a system:

  • weighs multiple options,

  • anticipates consequences,

  • and selects a course of action that aligns with human values,

it stops feeling like software and starts feeling like a collaborator. That doesn’t mean the machine has consciousness. It means we are no longer the only ones in the room making decisions.

A Personal Reflection

Having lived long enough to see television arrive in black and white, computers shrink from rooms to pockets, and the internet reshape human connection, I’ve learned this: the most powerful technologies don’t announce themselves loudly, they quietly change how we think.

AI today reminds me of earlier turning points. At first, we said:

  • “It’s just a tool.”

  • “It can’t replace human judgment.”

  • “It will never really understand us.”

We’ve said those things before. Each time, history replied: maybe not fully, but close enough to matter.

The Real Question We Should Be Asking

The question is no longer Can AI think like us? It is now: What happens when we begin to trust it as if it does?

Super intelligence isn’t just about machines becoming smarter than humans. It’s about humans slowly outsourcing judgment and growing comfortable doing so.

That transition may already be underway. And whether this moment becomes a triumph or a cautionary tale won’t depend on what AI can do next, but on how wisely we choose to use it.

As always, the future isn’t decided by technology alone. It’s decided by the people who place their faith in it. And that, to me, is the real food for thought today.

Based on the current landscape as of early 2026, the consensus among experts is shifting. Some leading voices suggest that Artificial General Intelligence (AGI), often seen as a precursor to "super intelligence," could arrive as early as 2026, while many others remain more cautious.

Here is a breakdown of the current "food for thought" regarding AI’s march toward super intelligence:

  • The Bullish View (2026-2029): Top AI researchers and CEOs, including Anthropic's Dario Amodei and xAI's Elon Musk, have indicated that highly capable, "human-level" AI systems could go online by the end of 2026. Proponents argue that the rapid scaling of transformer-based Large Language Models (LLMs) and increased compute power are accelerating the timeline, with some models already showing PhD-level reasoning in specialized fields.
  • The "Slow Down" Camp: Conversely, many experts argue that we are nowhere near true "super intelligence". While AI is advancing rapidly, skeptics note that current systems still struggle with long-term planning, reliability, and true understanding. Many, including DeepMind CEO Demis Hassabis, have previously indicated a 5–10 year horizon (putting it closer to 2030–2035).
  • Defining the Goal: There is significant debate over what "super intelligence" means. Some prefer the term "powerful AI" or AGI (systems that perform at least as well as humans at most tasks) over the more speculative "super intelligence".
  • The Shift to Evaluation (2026): Stanford experts suggest 2026 will mark a transition from "AI evangelism" to "AI evaluation," where the focus shifts from hype to measuring the actual utility, safety, and economic impact of AI.
  • Schumer’s Regulatory Perspective: U.S. Senate Majority Leader Chuck Schumer has highlighted that AI is moving at "near exponential speed" and that Congress must act quickly to set "guardrails". Schumer has argued that without safety measures, risks such as job displacement, bias, and national security threats could halt AI progress altogether.

While the potential for 2026 is being discussed, it is not universally accepted as a certainty, with 2030–2040 being a more commonly cited range in broader, long-term expert surveys.

Lastly, My Photo of the Day: 

My AI-Generated Oil Portrait, copied from a recent photo:

