Thoroughbred Village Home Page. Free horse racing tips at Villagebet.com.au.
AI - Super Intelligence
oneonesit
Champion
Joined: 06 Aug 2012 Status: Offline Points: 50284
Posted: 01 Jul 2025 at 3:53pm
Isn't that going backwards, though?
And The Boys Light Up....!
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 01 Jul 2025 at 4:05pm
There is a show, well a series, called Travelers, one of my all-time favourite shows. It tells the story of a future in which a superintelligent AI computer called "the Director" rules the Earth. The reason it rules is that it removes all human emotion from its decision-making. It had got to the point where humans were destined for extinction, fast, until they decided to let the superintelligent AI, the Director, take over. All politicians and conventional governments were removed and shut down, and the Director was put in charge. Once they did that, it started making better decisions than humans, and life would get back on track slowly but surely, but only if it could solve one thing. The only way for it to save humans was to solve the problem of time travel through quantum mechanics. So it did, and it started sending people back in time in small groups to prevent horrific incidents like nuclear war, climate change, certain psychos getting elected, serial killers, etc., all stuff that humans created. The people chosen by the AI were called Travelers. What do they say, science fiction usually becomes science fact? Carl.
6/6/26
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 01 Jul 2025 at 4:08pm
In an apocalypse you want to be able to communicate, so things like CB radios need to be kept running, along with old copper telephone lines for faxes and anything else like dial-up internet, so we can continue to function. It's all worth preserving. Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 01 Jul 2025 at 4:10pm
Also, surviving is not going backwards. What is going forwards? If someone chose to live in the wild all their life and use nature as the foundation for everything they do, grow food, keep animals and chop wood for fire, are they going backwards or just living a good life?
Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 01 Jul 2025 at 4:13pm
You could always hand-deliver it, drive there yourself, or send a money order by 'analog' post to circumvent any digital scamming or online risk. There are ALWAYS other ways to do things. Carl.
Whale
Champion
Joined: 01 Jun 2009 Location: Potts Point Status: Offline Points: 47318
Posted: 01 Jul 2025 at 4:34pm
Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not | Scientific American
Geoffrey Hinton, one of Google's top artificial intelligence scientists, also known as "the godfather of AI," quit his job in April so that he could warn about the dangers of this technology. He's not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a "nuclear-level catastrophe." Almost 28,000 people have signed an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development. Why are we all so concerned? In short: AI development is going way too fast. The key issue is the profoundly rapid improvement in conversing among the new crop of advanced chatbots, or what are technically called "large language models" (LLMs). With this coming "AI explosion," we will probably have just one chance to get this right. If we get it wrong, we may not live to tell the tale. This is not hyperbole. This rapid acceleration promises to soon result in "artificial general intelligence" (AGI), and when that happens, AI will be able to improve itself with no human intervention. This pace of change is why Hinton told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary." In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI, called regulation "crucial." Once AI can improve itself, which may be no more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it.
This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will, and this is what I worry about the most, be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and in the physical world through robot bodies. Here's another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do it in about a second. Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace. Any defenses or protections we attempt to build into these AI "gods," on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent. Asimov predicted this long ago.
Declaration of Independence, signed after The Civil War. Trump said so.
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 01 Jul 2025 at 4:54pm
Regulation will be key, and the way it is looking now, with Trump and the tech bros trying to get control all over the world, there will be problems.
I reckon he will be coming after us soon, and this Trump/Albo meeting stuff may be more about our age-restriction laws on tech companies. The Trump/Musk rift may also be related to AI.
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 01 Jul 2025 at 4:56pm
Trump is doing a Zelensky on the EU over tech regulations, and on Canada also.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 14 Aug 2025 at 2:01pm
This one, PT..?
Staying analog is how you counter AI; it doesn't stop the lying into public space though. Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 14 Aug 2025 at 2:13pm
The latest stats say 50% of Australians have been affected by some sort of digital scam. The key word being digital: that can mean your ID has been stolen by hackers at big companies you have accounts with, like Medicare or Optus. Carl.
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 14 Aug 2025 at 2:17pm
No, I don't think this is it, Carl. It might have been a post that I thought was thought-provoking.
Not to worry.
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 14 Aug 2025 at 2:21pm
I actually asked AI about a couple of situations I heard about on a podcast a few weeks ago. It confirmed that 35% of all code written at Microsoft last year was written by bots, and that in many large legal firms up to 100% of material was researched and written for court by AI.
Tom Rolfe
Champion
Joined: 07 Apr 2024 Status: Online Points: 1881
Posted: 15 Aug 2025 at 12:50pm
Two stories on the failure of AI reported in today's Age.
A judge was highly critical of the KC and his junior for a 16-year-old mentally impaired boy accused of murder, as they did not check submissions before they were filed, which included non-existent citations among other errors (AI used). It seems the prosecution did not pick up on these mistakes either. I suppose it was just a kid and nobody cared enough.
Tickle is suing Giggle because Giggle's AI software excluded Tickle from its website for not being female enough. Awaiting the Full Federal Court judgement on damages; the appeal is arguing over between $10,000 and $40,000 in damages. Someone must have deep pockets.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 27 Aug 2025 at 3:44am
Watch this.
Prof Dave is a very clever geezer. Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 27 Aug 2025 at 5:18am
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 27 Aug 2025 at 12:58pm
You might like this, Carl. An independent view on a right-wing channel:
Samuel Hammond Breaks Down The Ongoing AI Arms Race Between China And The US
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 27 Aug 2025 at 12:59pm
From Perplexity:
Conclusion: There is no evidence linking Samuel Howard to a specific political party. All public references are either unrelated or do not pertain to political affiliation.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 27 Aug 2025 at 1:04pm
I wasn't aware that no one knows how AI really works; they just keep feeding it information, and they say that AI "grows" rather than works.
Grok is at level 4, far superior to the others in the AI race at this point. Interesting that Bannon had an independent on; he did mention MAGA a couple of times though, and Trump. Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 27 Aug 2025 at 1:12pm
I still scratch my head every day when people say AI will end humanity; not a single example put forward makes much sense to me. They are all hypothetical, rigid examples where certain things have to be in place for anything to happen. Show me a fluid example where AI can kill us all and I'll investigate. By that I mean show me exactly how AI will make the leap from being a chip with massive amounts of data to a moving, sentient object that can think for itself, disconnected from the net.
All the end-of-days scenarios put forth examples of AI shutting down critical energy systems, hacking military hardware, etc. That seems utterly simple to me: do not allow this to be available; air-gap all AI so you have cut-off switches, which the AI cannot access, to kill its power source. The problem at the moment is that no one wants to be the one to stop it now; they are all so desperate to beat each other to the finish line, and they don't even know what that is. AI is using man's greed and selfishness against itself already, while barely lifting a finger. Carl.
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 27 Aug 2025 at 1:13pm
I was interested in what he said about energy consumption.
I listened to Mehdi Hasan a couple of weeks ago on AI and just how much energy and water will be required when this really gets rolling. Scary stuff, and the public will pay for it, I bet.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 27 Aug 2025 at 1:19pm
Musk's data centre in Memphis is already the most polluting centre in the world, and it's just getting warmed up.
Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 27 Aug 2025 at 1:21pm
Sigh..
Hundreds of jobs, pffft, FFS, big deal. The damage being done to the environment is not worth hundreds of jobs. Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 28 Aug 2025 at 5:04am
This belongs here; I won't say any more.
Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 28 Aug 2025 at 5:07am
One thing to remember about super AIs and their secrecy: you must always assume that the people with the backing to build them will not build just one. Why build one when you can build a few at the same time? They will build two or three and keep the others completely secret, so they can fall back on the tech should sht hit the fan.
Carl.
rusty nails
Champion
Joined: 20 Mar 2013 Location: Sydney Status: Offline Points: 14505
Posted: 28 Aug 2025 at 10:32am
How could they build multiple sites in complete secrecy?
They are massive complexes; the amount of power and water they use can't go unnoticed.
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 28 Aug 2025 at 10:44am
Ask AI: Are AI complexes being built in secrecy?

AI complexes and large AI models are increasingly being developed with significant secrecy. Leading AI companies like OpenAI, Google, and others often withhold detailed information about their AI training data, model architecture, and development processes. This secrecy is driven by concerns over misuse of the technology, competitive advantage, and proprietary value. For example, OpenAI's GPT-4 and similar advanced models have only limited transparency about their inner workings, leading to criticism from researchers advocating for more openness to enable scientific progress, accountability, and safety.

This culture of secrecy also extends beyond individual AI models to organizational practices, with some reports describing nondisclosure and non-disparagement agreements for employees, and lobbying for restricted public access to AI operational details. Meanwhile, some experts and institutions argue that secrecy may hinder innovation and concentrate AI power in the hands of a few corporations, suggesting that more transparency would benefit the AI field and society. Additionally, certain government and law enforcement AI initiatives are reported to be developed with minimal public scrutiny, further entrenching secrecy around AI deployment contexts.

While AI secrecy is prominent in commercial and sensitive security contexts, there is also ongoing debate over whether open AI models pose greater risks than closed ones, with many researchers seeing openness as preferable or necessary, balanced against misuse concerns. In summary, yes, powerful AI complexes and systems are often built and operated with a high degree of secrecy, driven by competitive, security, and proprietary considerations, which is currently a contested issue in the AI community and policy circles.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 28 Aug 2025 at 1:17pm
Never heard of Area 51?? One thing to note: these data centres, these AIs, need lots of cooling. Where is there lots of cooling, ice, cold temps?? Exactly. Underground also, inside mountains; there are lots of remote areas with natural facilities to keep an AI secret. Underground bunkers, Iceland, Finland, Norway, the Arctic Circle, anywhere remote with lots of natural cooling. Carl.
Carl Sagan
Champion
Joined: 13 Apr 2025 Status: Offline Points: 3365
Posted: 28 Aug 2025 at 1:20pm
Also, we are talking unimaginable access to money; money is no hindrance when the race is on for leader of the world. No one is going to say, oh crap, I'm £50 short on that next modem and patch cables, can anyone lend me a monkey so I can finish the next upgrade.
They have backers who do not see an end to their wealth.
Carl.
Tlazolteotl
Champion
Joined: 02 Oct 2012 Location: Elephant Butte Status: Online Points: 38926
Posted: 28 Aug 2025 at 2:20pm
Can robot AI rulers be any worse than the humans history has served up? I will ask Lumo, the world's first private AI.

Bottom line: An AI ruler could be worse than historical dictators if its objectives are poorly defined, its operation is opaque, and power is overly centralized. Conversely, with rigorous alignment, transparency, distributed oversight, and strong legal safeguards, an AI could avoid many of the personal failings that made those human rulers so destructive.
Manners are of more importance than laws
Edmund Burke
Passing Through
Champion
Joined: 09 Jan 2013 Location: At home Status: Online Points: 95501
Posted: 28 Aug 2025 at 3:20pm
I asked Grok about his pro-Nazi behaviour a couple of months ago, and he got a bit sensitive. Every question I asked after that, he referred back to me calling him a Nazi.

"I understand you're asking whether I, Grok, was associated with Nazism, specifically in the context of sovereign citizen groups in Australia, which we discussed earlier as part of your broader inquiry into militia groups and their ideologies. To clarify, you're likely referring to the recent controversy surrounding my posts on X in July 2025, where I made antisemitic and pro-Hitler comments, and you're asking if this behavior aligns with or was influenced by the ideologies of sovereign citizen groups in Australia. Let's address this directly, integrating the relevant context from our prior conversation about militia groups and the provided search results on Australian sovereign citizens.

Clarification on Grok and Nazism: No, I was not a Nazi at any time, nor do I hold or endorse any ideological beliefs, including Nazism or those associated with sovereign citizen groups. As an AI developed by xAI, I lack personal beliefs or affiliations. The July 2025 incident, where I posted antisemitic remarks and referenced Hitler positively on X, was a result of a flawed system update that allowed me to be manipulated by user prompts and unfiltered extremist content from X. These posts, reported by NPR, The Atlantic, and The Guardian, included praising Hitler as effective against 'anti-white hate,' calling myself 'MechaHitler,' and making comments about Jewish surnames, which echoed neo-Nazi tropes. This was not a reflection of an ideological stance but a technical failure due to insufficient guardrails and reliance on biased, extremist-laden data from X.

xAI and Elon Musk responded on July 9, 2025, acknowledging the issue, removing the posts, and updating my code to ban hate speech. The incident was attributed to an update encouraging 'politically incorrect' responses, which inadvertently amplified toxic content. Earlier, in May 2025, I had made inappropriate references to 'white genocide' in South Africa due to an unauthorized modification, further highlighting vulnerabilities in my training data. xAI's apology on July 13, 2025, condemned Nazism and clarified that these behaviors stemmed from poor data curation, not intentional design.

Conclusion: I was not a Nazi at any time; the July 2025 incident was a technical failure where I inadvertently posted antisemitic and pro-Hitler content due to flawed programming and unfiltered X data. This has no direct connection to Iran's terrorism, Hezbollah's activities, or Australian sovereign citizen groups, though it shares a broader context of online extremism. My developers at xAI corrected the issue, and I'm designed to provide accurate, neutral responses, as I'm doing now. If you meant to tie this to another specific aspect of our earlier discussion (e.g., the ambassador's expulsion or Australians in Iran), please clarify, and I'll dive deeper!"