The Something Awful Forums > Discussion > Debate & Discussion > Is anything about transhumanism/the singularity scientifically substantial?
Zombywuf
Mar 29, 2008
Fungah posted:
No, I'm not asking for a *complete* roadmap, I'm asking for something---anything---which shows you know what you're talking about. I am a researcher in the field and spend my whole time reading about and probing the edge of what is currently possible in AI and, previously, computational neuroscience, and I don't believe strong, human-like AI is possible. Similarly, I cannot name a single one of my colleagues who believes strong, human-like AI is possible. So if you're in possession of some evidence or ideas which point to the contrary I'd love to hear them.
I'd love to hear what you and your colleagues base this negative belief on other than "turns out it's hard." Honestly, what the hell are you working towards as a group?
EDIT to clarify: earlier you were making a fuss about the fact that we've only solved a tiny piece of the puzzle in 40 years. The human race is a fair bit older than that, and I feel confident in saying it has a fair bit more to go before we die out. Are you really saying that in 1000 years or 10,000 years you firmly believe we will never have made a human like AI?
quote:
Ok so you don't know what you're talking about.
Actually I was referencing an old joke I'm surprised you don't seem to have heard.
quote:
First "existence" isn't a proof. A proof actually requires some working and understanding.
I suspect you should go talk to some of your colleagues in the maths department to see if they agree with your definition of proof.
a lovely poster posted:
Do you think we'll be able to travel faster than light in the future too? Perpetual motion machines? Are there no limits that we cannot break with enough time?
Of course there are limits, although I'm willing to suggest our understanding of what they are may be incorrect at the present time. But you seem to be making a rather bizarre equivalence here. What is so special about the human body that it represents some kind of universal limit that we, mere mortals, would be unable to overcome?
Zombywuf fucked around with this message at Oct 28, 2011 around 01:20
# ? Oct 28, 2011 01:13
Fungah
Jul 02, 2003
Fungah! Foiled again!
trandorian posted:
Why does it need to be human like to be strong? Why can't it be dolphin like?
I don't come up with these stupid terms and qualifications of what it means to attain the singularity...
# ? Oct 28, 2011 01:13
PrBacterio
Jul 19, 2000
Fungah posted:
It is a far more deep and interesting question to discuss how I might be right (or if I'm wrong, why I'm wrong) rather than to just assume it is possible and not question it anymore. That is the question which tests the gaps in our knowledge and whether these gaps might be fundamental and insurmountable.
I actually quite disagree, which is why I made my earlier post in the first place. The concept of AI has been around for a long time without much thought ever having been given to the ramifications of what would happen if it did become possible. At the same time, we here right now are not going to be the ones who solve the problem of AI. The only way to prove that it's possible would be to do it; disproving it is, of course, impossible in principle, but the assumption grows stronger the longer it hasn't happened. Still, it is a question that we cannot possibly resolve. You were talking about science and naivety regarding it in your post. This right here is a problem whose answer is, as of now, still utterly inaccessible to even the best and brightest scholars in the field. I think giving a definite answer to such a question at this point is actually the height of naivety and arrogance.
# ? Oct 28, 2011 01:24
PrBacterio
Jul 19, 2000
a lovely poster posted:
Do you think we'll be able to travel faster than light in the future too? Perpetual motion machines? Are there no limits that we cannot break with enough time?
Well, as regards this, I think it fair to point out that, at least regarding intelligence, there are at any rate precursors to be found in nature, unlike faster-than-light travel or perpetual motion, for which, to the best of our knowledge, no examples can be found anywhere at all. So in that way there are at least examples where we can observe that yes, it is possible to arrange matter in such a way that it performs tasks of human-equivalent intelligence, which is to say, actual human beings themselves. The question is then reduced to whether or not we'll ever be able to reproduce that effect, rather than whether it is possible even in principle, which can be denied with a large degree of certainty for the examples you cited.
# ? Oct 28, 2011 01:29
SubG
Aug 19, 2004
Let me tell you a little story about HURF and DURF
Zombywuf posted:
Well no, Step 1 is invent a human like/sentient AI. Step 2 is computers get faster. Faster is not about clock speed, it's about processing efficiency which can be measured in 2 ways, per watt and per kilogram.
Basically, think of all the empty space inside your computer, your CPU could get lost under your fingernail, if the CPU was the size of all that empty space imagine how much processing power would be available to you.
So? Say you have an arbitrarily (but non-infinite) amount of computing power. Tell me how you get intelligence out of it.
Put in slightly different terms, you seem to be operating under the presumption that building intelligence is computationally bound. I do not see the justification for this belief.
Beyond that, it looks an awful lot like you're just stating your conclusion as `step 1'. Here's how to have sex with a supermodel: step 1, have sex with a supermodel. Therefore, I extrapolate that I will eventually be having sex with so many supermodels nobody will be able to count them all.
Zombywuf posted:
Imagine if you could run a thousand Einsteins in parallel? Once he gets over the hill you reset him to his golden years.
Either `Einsteinness' is deterministic or it isn't. If it isn't, then what leads you to believe that you can reproduce an arbitrary number of them, run them in parallel, and get some specific output out of them? If it is deterministic, then what leads you to believe that you would get anything out of them that we did not in fact get out of the historical Einstein?
I get that we could in principle present a simulated Einstein with inputs that the historical Einstein did not have access to. I do not see the reason to believe that this would necessarily result in prolific output. Put in slightly different terms, it isn't clear to me why we should presume that `genius' (or whatever we want to call it) is somehow or other a Platonic substance which can be abstracted away from the context in which it is observed.
Zombywuf posted:
Well one of your assumptions seems to be that the singularity is about exponential growth.
Indeed, this appears to be explicit in the formulation of the singularity as a subject for discussion. If you want to play the game of defining this down to `technology will progress' (by saying that something like the singularity happened when we invented language or whatever) then I'd refer you to my response to this earlier in the thread.
Zombywuf posted:
You're going to have to explain that one in more detail.
You are making a point about the usage of the word `intelligence' that has nothing to do with the discussion.
PrBacterio posted:
It's more of an observation about the effects of AI, since many people believe, or at least used to believe, that it's obvious that, looking at the progress in terms of computing power that we've been making, we are going to have human-equivalent strong AI at some point in the future, so that's usually taken as a basic assumption underlying the idea of the singularity.
I don't see it as obvious, but if it is obvious it should be easy to explain. How does increasing computing power lead inevitably to `strong AI'? Perhaps it would help to start explaining what you mean when you say `computing power' and how you're measuring it, and what `strong AI' means and how you evaluate whether or not something is a `strong AI', and thence demonstrate how scaling one entails the other.
# ? Oct 28, 2011 02:16
PrBacterio
Jul 19, 2000
SubG posted:
I don't see it as obvious, but if it is obvious it should be easy to explain. How does increasing computing power lead inevitably to `strong AI'? Perhaps it would help to start explaining what you mean when you say `computing power' and how you're measuring it, and what `strong AI' means and how you evaluate whether or not something is a `strong AI', and thence demonstrate how scaling one entails the other.
I probably didn't express myself clearly because you managed to completely miss my point there. *I* didn't say that it is obvious that we're going to have strong AI; I said that there are, or at least used to be, many people who think/thought that. Actually, to say that people thought it to be 'obvious' is probably not exactly right. It would be more correct to say there seems to have been an *expectation* of AI being just around the corner for the longest time. My point, then, being that the singularity is not actually an argument in support of AI, but a response to it: calling out the narrowness of people's imaginations regarding the consequences of such a development. In that way, it is a response to every ill-thought-out Star Trek fantasy of a Commander Data existing in a world that is in no real way any different from our present one. It's basically just someone pointing out "but don't you realize what that would mean?" whenever the question of AI comes up.
Imagine, if you will, someone being shown a demonstration of a light bulb in the early days of the technological use of electricity, and their reaction being to say, "why, if someone were to develop an inexhaustible source of electrical power, we would never have to be in the darkness again!" Of course, as we all know, there is no such thing as an infinite source of power, but that is beside the point; surely anyone can see how much more far-reaching the consequences of such a source would be than what's expressed in that statement. In fact, all of human life as we now know it would be completely transformed. Now, unlike free power, the jury is still out regarding the possibility of what I earlier called 'strong AI' (I noticed you putting it in quotes when you replied to me, but the term is still useful in that you know what I mean even if it is ill-defined). But also unlike free power, I think the prediction is correct that the consequences of such a development would be as utterly unintelligible to modern-day humans as our present-day affairs are to an insect. And it is this prediction, not the condition it rests on of AI even being possible in the first place, that should, in my opinion, properly be referred to as "the singularity."
# ? Oct 28, 2011 02:51
SubG
Aug 19, 2004
PrBacterio posted:
But also unlike free power, I think the prediction is correct that the consequences of such a development would be as utterly unintelligible to modern-day humans as our present-day affairs are to an insect.
If you accept that we must presume the capacity for such a thing (and that it entails Star Trek levels of fantasy), then it doesn't seem like much of a claim. Your assumptions are utterly unintelligible to contemporary affairs. The fact that the consequences of such an assumption are unintelligible doesn't appear to tell us anything other than if you posit something wacky, there will be wacky consequences.
And I'll point out that this is another defining down of the term `singularity'. The term in general is clearly predicated on a bunch of hoo-haw about Moore's Law and exponential growth in technology and so on. You are of course free to distance yourself from such claims, but then you're not talking about the singularity in the sense that singularity proponents like Kurzweil are.
This is germane in the context of the thread, as you're basically proposing a notion of the singularity that is intrinsically not scientifically substantial, which makes the answer to the question posed in the thread title `no' more or less by definition.
# ? Oct 28, 2011 03:04
a lovely poster
Aug 05, 2011
PrBacterio posted:
Well, as regards this, I think it fair to point out that, at least regarding intelligence, there are at any rate precursors to be found in nature, unlike faster-than-light travel or perpetual motion, for which, to the best of our knowledge, no examples can be found anywhere at all. So in that way there are at least examples where we can observe that yes, it is possible to arrange matter in such a way that it performs tasks of human-equivalent intelligence, which is to say, actual human beings themselves. The question is then reduced to whether or not we'll ever be able to reproduce that effect, rather than whether it is possible even in principle, which can be denied with a large degree of certainty for the examples you cited.
http://io9.com/5842947/scientific-b...ster-than-light
# ? Oct 28, 2011 03:20
PrBacterio
Jul 19, 2000
SubG posted:
If you accept that we must presume the capacity for such a thing (and that it entails Star Trek levels of fantasy), then it doesn't seem like much of a claim. Your assumptions are utterly unintelligible to contemporary affairs. The fact that the consequences of such an assumption are unintelligible doesn't appear to tell us anything other than if you posit something wacky, there will be wacky consequences.
And I'll point out that this is another defining down of the term `singularity'. The term in general is clearly predicated on a bunch of hoo-haw about Moore's Law and exponential growth in technology and so on. You are of course free to distance yourself from such claims, but then you're not talking about the singularity in the sense that singularity proponents like Kurzweil are.
This is germane in the context of the thread, as you're basically proposing a notion of the singularity that is intrinsically not scientifically substantial, which makes the answer to the question posed in the thread title `no' more or less by definition.
I don't think the possibility of AI is a Star Trek level of fantasy; I believe the manner in which people think about that possibility is. As I said, I don't think anyone can say with confidence, at this point in time, whether or not the kind of AI that we are talking about is ever going to be possible. More to the point, I don't even think we're at a point where a fruitful discussion over whether or not it can be done is possible. I do believe though it is a real enough possibility that thinking about the consequences of such a thing is worthwhile. It is undeniable that our capabilities to attack computational problems have increased, and are continuing to increase, at an astonishing rate. Also (in my opinion, though this is more of a debatable point) there has been a steady stream of advances in the field of AI, even if they haven't come as fast or been as successful as the most optimistic predictions made them out to be. At this point, I think solid points could be made from both sides of the disagreement over the possibility of strong AI. And they have been made, repeatedly, for instance even in this thread.
Don't you think it is interesting that, if anyone ever did manage to develop an AI of the sort we've been discussing, human affairs as we know them today would end so utterly that the era that followed would be entirely alien and incomprehensible to any of us? Or are you so convinced of the impossibility of AI that the prospect lacks all reality to you? Because if so, I think you are wrong. You have been able to make some very strong arguments in this thread against the explicit supporters of AI, but ultimately, all of your strongest arguments have come down to a lack of knowledge of how to accomplish it in practice, not an impossibility in principle.
Or do you disagree that the development of strong AI would entail such consequences? But then we're actually debating the singularity, which you called a wacky tautology, deriving nonsense from nonsense, right in the above post.
As far as defining down goes, I'll say that I'll have no part in anything of Kurzweil's; the singularity, however, is not an idea that was invented by Kurzweil, but by Vernor Vinge.
# ? Oct 28, 2011 03:27
SubG
Aug 19, 2004
a lovely poster posted:
http://io9.com/5842947/scientific-b...ster-than-light
CERN forgets how GPS satellites work.
# ? Oct 28, 2011 03:29
McDowell
Aug 02, 2008
Surely, Caligula was my greatest role
SubG posted:
CERN forgets how GPS satellites work.
You mean people make mistakes and overlook things? Well I can't wait to have my brain scooped out and uploaded into a computer so I don't have to worry about these kinds of mistakes.
# ? Oct 28, 2011 03:32
a lovely poster
Aug 05, 2011
McDowell posted:
You mean people make mistakes and overlook things? Well I can't wait to have my brain scooped out and uploaded into a computer so I don't have to worry about these kinds of mistakes.
Fermilab and another lab are setting up experiments to replicate CERN's findings. His "debunking", while a convenient explanation, will be tested in the near future.
http://www.nature.com/news/2011/110...s.2011.554.html there is a link to the OPERA paper at the bottom
Either way, maybe I chose two bad examples, but I think we really need to rethink the "plausible = inevitable" assumption. I can't be the only one not buying it.
# ? Oct 28, 2011 03:39
SubG
Aug 19, 2004
PrBacterio posted:
More to the point, I don't even think we're at a point where a fruitful discussion over whether or not it can be done is possible. I do believe though it is a real enough possibility that thinking about the consequences of such a thing is worthwhile.
The two statements above appear to be orthogonal to me. Evidently they do not appear to be so to you. Can you give other examples of things about which `fruitful discussion over whether or not it can be done' isn't possible, but which are `a real enough possibility'?
I'm actually asking here; I can't think of other examples, barring things which are---at least in principle---directly testable. It sounds to me like a fairly concise definition of a metaphysical debate.
PrBacterio posted:
It is undeniable that our capabilities to attack computational problems have increased, and are continuing to increase, at an astonishing rate.
If it is undeniable, is it worth defining and explaining?
PrBacterio posted:
Don't you think it is interesting that, if anyone ever did manage to develop an AI of the sort we've been discussing, human affairs as we know them today would end so utterly that the era that followed would be entirely alien and incomprehensible to any of us? Or are you so convinced of the impossibility of AI that the prospect lacks all reality to you?
I find all contrafactual conditionals to be about equivalently interesting, which is to say not very. I mean if you want to dress fantasy up in sciency lingo because you enjoy doing so, that's cool. But speculating about the effects of something that we know more or less nothing about---by your own admission---seems like a scientifically vacuous exercise. What if invisible unicorns existed? Well, we'd be able to travel instantaneously using them. Why? Well, one of the properties I'm ascribing to invisible unicorns is that they can teleport. Are you seriously trying to argue that invisible unicorns can't teleport? If that's what you're saying, I'd like to see what reason you have for believing this.
PrBacterio posted:
As far as defining down goes, I'll say that I'll have no part in anything of Kurzweil's; the singularity, however, is not an idea that was invented by Kurzweil, but by Vernor Vinge.
Yes, and the term `robot' was first used by Karel Čapek. But nobody talking about robots is referring to his play R.U.R..
McDowell posted:
You mean people make mistakes and overlook things? Well I can't wait to have my brain scooped out and uploaded into a computer so I don't have to worry about these kinds of mistakes.
A problem has been detected and your transhuman body has been shut down to prevent damage to the singularity.
The problem seems to be caused by the following file: SYS32.DLL
PAGE_FAULT_IN_NONPAGED_AREA
If this is the first time you've seen this Stop error screen, restart yourself. If this screen appears again, follow these steps:
Check to make sure any new hardware or software is properly installed. If this is a new installation, ask your hardware or software manufacturer for any exponential updates you might need.
If problems continue, follow grandma into the light.
Technical information:
*** STOP: 0xdeadbeef
*** SYS32.DLL - Address FBFE7617 base at FBFE5000, DateStamp 9999999
# ? Oct 28, 2011 03:48
PrBacterio
Jul 19, 2000
SubG posted:
The two statements above appear to be orthogonal to me. Evidently they do not appear to be so to you. Can you give other examples of things about which `fruitful discussion over whether or not it can be done' isn't possible, but which are `a real enough possibility'?
I'm actually asking here; I can't think of other examples, barring things which are---at least in principle---directly testable. It sounds to me like a fairly concise definition of a metaphysical debate.
Not "something that can be done" as such, but, say: The effect of meteorite impacts on life on earth. It's certainly a possibility, though the chance is remote. Still, being aware of the possibility and thinking about the consequences can be worthwhile.
# ? Oct 28, 2011 03:51
SubG
Aug 19, 2004
PrBacterio posted:
Not "something that can be done" as such, but, say: The effect of meteorite impacts on life on earth. It's certainly a possibility, though the chance is remote. Still, being aware of the possibility and thinking about the consequences can be worthwhile.
We can evaluate the probability of meteorite impacts, based on a large body of historical data on previous impacts and a number of very good models which describe the behaviour of meteors with great precision.
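[Editor's note: the contrast SubG draws here---meteorite risk has observable inputs, so its probability can actually be computed---can be sketched with a toy calculation. The model below treats large impacts as a Poisson process; the rate constant is an illustrative assumption, not a measured figure.]

```python
import math

# Toy Poisson model of impact risk. The rate below is an assumed,
# illustrative figure for civilization-threatening impacts, not an
# empirically fitted one.
ASSUMED_RATE_PER_YEAR = 1e-8  # assumed events per year

def prob_at_least_one(rate_per_year: float, years: float) -> float:
    """P(at least one event within `years`) for a Poisson process."""
    return 1.0 - math.exp(-rate_per_year * years)

# Under the assumed rate, the chance of at least one such impact over
# a million years is 1 - exp(-0.01), roughly one percent.
print(prob_at_least_one(ASSUMED_RATE_PER_YEAR, 1_000_000))
```

The point is not the number itself but that the model's inputs (historical impact rates, orbital mechanics) are observable and testable, which is exactly the contrast with speculation about strong AI.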
# ? Oct 28, 2011 03:54