agi-12@oxford

The Fifth Conference on Artificial General Intelligence
Oxford, UK, December 8-11, 2012
Website: http://agi-conference.org/2012/
There are printed proceedings, available for “full subscribers”. Ben promised that they will appear online someday (hopefully soon).
But most of the papers are online on: http://agi-conference.org/2012/schedule/

Short table of contents

Keynote talk by Margaret Boden: Creativity and AGI
Keynote talk by David Hanson: Open source genius machines who care: Growing Friendly AGI via GENI, Glue and Lovable Characters
Keynote talk by Angelo Cangelosi, From Sensorimotor Intelligence to Symbols: Developmental Robotics Experiments
Keynote talk by Nick Bostrom: The Superintelligence Control Problem
Session 1: Cognitive Architectures & Models A
Session 2: Cognitive Architectures & Models B
Session 3: Universal Intelligence and its Formal Approximations
Session 4. Conceptual and Contextual Issues
Session 5: Cognitive Architectures and Models C
Tutorial sessions
AGI-Impacts. Session 1
Special Session: AGI and Neuroscience

Keynote talk by Margaret Boden: Creativity and AGI

Three types of creativity:

  • Combinatorial creativity: unfamiliar combinations of familiar ideas. There is an element of surprise in this.
  • Exploratory creativity: fits into the previously accepted space of ideas (a cultural sphere?); it asks what the limits of the currently accepted style are.
  • Transformational creativity: starting from a previously accepted style, you negate / change at least one of its previously accepted dimensions – so you go beyond what is known conceptually.
    Examples: cubists in art; Kekulé and the benzene molecule.
    Transformational creativity can be achieved using evolutionary algorithms, but then it depends on what kind of mutations you allow to happen (see the toy sketch below).
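To make the point about mutation operators concrete, here is a toy sketch (mine, not Boden's): mutations that only tweak values inside a fixed conceptual space give exploratory search, while a rarer mutation that adds a new dimension changes the space itself, which is the flavour of transformational creativity. The fitness function stands in for "relevance" and is purely illustrative.

```python
import random

# Toy genome: a list of real-valued "dimensions" of a conceptual space.
def exploratory_mutation(genome):
    """Tweak a value inside the existing space (exploratory creativity)."""
    g = genome[:]
    i = random.randrange(len(g))
    g[i] += random.gauss(0, 0.1)
    return g

def transformational_mutation(genome):
    """Add a brand-new dimension: the space of ideas itself changes."""
    return genome + [random.gauss(0, 1.0)]

def fitness(genome):
    # Stand-in for "relevance": reward genomes whose values sum high.
    return sum(genome)

population = [[random.random() for _ in range(3)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for p in parents:
        op = transformational_mutation if random.random() < 0.05 else exploratory_mutation
        children.append(op(p))
    population = parents + children

best = max(population, key=fitness)
print(len(best), "dimensions, fitness", round(fitness(best), 2))
```

Whether the result counts as transformational depends entirely on whether the dimension-adding mutation is allowed, which is the point of the remark above.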

The most complicated part of combinatorial creativity is to decide what is relevant. Human notion of relevance is not easy to define.
Book on relevance: http://books.google.co.uk/books/about/Relevance.html?id=2sOKgpYuX4wC&redir_esc=y (relevance theory, actually). Boden says that it is the closest bet until now, but it is not close enough anyway.
Relevance can be defined by the fitness function.

Exploratory creativity is somewhat easier: if we can define a style of thinking and create a generative system based on such a definition, this might enable an exploratory kind of creativity in AGI.

Boden does not think computer creativity will happen in our lifetime, though all three styles of creativity have already been modeled to some extent.

Ontological creativity? – inventing new concepts

Of course the whole talk is based on human cultural values…

Questions: There is a thing called “ontological creativity”, which was not even mentioned in the talk. Answer: it is the same as transformational creativity;
Comment: machines should be members of our moral community. In this way they will have both responsibilities and rights;

Keynote talk by David Hanson: Open source genius machines who care: Growing Friendly AGI via GENI, Glue and Lovable Characters

Open source initiative to develop Friendly AGI.
Bina48 is one of the robots.

Open Character Cognition.
Some software:

  • Apache Felix;
  • Distributed Character Computing

Human creativity is a general case of physical creativity…

Existential Pattern Dynamics (EPD).

  • Patterns emerge, some persist and guide the emergence of further patterns. This is very close to what humans demonstrate.

“Emes”: the emergence of memes;
Create systems that allow for the “edge of chaos” effect.
Computational Compassion and ethical A.I. (:))). AI needs to have imagination to have wisdom and compassion, etc.

Keynote talk by Angelo Cangelosi, From Sensorimotor Intelligence to Symbols: Developmental Robotics Experiments

Language and cognition. Two levels:

  • Symbolic level
  • Grounded level

Words or symbols are directly grounded in actions, sensations, emotional states, etc.
iCub is a platform used for embodied acquisition of words, symbols and language understanding
(a European robot project – open source hardware and software); 53 degrees of freedom.
icub.org

See:
http://www.evernote.com/shard/s17/sh/25cd3576-2328-4f56-ad02-806c0d3cfac7/d9c7fda8c404c94c5d1ade699872f776

The “Merry-Go-Round” problem of defining concepts symbolically: strength is defined in terms of force, and force in terms of strength (or something in this sense).
Keywords:
– Embodied language learning;
– Cognitive Developmental Robotics;

“Body as a Cognitive Hub” hypothesis (Smith & Morse)

iCub robot and cognitive architecture. Main properties:
– uses Kohonen maps and Hebbian learning to link maps corresponding to different sensory channels (a toy sketch follows this list);
– movement attracts attention;
– ERA architecture: epigenetic robotics;
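A minimal numerical rendering of the "link maps with Hebbian learning" item above; this is my own toy (winner-take-all prototypes instead of a full Kohonen neighborhood, and made-up data), not the ERA implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N_UNITS, DIM = 10, 3                 # tiny maps, 3-D inputs per modality
som_a = rng.random((N_UNITS, DIM))   # e.g. "visual" map prototypes
som_b = rng.random((N_UNITS, DIM))   # e.g. "motor" map prototypes
hebb = np.zeros((N_UNITS, N_UNITS))  # cross-modal association weights

def winner(som, x):
    return int(np.argmin(np.linalg.norm(som - x, axis=1)))

def som_update(som, x, lr=0.2):
    w = winner(som, x)
    som[w] += lr * (x - som[w])      # move the winning prototype toward the input
    return w

for _ in range(1000):
    shared = rng.random(DIM)         # paired stimuli: the two modalities are correlated
    a = som_update(som_a, shared + 0.05 * rng.standard_normal(DIM))
    b = som_update(som_b, shared + 0.05 * rng.standard_normal(DIM))
    hebb[a, b] += 1.0                # Hebbian: co-active winners get linked

# Given a winner in map A, the strongest Hebbian link predicts the partner unit in map B.
probe = rng.random(DIM)
wa = winner(som_a, probe)
print("map-A winner:", wa, "-> predicted map-B unit:", int(np.argmax(hebb[wa])))
```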

Yamashita & Toni Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics
The ITALK project aims to develop artificial embodied agents able to acquire complex behavioural, cognitive, and linguistic skills through individual and social learning. (www.italkproject.org).

Keywords:
– Spiking neural networks;
– Helman(?) network;
– Spinacer architecture;

Keynote talk by Nick Bostrom: The Superintelligence Control Problem

He focuses more on how to control super-intelligence than on the question whether it will be dangerous.

Control methods:

    Capability control:

  • boxing methods;
  • incentive methods;
  • stunting(?): limiting the AI’s cognitive development;
  • tripwires: automatic monitoring systems that shut the system down if something happens (a toy sketch follows this list).
    Motivation control:

  • augmentation – you can enhance humans to give human motivation to the machines;
  • direct specification – directly specifying human values;
  • domesticity;
  • indirect specification of human values.
    Value loading techniques:

  • explicit representation;
  • evolutionary selection;
  • reinforcement learning;
  • value accretion;
  • motivational scaffolding;
  • value learning;
  • institution design;
  • tweaking emulation motivation;
  • combinations of the above.
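For the "tripwires" item in the capability-control list above, a minimal caricature in code: wrap the agent's step function in a monitor that halts execution when a watched quantity crosses a threshold. The agent, metric and threshold here are stand-ins of my own, not anything Bostrom proposed concretely.

```python
class TripwireHalt(Exception):
    """Raised when the monitored quantity crosses its threshold."""

def run_with_tripwire(agent_step, monitor, threshold, max_steps=1000):
    """Run agent_step repeatedly; halt as soon as monitor() exceeds threshold."""
    for step in range(max_steps):
        agent_step()
        value = monitor()
        if value > threshold:
            raise TripwireHalt(f"step {step}: monitored value {value} > {threshold}")

# Toy usage: "resource use" grows each step and eventually trips the wire.
state = {"resources": 0}

def toy_agent_step():
    state["resources"] += 3          # stand-in for whatever the agent does

def toy_monitor():
    return state["resources"]        # stand-in for an external safety metric

try:
    run_with_tripwire(toy_agent_step, toy_monitor, threshold=10)
except TripwireHalt as err:
    print("tripwire fired:", err)
```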

Humans are not secure systems.

Session 1: Cognitive Architectures & Models A

Joscha Bach. MicroPsi2: The Next Generation of the MicroPsi Framework

– Roadmap?
– Evaluation?
– Understanding creativity?
– The importance of integration;
– modal vs. a-modal representations;

– graph based architecture concepts
— representation based on hyper-graphs;
— spreading activation networks;

See also:
http://www.evernote.com/shard/s17/sh/134209bf-d368-4cf5-bfd1-5d612403d6bc/4d0adbe7bf4fa38f0d63a24eb884991f

Pei Wang: Motivation Management in AGI Systems

Goal derivation is a historical relation, not a logical one;
Functional autonomy
Proper motivation management is needed;

See also:
http://www.evernote.com/shard/s17/sh/c0e8bce2-71f9-4179-81a6-b9a76c97c455/3ff3d7531f0ce43be40e3f8c82cfa110

Helgi Helgason et al. On Attention Mechanisms for AGI Architectures: A Design Proposal

  1. Attention is resource management
  2. Create a general attention mechanism:
    • general
    • architecture independent
    • adaptive / learning;
    • complete (targets all external and internal information)
    • uniform: data from all modalities treated in the same way;
    • novelty / surprise;
  3. http://www.humanobs.org

See Also:
http://www.evernote.com/shard/s17/sh/ffc98a93-39ce-410e-8e34-4c5695b0a011/324519f2c5060a47c13ff6dc1d15fdb1

Alessandro Oltramari. Pursuing Artificial General Intelligence By Leveraging the Knowledge Capabilities Of ACT-R

    1. Scone Knowledge base http://www.cs.cmu.edu/~sef/scone/
    2. AGI for video surveillance
    3. HOMine ontology

– semantic analysis;
– ontology pattern recognition;
– ontology-based reasoning;
– NLP

Peter Lane. CHREST models of implicit learning and board game interpretation

      1. architecture: CHREST;
      2. Reber grammar;
      3. http://chrest.info

Panel discussion

      1. Symbolic vs. sub-symbolic: symbolic means local representation, sub-symbolic means distributed representation (Bach)
      2. symbolic and sub-symbolic knowledge should be represented in a single representation;
      3. Is there a common collection of concepts / criteria, etc. which should go into an architecture?
        The answer is that the complexity of the field requires experimenting and trying out different things, learning from each other, etc. Only by exploring different paths can the field find something important.

Session 2: Cognitive Architectures & Models B

Ben Goertzel. Perception processing for General Intelligence: Bridging the Symbolic / Subsymbolic Gap

Integrating symbolic and subsymbolic processing
– general approach of integration:
— use a subsymbolic system as a perception model;
— symbolic system should act on those patterns;
— Books: the hidden pattern;
— Based on hyperlinks;
— Probabilistic Logic Networks
— MOSES probabilistic Evolution programming
— Open Psi;
— DeSTIN – Compositional Spatiotemporal Deep Learning System;
— MNIST data set;
— Integrating AGI is not a Plug and Play operation;

Jade O’Neill and Ben Goertzel. Pattern Mining for General Intelligence: The FISHGRAM Algorithm for Frequent and Interesting Subhypergraph Mining

– Pattern mining algorithms for AGI
— labelled hypergraph – generalized hypergraph; interrelational algebra;
— sub-hypergraph mining;
— breadth-first, greedy mining of sub-hypergraphs;
— Fishgram

Ruiting Lian. Syntax-Semantic Mapping for General Intelligence: Language Comprehension as Hypergraph Homomorphism, Language Generation as Constraint Satisfaction

— AtomSpace representation
— Framenet
— Link parser (I guess this is link parsing of the graph); rules created via hand-coding, subgraph mining, inference;
— Constraint satisfaction;
— Word planning;
— PLN – a component of the OpenCog; generates syntactic parse generation;
— RelEx (semantic Relation Extractor)
– Mapping semantics to syntax via constraint satisfaction

Paul Rosenbloom. Reconstructing Reinforcement Learning in Sigma

— Approach artificial intelligence via graphical models
— Sigma:
— functional elegance;
— grand unified;
— sufficiently efficient;

— Extending Sigma to reinforcement learning;
— Graphical models:
— represent multivariate functions by decomposing them into products of sub-functions (Bayesian / Markov networks, random fields);
— Sigma is based on factor graphs and the summary-product algorithm;
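In standard factor-graph notation (this is the textbook formulation, not anything Sigma-specific), the decomposition and the summary-product (sum-product) messages look like this:

```latex
% decomposition of a multivariate function into local factors
p(x_1,\dots,x_n) \;\propto\; \prod_{a} f_a(x_{\partial a})

% sum-product messages passed on the factor graph
\mu_{x \to f}(x) = \prod_{g \in N(x)\setminus\{f\}} \mu_{g \to x}(x)
\qquad
\mu_{f \to x}(x) = \sum_{x_{\partial f} \setminus x} f(x_{\partial f}) \prod_{y \in N(f)\setminus\{x\}} \mu_{y \to f}(y)
```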

— Learning with Gradient Descent

– Continuous functions
— Affine Transforms

Panel Session

      • Q: How do you debug these systems? How can you debug interaction effects, which are more or less impossible to debug?
      • A: Tools to dig inside the brain of the system;
      • A (Ben Goertzel): Unit tests; a combination of research and software engineering; lots of unit tests; some parts of the system are written in Scheme and Python; it is much harder to identify bugs when modules start to interact;
      • A: Sigma is written in Lisp;
      • Q: Is it just about programming languages? If you choose C++, doesn’t it constrain what you can do?
      • A: Cognitive architectures define the language.
      • A: Ben does not believe in message passing algorithms;
      • Q: What is a difference between architecture and language (as a toolbox);
      • A: OpenCog was supposed to be a platform. But what happened is that it became a platform for themselves; He started from almost philosophical theory of the mind;
      • A: Sigma is a statistical relational language;
      • Q: If both Sigma and OpenCog are successful, what will be the difference between minds of these systems?
      • A: Ben: OpenCog can realize many different minds. People are not as broad as the OpenCog architecture. Humans are constrained by biological goals, while OpenCog is not.
      • A: If people are optimal adaptation to the environment, then both systems should end up similar if they grow in the same environment;
      • Q: How much are architectures based on experimental data vs. philosophical considerations?
      • A: ACT-R was very much based on cognitive science. Sigma focuses on functionality, but cognitive science is in the back of the mind. Philosophy has nothing to offer;
      • A: Ben – started from philosophy, but neuroscience and experimental psychology are more important now (though they are not there yet). You have to integrate a lot of stuff and put in a lot of intuition in order to get something. Mathematical theory of Artificial General Intelligence: AGI is very largely about computational efficiency. So the basic question is how to do it efficiently, because in theory, with infinite resources, it is possible.

Session 3: Universal Intelligence and its Formal Approximations

Joel Veness. On Ensemble Techniques for AIXI Approximations

– Weighting / Bayesian model averaging;
– A good model of the environment (AIXI or Solomonoff induction);

Switching / Tracking
– at time 1, Model 1 is OK, but at time 100, Model 2 is better. What is a method that could automatically switch?
– then you find a sequence of models which predicts well (not one model, but a sequence);
– you can actually compute this even though the space of models is exponential;

Convex mixtures
– convex combinations of model predictions. Much more efficient than weighting;
– minimize instantaneous losses at each point in time, e.g. you can use a gradient step (see the sketch below);
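A minimal sketch of the convex-mixture idea (my own toy with made-up experts, not Veness's code): keep a convex combination of expert predictions and update the weights with an exponentiated-gradient step on the instantaneous log loss.

```python
import math

def mixture_predict(weights, expert_probs):
    """Convex combination of the experts' predicted probabilities of outcome 1."""
    return sum(w * p for w, p in zip(weights, expert_probs))

def eg_update(weights, expert_probs, outcome, eta=0.5):
    """Exponentiated-gradient step on the instantaneous log loss; stays on the simplex."""
    mix = mixture_predict(weights, expert_probs)
    new = []
    for w, p in zip(weights, expert_probs):
        # gradient of -log P(outcome) with respect to this weight
        grad = (-p / mix) if outcome == 1 else (p / (1.0 - mix))
        new.append(w * math.exp(-eta * grad))
    z = sum(new)
    return [w / z for w in new]

# Two stand-in "experts": one always predicts 0.9, the other 0.2.
experts = [0.9, 0.2]
weights = [0.5, 0.5]
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:   # mostly 1s, so the first expert should gain weight
    weights = eg_update(weights, experts, outcome)
print([round(w, 3) for w in weights])
```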

Partition Tree Weighting http://jveness.info.

Peter Sunehag. Optimistic AIXI

General Reinforcement Learning – takes a history of actions and rewards and returns the next actions;
Asymptotic optimality;
– AIXI is rational
– AIXI is optimal on average;
– asymptotic optimality is not guaranteed

Optimism and optimality
The way they achieve asymptotic optimality is Optimism;

Optimistic Rationality;

– Classical rationality axioms lead to Bayesian agents;
– Bayesian agents are often insufficiently exploratory;
– Optimistic rationality allows the agent to explore richer environments;

Laurent Orseau. Agents with Modifiable Memories

Two definitions:
– Universal Intelligent Agent (AIXI);
— universal (considers all stochastic computable environments)
— optimal Bayesian agent / unbeatable on average
— needs infinite computational power;
— immortal

– Russell’s Bounded Optimality (Russell, 1995)
— resource bounded; time/space constraints; constrained by machine architecture;
– Not universal;
– Immortal (means that it cannot be destroyed by evolution)
– resources not dependent on the environment;

1) Read access to the source code of the agent (by the environment)
– In this case environment can discriminate among agents
– Simpleton gambit – presses the agent to destroy its own source code;
2) Read/Write access to the source code.
– the agent can be destroyed by the environment;
– agents are maximizing utility and also try to survive, which means optimize their source code;
3) Read/write access to memory

Consequences:
– choosing actions randomly is sometimes better than any infinite, deterministic computation.
– memory can be doubted (the neuroanalyzer problem);
– what is the probability of the memory state;

Neuroanalyzer problem;
– can you trust your memory;
– what about universal intelligent agents?

– New definition of Solomonoff’s universal prior;

4) Space -embedded agents
– no clear separation between memory and source code;
– which environments are livable?
– The agent can plan for environmental changes to the agent itself (sounds pretty real);

5) Space-time embedded agents;
– oracle merged with environment;
– the agent’s hardware belongs to the environment;
— the environment executes the agent’s code;
— the environment determines the meaning of the source code (the meaning can change);
— the environment determines computation time and all computational constraints;
— considers the indirect impact of computation; there is no way for the agent to put more complexity into the environment (consistent with the second law of thermodynamics);

Before running the environment, you know which part of the sequence belongs to the agent;
After running the environment, there is no explicit agent–environment separation;

Questions of the agent in this space-time embedded agents:
– what defines the identity of the agent for T>0?
– what happens when I die?
– am I living in a simulation?
– where do I come from?

Laurent Orseau. Space-time embedded intelligence

– The utility function is not part of the environment and not part of the agent, so the utility function is external;
– so, simple, unified framework; closer to reality;
– consequences:
— multi-agent environments are natural;

Alexey Potapov (AIDEUS): Differences between Kolmogorov Complexity and Solomonoff Probability: Consequences for AGI

Human brain does not follow Occam’s razor (precisely), but humans are biased to Occam’s razor.
Balance between exploration and exploitation. He proposes that inductive behaviour can solve it.

Alexey Potapov. Extending Universal Intelligence Models with a Formal Notion of Representation;

– any practical narrow method is an approximation of AGI in some sense.
– AGI is not practical (? he did not say this but I guess this is the idea)
– they want to bridge the gap between these two extremes;

Extreme 1: universal general intelligence (universal Turing machine)
– unbiased AGI cannot be practical and efficient;
– for any UTM and input output history, another UTM can also be found with the same conditional Kolmogorov complexity;
– Verbal notion of representation: a virtual machine as a representation;

Panel Session

Q: Moving the agent into the environment will complicate things. A: Yes, but this is how things are. So maybe we will need to do this.
Q: Why do we need this universality? A: Intelligence is also a universal machine, but not with respect to computation; rather, with respect to producing computational algorithms;

Session 4. Conceptual and Contextual Issues

Javier Insa Cabrera. On measuring social intelligence: experiments on competition and cooperation

– Universal intelligence tests
— anYnt (Anytime Universal Intelligence)
— Universal Intelligence (Legg and Hutter 2007).
— Turing Test enhanced with compression (Dowe and Hajek 1997)
— Intelligence tests based on Kolmogorov complexity (Hernandez-Orallo 1998)
— Anytime Intelligence Test (Hernandez-Orallo and Dowe 2010)

Goal 1: evaluate general intelligence of different systems.
Goal 2: modify the setting to include some social behaviour.

Evaluated three reinforcement learning algorithms and simple random algorithm;

Lempel-Ziv approximation:
– complexity of the environment does not affect results;
– complexity of other agents makes the environment more difficult.
– so social complexity is more important than complexity of the “pure” environment;
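For reference, a tiny LZ78-style parse is one common way to turn a "Lempel-Ziv approximation" of complexity into a number: the phrase count is a computable stand-in for sequence complexity. This is my own generic illustration, not the authors' exact measure.

```python
def lz78_complexity(seq):
    """Number of phrases in an LZ78 parse: a rough, computable proxy for complexity."""
    phrases, w, count = set(), "", 0
    for symbol in seq:
        w += symbol
        if w not in phrases:        # a new phrase ends here
            phrases.add(w)
            count += 1
            w = ""
    return count + (1 if w else 0)  # count a trailing partial phrase, if any

print(lz78_complexity("abababababab"))   # regular sequence: few phrases (6)
print(lz78_complexity("abacdbeafcghd"))  # more irregular: more phrases (10)
```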

Jonathan Connell. An Extensible Language Interface for Robot Manipulation

Notion of intelligence is based on two illusions:
– Animal part = mobility, perception and reactivity;
– Human part – being able to talk to your system = learning by being told;
The goal is to put these pieces together;

Analogy to Turing machine:
— at the core is simple state machine;
— but if you add the tape the behaviour becomes much more interesting;

Innate mechanisms:
— Segmentation (division of the world into spatial regions);
— Comparison
— Actions
— Time
Language interpretation glues all these things;

ELI: A Fetch-and-carry robot;
– uses speech, language, and vision to learn objects and actions, but not from the lowest level;
– save learning for terms not knowable a priori.
RoboEarth – repository of useful information;

Fintan Costello. Noisy Reasoners: Errors of Judgement in Humans and AIs

– Biases in people’s probability judgements;
– People usually use simple heuristics and not probabilities;
– according to this view, these heuristics cause the observed biases;

Conclusions: people actually use probabilities, but you have to add noise. Noise makes it possible to explain people’s behaviour with probabilities (a quick simulation follows).
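The simulation mentioned above, in a few lines (my own toy, not Costello's actual model): start from the correct probability, add symmetric noise, clip to [0, 1], and the mean reported estimate regresses toward 0.5, which looks like the classic over-/under-estimation bias.

```python
import random

def noisy_estimate(p, sd=0.25):
    """Report the true probability p corrupted by Gaussian noise, clipped to [0, 1]."""
    return min(1.0, max(0.0, p + random.gauss(0, sd)))

for p in (0.05, 0.25, 0.5, 0.75, 0.95):
    mean = sum(noisy_estimate(p) for _ in range(100_000)) / 100_000
    print(f"true p = {p:.2f}  mean reported = {mean:.3f}")
# Small probabilities come out over-estimated and large ones under-estimated,
# purely as a consequence of noise plus clipping.
```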

Bill Hibbard. Avoiding Unintended AI Behaviors

Universe is finite, so introducing infinite tape in the Turing machine introduces unnecessary complexity;
So Halting problem is decidable;
Environmental model finitely computed;
Model-based utility function to avoid self-delusion

Omohundro’s “basic AI drives”
Bostrom’s “instrumental goals”
He calls them “actions”, not “goals”.
The idea is that the AI acts according to its utility function and does not take actions which reduce utility according to this function.
Define the utility function as an average of human utility values.

Bill Hibbard. Decision Support for Safe AI Design

A system for visualising the running of agents to see whether they are safe. So I guess his proposal is first to run the AI in a simulated environment, and only if it behaves safely in the simulated world, put it into reality.

Ensemble of simulations
Vis5D visualization;
The greatest danger with nuclear weapons is the human element. This is true of AI and AGI as well.

Panel Session

ELI: Natural language is translated into RDF triples, checked against the database, and then the system reasons about whether the action is good for the patient.
Uses a Kinect camera (100 USD?).

Session 5: Cognitive Architectures and Models C

Serge Thill. On the functional contributions of emotion mechanisms to (artificial) cognition and intelligence.

– Homoeostatic regulation
– cognitive override
— organizing rather than disorganizing behaviour;
– Behaviour adaptation
— Learning and biasing are dependent.
-Interpersonal communication
— Emotion expression as communication
— Emotion expression as social exchange; so it is a social glue;
– Learning with homoeostatic systems;
— Machines are going to need homeostatically regulated needs;
— Expressing / recognition of emotions is needed for the AI.

Conclusion:
You cannot have human-level intelligence without some emotional component;
Interpersonal functions are needed;
Sub-symbolic cognitive affective architecture / intelligence;

Leslie Smith. Perceptual time, perceptual reality and general intelligence

Hard question – nature of the neural construction of reality;
– Perceptual reality is different from physical reality;
Understanding these differences may help to understand AGI and intelligence in general
Dunne 1925: attention is never really confined to a mathematical instant; it covers a slightly larger field.

Perceptual events are not physical instants and they overlap. So they are not totally ordered.
Leaky integrate-and-fire neurons
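For reference, a minimal leaky integrate-and-fire neuron in its standard textbook form (the parameters here are arbitrary, not taken from the talk):

```python
import numpy as np

def lif(drive, dt=1e-3, tau=0.02, v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Simulate a leaky integrate-and-fire neuron; returns spike times in seconds."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(drive):
        v += dt * (-(v - v_rest) + i_in) / tau   # leaky integration of the input drive
        if v >= v_thresh:                        # threshold crossing -> spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant drive strong enough to cross threshold produces a regular spike train.
print(len(lif(np.full(1000, 0.020))), "spikes in 1 s of simulated time")
```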

Time always mattered in AI/AGI;
– Except in toy problems like vector classification;
– But sometimes it is presented as a simple ordering of events;

– Percepts arise from sensory surface very rapidly;
– preprocessing takes part on the sensory surface;
– signals to the cortex are already preprocessed;
– percepts are cortical;

Cortical perceptual instant construction
– Cortical columns oscillate;

You need to group the sensory inputs into some sort of percepts. How do you group signals?
– One option is contiguity (in any sense);
– This includes some preprocessing;

Temporal contiguity: integration;
– Grouping sensory signals by segmenting sensory surface
– Do this in asynchronous way;
– in a spike-based way, also using beta and gamma oscillations;
– In AGI probably this would be different, but AGI needs that anyway;

See Also:
http://www.evernote.com/shard/s17/sh/9bbf0b06-a70a-4bc4-ab6b-51e664a911c7/45e9b745aad063d449727e570b8b660b

Abdel-Fattah. Creativity, Cognitive mechanism and Logic

What is the cognitive capability that makes human cognition unique in comparison to animal cognition and artificial systems?
– A: creativity may be the answer;
– Creativity can be found in analogy making and something else;
– There are at least two important mechanisms:
— analogy making – first stage;
— concept blending – second stage after analogy making;

One of the newest models for computational creativity is the FACE model:
– concept ;
– expression of the concept;
– aesthetic evaluation;
– framing information: contextual embedding;

Conclusions:
– creativity can be reduced to analogy making and concept blending;
– reliable models: structure mapping, etc.

See Also:
http://www.evernote.com/shard/s17/sh/5e3000a5-a579-4670-bae8-a5f8531efee2/3717b9e4965505965ef2c9d8b543b4f0

Knud Thomsen. Stupidity and the Ouroboros model

– an agent is stupid if it unwittingly acts against its own interests;
– stupidity is a label put on by others;
– stupid people look at minor details while not seeing the more important ones;
– Intelligence is a label that humans grant to other rational agents. The definition is the opposite of stupidity.

Ouroboros model
– anticipation;
– action / perception;
– evaluation;
– anticipation;

Basic features are stored in Schemata;
Consumption analysis highlights slots of the activated schemata and directs attention to the most urgent issues;
– pattern matching and constraint satisfaction;
– it can be understood as an extension of production systems;

Clever is whoever applies an understanding as wide as possible, chooses tools, and accepts help from friends.

Summary
– there is no absolute stupidity or intelligence;
– both are labels and depend on the context;
– this is why we have dozens of definitions;

See Also:
http://www.evernote.com/shard/s17/sh/b18bdbbf-f512-4c84-a0e3-9792b2abad27/71f29b8724bbb23b6646ff1c269fa2c8

Claes Strannegard. Transparent neural networks: Integrating Concepts

– Can we build a general and monolithic neural network model that can do both symbolic and sub-symbolic processing?
– Transparent neural graph is a labeled graph;
– There are labels on nodes and labels on connections;
– connections are also labelled with probabilities;

Two types of activity
— real activity ;
— imaginary activity;
— messages propagate;

Different types of memory:
– intensity memory
– Delay memory
– Duration memory;

An organism is a sequence of TNNs;
– it can develop using development rules;
– formation / update;
– Hebb rule;
– Ebbinghaus rule (use it or lose it); a toy sketch of these two rules follows this list;
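The toy sketch of the two rules just mentioned (my own reading, not Strannegård's formulation): Hebbian strengthening on co-activation plus Ebbinghaus-style exponential decay with disuse.

```python
import math

def update_weight(w, co_active, dt=1.0, hebb_rate=0.2, decay_tau=50.0):
    """Hebb: strengthen on co-activation. Ebbinghaus: exponential forgetting with disuse."""
    if co_active:
        w += hebb_rate * (1.0 - w)            # saturating Hebbian increment
    return w * math.exp(-dt / decay_tau)      # decay ("use it or lose it")

w = 0.0
for t in range(200):
    w = update_weight(w, co_active=(t < 50))  # co-activation only for the first 50 steps
    if t % 50 == 49:
        print(f"t={t + 1:3d}  w={w:.3f}")     # rises toward ~0.9, then decays away
```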

Implementations in Haskell and C#

W.Skaba. Binary Space Partitioning as Intrinsic Reward.

The AGINAO project.

Cognitive agent: robot-embedded control program, self-programming, dynamic and open-ended architecture, real-time operation in a natural environment.
Tabula rasa – epigenetic architecture, nothing is known a priori;
Basic building blocks are small pieces of code (small machines). Each building block has many inputs and one output, which are just strings of integers;

Predefined building blocks are atomic sensors and atomic actuators.
From these atomic blocks, a program is generated.
The model constructs its own dataflow.
The question is how to evaluate basic building blocks.

Intrinsic motivation and intrinsic reward
– intrinsic motivation – agent is doing that just for fun;
– intrinsic reward;
– there are different methods of intrinsic motivation (Schmidhuber?);

Problem: binary space partitioning;
visual pixel – vector;
agents are pattern detectors (in the challenge propagation model);

Software implementation:
– there is always a conditional jump and discrimination between negative and positive examples;
– exploration = adding a new action;
– exploitation = execution of an existing action;

Panel Session

Q: Creativity among animals – they are creative. To what extent would the notion of creativity apply in other domains?
A: AGI should be creative. On the other hand, we have smart things which are not creative;
In general animals are not creative, but there are certain special cases of creativity;
Everything depends on the notion of creativity. Can you yourself decide what is creative and what is not? At least you have to be able to explain why whatever you created is creative.
Q: about two last presentations:
A: What was presented was a simplification. Actually, there are three models; imaginary prediction has two parts: prediction into the future and prediction into the past (!!!).
A: Another approach is to avoid loops. The cure: loops are ok, but they should be separated by time.
Q: Does anybody implement chemical emotions? A: People do a lot of research; Peter Something in London builds models of serotonin / dopamine. So yes, there are people doing that.
Q: A marker of significance. We are not only interested in the colour and taste of the apple, but also significance of the apple.

Paul Hemeren. A Framework for Representing Action Meaning in Artificial Systems via Force Dimensions

Modelling actions in verbs;
Predicting the sensory-motor consequences of our actions and the actions of others!

What is the connection between perception and action?
Actions can be defined by physical constraints and also by external physical constraints (mass, gravity, etc.)
Also actions can be described by mental states;
When we understand actions we want to understand what intentions they have. We watch people and derive intentions from their behaviour.

Kinematic patterns. You can define them with just several points and from the movement of these points we can almost infer what emotions are related to these patterns.

Force can be expressed as a vector (derived somewhere from kinematics).
Kinematic variables: acceleration, change in direction, etc.

Two-vector model of events

Abram Demski. Logical Prior Probabilities.

They generate logical theories by pulling sentences at random.
The idea, I guess, is that they check whether some statement contradicts prior statements or not.
Similar to inductive logic programming.
Add enough facts until you can answer the query; after that, all facts that you add have to be consistent with the previous ones. So once the theory validates the query, it cannot become invalid.
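A toy propositional analogue of the "pull sentences at random, keep the consistent ones" idea (my own sketch; Demski's construction works over much richer logical sentences): sample random literals, skip any that contradict what is already in the theory, and estimate a sentence's prior probability as the fraction of sampled theories that contain it.

```python
import random

VARS = ["p", "q", "r"]

def sample_theory(n_draws=20):
    """Randomly draw literals, keeping only those consistent with the theory so far."""
    theory = set()
    for _ in range(n_draws):
        var, sign = random.choice(VARS), random.choice([True, False])
        literal, negation = (var, sign), (var, not sign)
        if negation not in theory:    # the whole consistency check, in this propositional toy
            theory.add(literal)
    return theory

def prior(query, n_samples=10_000):
    """Estimated prior probability that `query` ends up in a sampled theory."""
    return sum(query in sample_theory() for _ in range(n_samples)) / n_samples

print(prior(("p", True)))   # ~0.5 for a bare literal in this symmetric toy
```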

Bounded approximation process, related to bounded rationality.

Keith McGreggor. Fractal Analogies for General Intelligence.

Fractals:
The world seems to exhibit repeated, similar patterns – fractals. Similarity is occurring at different scales.
What is the fractal formula for the real world images?
Collage theorem;
– Fractal representations – series of codes.
– Memory: a prior percept, fractally reminding;
Similarity
Odd One out – a novelty problem
Interplay between observer and observed.
similarity and analogy making are the core of intelligence;
fractal representations allow analogy making.
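To make "fractal representations are series of codes" concrete, here is the textbook chaos-game construction of the Sierpinski triangle (a standard iterated-function-system example, not McGreggor's analogy algorithm): the whole fractal is described by a code of just three affine contraction maps.

```python
import random

# The "code": three maps, each contracting the plane halfway toward one corner.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n_points=50_000):
    """Generate points of the Sierpinski triangle from its 3-map IFS code."""
    x, y = random.random(), random.random()
    points = []
    for _ in range(n_points):
        cx, cy = random.choice(CORNERS)
        x, y = (x + cx) / 2, (y + cy) / 2   # apply one randomly chosen contraction
        points.append((x, y))
    return points

pts = chaos_game()
print(len(pts), "points generated from a code of only 3 affine maps; last point:", pts[-1])
```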

Tutorial sessions

Aaron Sloman. Meta-morphogenesis: How a planet can produce Minds, Mathematics and Music.

See:
http://www.evernote.com/shard/s17/sh/931e25c6-9cb7-44d6-b45a-78450af4fe7f/f317aedbb423c99e9e5dd115c71b4b8c

Peter Lane and Fernand Gobet. Hands-on AGI-12 tutorial on the CHREST cognitive architecture

Tutorial consisted of several presentations and demonstration of the software. Software and all materials of the tutorial are available at http://chrest.info/tutorial.zip.
Also, this is the introduction to the tutorial containing all other links https://complexity.vub.ac.be/gbii/sites/default/files/introduction.pdf

AGI-Impacts. Session 1

Bruce Schneier. Enabling the trust that makes Society function.

Why does security exist?
– How security enables trust?
– We all trust tens of thousands of times every day;
– People are very trusting species;

How do we enable trust as society?
– How does self-interest induce group interest?
– Group interest vs. self-interest;

Flavour of trust:
– intimate trust (trusting your spouse);
– social trust (trusting a taxi driver);
– institutional trust (institutions);

All systems require cooperation between them (for social, biological and socio-technical systems);
In any cooperative system there can be parasites. Every element can pursue a non-cooperative strategy.
Prisoner’s dilemma.
Too many parasites ruin the system.
– Security is how we induce cooperation;
– Cooperation induces trust;

Societal pressures – mechanisms which society uses to make individuals conform to social norms:
1) moral (“stealing is wrong” thing);
— innate moral capacity (?);
2) reputation: this has to do with how others respond to our actions; we get praised for good behaviour and slapped for bad behaviour; social consequences; a very big deal for humans;
— we are the only species that can transfer reputation information. Other species can recognize individuals but they cannot transfer reputation information.
— religion: just the belief that someone may be watching you makes people more moral;
this is a primitive societal-pressure tool-kit. The problem is that it does not scale very well. Dunbar’s number: over 150, a lot of these mechanisms start failing and groups cannot keep security;
3) institutional societal pressure. We codify our rules about theft and then delegate enforcement to the police. It’s much cheaper to penalize defectors than to reward cooperators (too many cooperators).

4) security systems – any artificial mechanisms designed to enforce cooperation or prevent defection.
In the real world all four societal pressure systems work together, they never work separately.
Which one is more important depends on context. Society will use these pressures to find the optimal level. Too many defectors is too damaging and too few defectors is too expensive. So society balances the pressures to come up with the right trade-off.

E-bay security system is reputation based.

Every moment an individual has to make a decision: should I cheat or should I not? :))))

There are multiple competing interests, different aspects of a person, etc. So not so easy when you go into real situations from the simple prisoner’s dilemma model.

Very often laws go against the rules of society.
Different aspects (pressure types) have different scaling possibilities.
Morals are related to groups (our language, our country, our planet). There are also universal morals. He says that we are the only species which has this.

How technology is changing things and how can we get ahead of it?
– Technology is about scaling; more people, increased complexity, intensity, frequency, distance, artificial persons.
– Technology upsets the balance between cooperators and defectors;
– In response society has to rebalance itself (copyright);
– Social norms also change, and the notion of copyright has changed;

A sort of iterative process with feedback loops, where stability is the goal; but he is not sure that this is the case, because attackers have advantages. First, attackers are more agile.
Examples with internet cameras and internet crime: it took years for the police to understand how it works.
Syria: the government is using internet to fight the protesters and shutting it down when it seems that it will favour the opposition.
Those who have power will get more power through technology. The question is: how slow is too slow?
The gap tends to be larger during fast social change and fast technological change (these are related).
We have seen this during the Enlightenment.
Agile security, reactive security, or something like this: you cannot get ahead of the bad guys, but you can react fast.
Reactive security means sacrificing some individuals for group interests.

No matter how much societal pressure you deploy, there will always be defectors. Law of diminishing returns.
Security is a tax we pay not to get a benefit, but to prevent a problem.
Society needs defectors. Groups benefit from the fact that some people do not follow social norms, because this brings change to the system. So a system that allows for defection is very valuable for society.

QA: There is a difference between perceived security and actual security.
Q: In case of AI security you cannot have a single defector. This is an edge case. No conventional method can work with the infinite risks and 0 probability that it will happen.
Disruption is a noise. Usually noise does not kill the system.
He believes that humans have the capacity to get more moral. The speed of light. The pace of change is outpacing our ability to integrate it.
Defection comes from autonomous self-interested units, not just from autonomous units.
Sociopaths do not benefit from cooperation, so they are natural defectors.
We do not know the direction where the next innovation will come. So you cannot predict it.
– Surveillance;
– Censorship;
– Propaganda;
How do you enable the good part or that and prevent the bad side now.
The book: Evgeny Morozov. The Net Delusion: The Dark Side of Internet Freedom

If artificial agents are very different from people, then people will not care about them and they will not care about humans. Can we do anything about that with hard security? All security systems have a safe, then the locking mechanism and the key. So every security system will have exceptions. The more judgement, the more useful the system is.
Security vs. usability problem.

See Also:
http://www.evernote.com/shard/s17/sh/e56ca532-d81d-4610-b761-f45f63d699ef/81690d68f313dc7583a8ed96bcd87279

Roman Yampolskiy. Reward Function Integrity in Artificially Intelligent Systems.

Intro into Wireheading;
– A technique of implanting electrodes into pleasure centres.
– Q: will machines be subject to this type of behaviour?
– Humans are surely subject to that.
– Machines: Eurisko

– direct stimulation: a machine can push a reward button directly;
– a machine may try to optimize its reward, taking more and more computational resources for that;

Ontological crises;
Infinite loops of reward collecting;
Changing human desires;

Beyond reward function
– they could also modify their own sensors, memory, programs, whatever;
– humans are subject to sensory illusions; the Delusion Box argument;

Potential Solutions
– inaccessible reward function: separate the source code of the reward function so that the system is not able to modify it;
– resetting the reward function (to a default setting);
– Revulsion;
– Utility indifference; put AI in the state of indifference to an event;
– External control. This works on humans (drug control, etc.). Mindplex machines – multiple connected minds.

Evolutionary competition between machines;
Learning proper reward functions, but the risk is that they learn something that we do not want;
Utility function bound to the actual world;

Rational and self-aware optimizers will choose not to wirehead; for some reason he thinks that this is the main danger (??)

Authors:
– Dewey.
– Hibbard.
– Omohundro.
– Schmidhuber – self-rewriting Gödel machines.
– Tyler;
– Yudkowsky: rational agents will only self-modify in ways that preserve their utility function;

Argument: rationality in the real world is not perfect (perfect rationality is impossible in the real world).
Small errors will add up and make a big error happen (?)
Gandhi and the pill;

Temporal influence on reward (depends on time horizon)
General goal fulfillment
Common common sense – a utility function that will satisfy everyone.
– The question is whether this is possible (probably not);
– How will the system interpret human orders (literally, or with some sort of interpretation)?

Conclusions:
– even smart systems may start to corrupt their reward channels;
– link between mental problems in humans and this kind of behaviour in machines;
– security: someone can take over the system; what if they reprogram it and put it back?

András Kornai. Bounding the Impact of AGI

Two kinds of AGIs: animated vs. automated AGI;
– Automated AGI will just follow the utility function;
– Animated AGI is an agent; it will have its own goals;

Bostrom’s orthogonality thesis;
Gorenstein-style plan to eliminate the existential threat:
– Verify Gewirth’s argument
– Provide some supporting theorems

Hardware cannot do it;
– tolerable rate of existential threat (very small, as I understand);
– very high precision is needed; http://kornai.com;
– must be done with software;

Reliability of physics combined with math.
There are some things which you simply cannot do in mathematics (you cannot escape some theorems).
The idea is that you can build the same kind of boundaries into AI;

From PPAs to PCG;
Sketch of the argument;
– I intend to do X voluntarily for some purpose E
– E is good (by my definition)
– My freedom and well-being are generically necessary conditions;
– my freedom and well-being are necessarily good;
– I have a claim right to my F&WB
– Other PPAs have a claim for their freedom;
– So all other purposes are valid.

He wants to apply logic to this argument, not just philosophical argumentation;

QA: How do you go from step 4 to step 5? Answer: this cannot be explained in 5 minutes, because this argument takes 60 pages in the original proof.
The book: Derick Something. The Dialectical Necessity of Morality

Ted Goertzel. Minimizing Risks in Developing Artificial General Intelligence

Living on the Brink of the Singularity
– AGI is likely to emerge gradually and unevenly over a period of years;
– Funding will go primarily to projects that offer…

Pacemakers
– the problem is not the reliability of the technology; the problem is to integrate it with other functions of the body. What is going to happen is higher integration.
– the merging-with-machines argument

How to cut off feedback loops:
– risk control departments;
– separate software will be developed for risk control and breaking feedback loops;
– risk control slows things down;

Surveillance
– big brother;
– protecting legal rights and privacy;
– most of the surveillance is being done via mobile phones;

Sousveillance
– social transparency is inevitable;
– cyberlogging with camera phones; police watch groups, etc.
– privacy idea is declining and people do not expect it;
– wearable cameras;

Participatory Sensing: teaching groups how to use data;
– servers at home which aggregate data;

Self-driving automobiles were developed without overarching theory and now they are better than humans.
Book: Eden Medina. Cybernetic revolutionaries.
Synco science fiction novel published in Chile 2009.
Heinz Dietrich El socialismo;
Irrational response of humans to technology (based on fear?).
The iPad has an app which does almost the same as that crazy Chilean cybernetic system for economy monitoring and decision making;
AGI will evolve by incorporating different technologies step by step (is this his idea?).
Another approach would be to do what FHI is doing – doing philosophical arguments.
He’s sceptical about rules and proofs which can be implanted into AGIs to prevent them from doing wrong. This will develop incrementally.

BG: development of AI will be similar to development in political economy (from experience).

Ben Goertzel. GOLEM.

What kind of architecture could you build if you had insanely large computational resources?
OpenCog is a system that can use the computational resources available now.
Taking specific architectural steps to get common-sense morals / ethics. He talks about “raising” a young OpenCog.

How do you make a system that reprograms itself in a non-trivial way and also maintains its original goals? This does not guarantee a safe system in the real world, because we do not know the real world / universe.

The goal is a system that is
– much more generally intelligent than people;
– reasonably beneficial;
– unlikely to be horribly harmful;

Assumptions
– the capability of radical self-improvement is the most plausible way to do this;

“Steadfast” system
– it is not able to give up its initial goals;
– if it does that it stops functioning;

How do you create a steadfast AGI that is superhumanly intelligent and self-modifying?
To what extent and by what methods must we specify what “beneficial” means in order to do the above?
You cannot do that in predicate logic.
Maybe you can specify the goal with examples and natural language…

GOLEM.
Low-level control code tests all rewritings of the operating program of the system and ensures that each change is beneficial with respect to the original goals (a toy sketch follows the component list below).

Goal evaluator.
Historical repository;
Operating program
Searcher
Memory manager;
Tester – uses historical backtesting;
Every part of the system can be optimized except goal optimization;
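The toy sketch promised above: candidate rewrites of the operating program are accepted only if historical backtesting by the fixed goal evaluator scores them at least as well as the current program. Everything here, including the one-parameter "program" and the evaluator, is a stand-in of my own, not the GOLEM paper's formal construction.

```python
import random

HISTORY = [random.random() for _ in range(100)]    # stand-in historical test cases

def goal_evaluator(program, case):
    """Fixed, never-rewritten scorer of how well `program` serves the goals."""
    return -abs(program(case) - 2 * case)          # toy goal: output should equal 2 * input

def backtest(program):
    return sum(goal_evaluator(program, c) for c in HISTORY) / len(HISTORY)

def searcher(program):
    """Propose a rewrite: here, just a random perturbation of the program's one parameter."""
    k = program.k + random.gauss(0, 0.1)
    candidate = lambda x, k=k: k * x
    candidate.k = k
    return candidate

current = lambda x: 1.0 * x
current.k = 1.0
for _ in range(500):
    candidate = searcher(current)
    # Accept a rewrite only if it does at least as well on the historical backtest.
    if backtest(candidate) >= backtest(current):
        current = candidate

print("operating program parameter drifted to", round(current.k, 3))   # approaches 2.0
```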

Conservative:
– no changes to the hardware or low level control code;
– if changes are made, then stop functioning;

GOLEM paper is online.
Why should GOLEM be steadfast?
Preserving the architecture is among the goals of the system; in that case the machine should not change the goal.
External threat still exists (aliens from Pluto).

How to make GOLEM Smarter and Less Conservative
– acquire massive computing power;
– tweak a reasonable, inference based AGI architecture.

How do you define beneficialness of GOLEM?
– it is not possible to put that in a formal way;
– encounter all examples, but this is difficult; he expects an answer from the philosophical community;
– e.g. formulating the procedure;
Options:
— coherent extrapolated volition?
— democratically?
— coherent blended volition?

It is questionable whether it is possible to build such system;

Conclusion: he is an optimist, but this is only intuitively grounded, not formally or anything close to that.
People are doing the same thing that GOLEM would do, but they are doing it very badly. So the idea (this is mine) is that you have to create superintelligence prior to intelligence.

See Also:
http://www.evernote.com/shard/s17/sh/0f47d6cc-788b-4d02-834d-170398381865/d2a16a37bb48ceb43c20d28557d71048

Alexey Potapov. Universal Empathy and Ethical Bias

Approaches to safe AGI:
– societal actions;
– external constraints;
– internal constraints;

Paper: Sotala, Yampolsky, Muehlhauser. “Responses to Catastrophic AGI Risk: A Survey”

Complex value systems:
– the problem is that it is impossible to define value systems a priori.

Paper: “Complex value systems in Friendly AI” Yudkowsky 2011

Classical opinion:
– the reward function must necessarily be fixed;
– Without rewards there could be no values, and the only purpose of estimating values is to achieve more reward.
– Is it true? No.
His approach is to have some sort of values based on which the reward function may be changed, if I understood this correctly. This does not seem to solve the problem, actually.

Paper: Potapov, Radionov. “Extending Universal Intelligence Models with Formal Notion of Representation”

Multi-agent environment:
– He wants the agent to extract values automatically from the environment (humans and other agents, I suppose);

Special Session: AGI and Neuroscience

Yamakawa. Hippocampal formation something.

Fujitsu Ltd. The goal to construct new computational technologies to enable AGI and inspired by neuroscience.
Singularity Impact Factor

Autonomous frame generation is key for AGI. Intelligence is based on inferences. Frames are source of any inferences.
A frame is composed of a set of variables and each variable has values.
For narrow AI, frames can be constructed by humans, but AGI should be able to generate them itself.

Variable Assimilation as frame generation
– variables from both frames are matched and something happens.
– brains intrinsically contain sequential data frames
– the human brain is thought to generate new frames autonomously to realize high-level cognition
– many animals also have this kind of ability;
– why not focus on the brain region to get hints for FG (frame generation)?

Neocortex stores frames and activates them but it cannot process global VA all by itself;
HCF supports FG as relational indices
– Hippocampal formation for FG;
– HCF associate combinations of stimuli rather than individual signals with the meaning;

Distribution Equivalent Groups (DEG)
– Theta phase precession
– Configural association theory
DEGs work as relations on frames.
They are partial sequential event patterns within a multidimensional subspace, in consideration of variable exchange symmetry.
– We can construct biologically plausible neural cognitive models which need sub-symbolic relations
– Variables are matched up using structure of relations

Future work
– introduce FG with using DEG to deep learning, Bayesian networks
– SEGs (Sequence Equivalent Groups) instead of DEGs

QA: DEGs are representation relations. Compression is another matter. A DEG is a fast index.

E. Ozkural. What is it like to be a brain simulation?

Gok US sibernetik Ar&Ge Ltd

Paper: What is it like to be a bat?
Modern technologies may allow this and maybe enable humans to feel sonar perception (e.g. bat circuitry could be downloaded to a human mind).

Philosophy of mind – understand first person experience;
What sort of experiences brain simulations have?

Brain prosthesis thought experiment
– gradually replace each neuron with a silicon device;
– Minsky – no real change;
– Searle: experience will vanish, it isn’t the right stuff;

His idea is that both these suggestions are extreme claims, and they are really speculations which cannot be known by either party.

The Debate
– dualist / vitalist objections to subjective experience in computers;
– he assumes strong physicalism
— every event/state/property/process is strictly physical;
– Cybernetic visual implants, BMI, transcranial magnetic stimulation, artificial retina, fMRI studies;

The question
– Does a brain sim have experience?
– How similar is it to human experience
– The answer isn’t empirically determined yet
– consider scientifically plausible explanations
Izhikevich and Edelman 2008.

Pan-experientialism
– claim: experience is a basic capability of every physical resource;
– Likely to be part of causal picture of thought;
– Only physical resources organized in the right way are intelligent / conscious;
– Can a glob of plasma be experiencing? It can have some sort of proto-experience / pan-experience;

Evil Alien thought experiment
– Evil alien shuffles all your neural connections randomly
– The argument: the physical resources will still be functional, so there will be some experiences, but there will be no consciousness;
– The concept of consciousness is quite independent from experience (or vice versa);

Neural code hypothesis
– Experience is determined by neural code (neural spikes)
– Codes evolve differently in different individuals;
– therefore experience should vary very much across the brains / individuals;
– so what will happen if we change the spike trains;
— chemical transmission, EM fields, computation / data. sort of supports the hypothesis that experience would change;

Paper: Schneidman et al 2001: Different flies’ visual systems respond differently to random stimuli.

Scientific criteria for experience hypothesis
– how can you reproduce a particular experience in another machine;

Brain sim experience:
– A: meat brain; B: simulated brain on a computer;
– what is different?
– only computational equivalence (?);

QA: subjectivity is a wrong tool to understand cognition; cognition, logic tools, etc. are the correct tools;

Diana Decra. The Connectome, WBE and AGI

Paper: Lichtman and Denk. The big and the small: challenges of imaging the brain circuits
Problems in neuroscience:
– complexity (65 billion neurons, more than 7000 synaptic connections per neuron, etc.)
– imaging electrical and chemical activity; non-linear summation;
– neurons extend over vast volumes; mapping neurons can be very difficult;
– the detailed structure cannot be resolved by traditional light microscopy – this is not a problem any more;
– need for dense or saturated reconstruction; we need a running movie, not just a 3D picture, because a picture will not show the function;
— many projects running in Connectome communities;
The goal (of neuroscience) is to connect structure and function.

Implications for AGI;
– gathering connectome data;
– they want to model the connectome;
– with the full connectome it would be easy to implement this in silicon (at the atomic scale);

For radical improvement, you need not only to reconstruct the connectome but also to understand the principles;

Cognitive neuroscience;

Randal Koene. Toward Tractable AGI.
Neurolink startup.

Brainlike AGI is trying to use nature’s knowledge about the intelligence;

– Representations and Models
— models = representation;
— you can break nature into pieces which are not entirely independent;
— the questions is how these pieces communicate;

— systematic modeling; keep it simple;
— interesting effects: focus, constrain scope;
— signals of interest
pieces communicate through signals;
physics 4 major interactions;
neurons: identifying important interactions;

Discovering the transfer function:
– Volterra series expansion
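For reference, the standard form of the Volterra series expansion of a time-invariant transfer function (textbook notation, not specific to the talk):

```latex
y(t) = h_0
     + \int h_1(\tau_1)\, x(t-\tau_1)\, d\tau_1
     + \iint h_2(\tau_1,\tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2
     + \cdots
```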

– Mental processes and neural circuitry
— effects = experiences;
— system identification in Neural Circuitry (CNS2012 workshop on SI)

Simplification of an intractable system into a collection of system identification problems:
– SI of observable + internal = intractable if the black box is the brain;
– many communicating black boxes with accessible I/O;

WBE: a roadmap to data acquisition & representation
– structural connectomics;
– functional connectomics (characterizing components)
– emulation / simulation platform
– resolution and scope validation (testing hypotheses and assumptions + improve them).
Global Future 2054 Congress in NY, june 2013.
Paper Auriel Lazar.

Tools for structural decomposition;
– open the system and look to morphology (stacks of EM images);
– data from structure
— si for compartments;
— 3d shape;
— invisible parameters?

Recording dynamic properties of the system. For this you have to pick reference points. The resolution of the reference points is important.
– problem specific criteria, not method specific;
– molecular ticker tape by DNA amplification;

Challenges:
– care about the signals; are we looking at the right signals. electromagnetic fields do matter in the brain;
– what is sufficient data? when do we know that we have enough?
Virtual systems: NETMORPH; netmorph.org
Nemaload
C. Elegans (Dalrymple)
Retina (Briggman)
Berger
Memory from a piece of neural tissue (Seung)

Discussion
– a good gauge of problems – proof of concept;
– SI is not new = many fields can contribute;

WBE to substrate independent minds;
SIM is a notion that the brain is the machine.
carboncopies.org
2045.com

fields:
– computational neuroscience;
– cognitive neuroscience;

Roadmap to AGI

Ben Goertzel
– AGI roadmap workshop 2009;
– The problem is that people cannot agree on the roadmap, but rather are trying to do everything on their own.
There is some agreement about the final goal;
But there is no agreement about how / where / with what to start (language, robots, whatever);
Another problem is that you can construct some sort of AGI test for human-level intelligence, but how do you construct a test which measures intermediate progress (25% of human-level AGI)?
Conclusion: we are not going to come up with any consensus.

David & Ben are working on a low-cost robot for the AI community to play with.

Paper: Mapping the landscape of AI;

Joscha Bach
– convergence is a function of funding 🙂
– to get funding you need benchmarks
– developmental perspective – we do not need adults;

David Hanson
– AGI community is an evolutionary ecology;
– so the thing which is needed is infrastructure for this ecology, and then you will (may) have a Cambrian explosion [of AGI research results].
– we have to aim not just at human-level intelligence, but at the intelligence of the best of us (geniuses).

Benchmark: Robot taxi driver (on top of self-driving car);

Hanson Robotics has an open-source platform or something.

See also:
http://www.evernote.com/shard/s17/sh/a7597df3-979a-4711-b32f-386dbd1ee74b/0656caaf222422224eb4b32d66e65fde

M. Brundage. Limitations and Risks of Machine Ethics.

See:
http://www.evernote.com/shard/s17/sh/156e5f02-c208-4b6b-9c11-0b1d85aec15e/2fa03ae9b6f7216329c44f344c62770d

Stuart Armstrong. How we’re predicting AGI

– more or less the same as is written in his blog post on LessWrong.
Conclusions:
– our own opinions are not reliable;
– philosophy has some things to say;
– proposal: increase your uncertainty;
– proposal: decompose your prediction as much as possible;
– do not rely on your gut feeling;

Andreas Skulimowski. Trends and Scenarios of Selected Key AI Technologies

Progress and Business Foundations

foresight – what may happen in the future;
forecast -what will happen;
prediction – is not something well defined;
foresight is now seen as a valid way to look to the future;
Delphi analysis – the methodological basis of doing foresight (it seems that he is following this).

Motivations, area of IT/AI foresight
– the more realistic a foresight project is, the more chances it has (I guess he is talking about European projects);
– AI seems not to be very realistic;

Development of complex system models (a retrospective)
– There is a long history of building complex system models
– In 1930s Forester and models with thousands of equations;

FuturICT
The technological focus areas:
– basic hardware and software technologies;
– Key IST application areas;
– Selected technological areas submitted by industrial projects;

Research Objectives;
– to elaborate an ICT/IS model suitable for forecasts, scenarios and recommendations;
– there was another one but I did not catch it…

IS & IT modelling
– separate models for the major components of the information society;
— it is much more useful to adopt simple model with accurate parameters than complex model with many parameters which cannot be estimated;
– specific models

– methodology;
– technologies and models;
– Foresight support system;

EC: National cohesion strategy

– The foresight process based on an ontological knowledge-base, intelligent autonomous webcrawlers and analytics.
– Then produced with analytical machines
– Recommendations were used by stakeholders in the industry;

Algorithms contained in Analytic Machines:
– adaptive trend-impact and cross-impact analysis;
– IT dynamic prioritisation;
– scenario analysis
– dynamic SWOTC
– recommendation package based on multicriteria outranking methods
– modelling methods: anticipatory networks – perhaps the most objective methods to anticipate the future;

IS scenario visualization
– He does spider diagrams and then connects these diagrams on a timeline. Such a visualization gives an image of how the variables change; a nice way of visualizing it;

Forecasts:
– verbal communications replaced by direct BBI – after 2030;
– brain accelerators – after 2020;
– singularity deferred (or deferred forever) by intelligence augmentation;

Knowledge, information and creativity support systems http://www.kicss2012.ipbf.eu (Krakow, Poland).

Anders Sandberg

Virtual lab animals: use simulations instead of real animals.
The problem: what can we really learn from emulations?
What is the moral weight of an emulation?

– Can software suffer?
– Better safe than sorry: assume that any emulated system could have the same mental properties as the original system and treat it correspondingly;

What is the harm of death?
– Suffering while dying
– having one’s experience being stopped
– being irreversibly erased or changed
– loss of identity
– bodily destruction

Something new presentation;

What is this;
– Needed to emulate brains
How can we change the social world? So what would happen if we had cheap enough emulated brains?
He will be using standard economic models;

– Low regulation competitive scenario
– post transition equilibrium

Robot implications:
– immortality – but can most afford it?
– travel – transmit to a new body (but is it secure?)
– nature – don’t need ecosystems to survive
– copies;
— train once use many; each hov few sources;
— population explosion (economy doubles monthly);
— wages fall to near hardware cost;

The large part of the talk is dedicated to the ability of the simulations to run on different speeds and implications of this to a social structure of the world.

Varieties of lives: the simulation can choose to remember only what is pleasant, not what is painful (even if statistically more time was painful).

Carl Shulman. Can unsecure WBEs create secure ones?

– you can look at WBEs from the outside or from the inside;
– within a human lifetime, the WBE era will be a very short period, because WBEs will run at much faster subjective speeds;
– can a human government exert effective control over a territory where:
— time runs 1,000–1,000,000 times faster?
– you will need proxies which can operate at much higher speeds in these worlds;

The question – do we want an uncontrolled WBE release?
Why not leave this to future generations? (it would be silly to ask Neanderthals to figure out the problems of industrial society)
What could humans know better than WBE humans?

*** Essentially analyzes (as well as the previous talk) a scenario explained in Cory Doctorow’s “The Rapture of the Nerds” ***

AGI follows WBE rapidly;
– the idea is that if you create WBEs which can be run at much greater speeds, then you can create AGI very fast in physical time;

A non-competitive WBE period can avoid an AGI arms race
WBEs and humans will diverge socially, probably because of differences in speed.

Özkural:
1) What kind of social and political changes would be acceptable for both machines and humans?

Keynote talk. Steve Omohundro. Autonomous Technology and the greater human good

selfawaresystems.com
https://complexity.vub.ac.be/gbii/sites/default/files/autonomous-technology-and-the-greater-human-good_annotated.pdf

1. Autonomous systems;
– autonomous if it takes actions for goals which are not completely specified by the creator;
– the system can surprise the designer;
– pressure toward autonomous systems in time-critical applications; it goes in that direction (cites US military reports etc.);

2. Rational systems;
– autonomous systems will be designed to be approximately rational;
– Iron Dome control;
— detection and radar;
— battle management and weapon control;
— missile firing unit;
— Goal: prevent incoming missiles from causing harm;
— utility function – it is not always rational to intercept a missile, because sometimes they are not harmful.
— weighting multiple situations
— maximize expected utility;

3. Universal drives;
4. Current vulnerabilities;
5. Safe systems;
6. Harmful systems;
7. Safe AI scaffolding strategy;

The Journal of Experimental and Theoretical AI will print the proceedings of AGI-Impacts.
