It’s probably important that how you define the relation ‘analogous’ (supposing it’s done as a relation…) is essentially topological: two things should be analogous when they have the same part-types and relations; any aspect of quantification or other values/parameters should be ignored. For example: I have two types, Chair and Furniture26; Chair has four of subtype ‘leg’ and Furniture26 has only two. They would still be considered analogous, but! LESS analogous. In other words, how analogous two things are should probably vary on a continuum. Also, it’s probably a good idea not to bother trying to build a full A.I. with this, but instead a very robust analogy decider/generator. Although it may only simulate (as opposed to emulate) human analogy-making (since its substrate doesn’t have the character of loads of neurons interacting), it could lead to an interesting mathematical construct if it’s kept concise enough: a deeper kind of isomorphism, and perhaps a different sort of intelligence, which would nonetheless be recognized as such (of course, as a component in a much more complex system).

It’s interesting that the question of whether the most basic construct should be ‘type’ or ‘relation’ is so close to arbitrary. I do think that having relation be a type will lead to a cleaner formulation, but perhaps not, and perhaps only barely. Is there something to the fact that they are so ‘close’ in this sense? Actually, maybe the best/simplest thing would be to say that there are no ‘relations’ as a separate thing: a ‘type’ is a unique identifier and a set of zero or more other types called ‘subtypes’; if a type has more than zero subtypes, it can be called a ‘relation.’ I like that formulation the best. To get things off the ground, it would be fun to experiment with a set of built-in relations that take any number of subtypes and are meant to map to basic, physically derived human relations: depends, above, contains, etc.
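A toy sketch of the continuum idea, with all names hypothetical: reduce each type to a bag of part-type counts, score 1.0 for identical structure, and let the score drop (without vanishing) as counts diverge. The Chair/Furniture26 example above then comes out analogous, but less than fully.

```python
# Hypothetical sketch: a type reduced to a bag of part-type counts.
# Shared part-types make two things analogous; diverging counts make
# them LESS analogous, so the score varies on a continuum.

def analogy_score(parts_a, parts_b):
    """Return a score in [0, 1]: 1.0 for identical part structure,
    lower as part-types or their counts diverge."""
    all_types = set(parts_a) | set(parts_b)
    if not all_types:
        return 1.0
    total = 0.0
    for t in all_types:
        ca, cb = parts_a.get(t, 0), parts_b.get(t, 0)
        # per-part-type similarity: 0 if one side lacks the type entirely,
        # otherwise the ratio of the smaller count to the larger
        total += min(ca, cb) / max(ca, cb) if min(ca, cb) > 0 else 0.0
    return total / len(all_types)

chair = {"leg": 4, "seat": 1, "back": 1}
furniture26 = {"leg": 2, "seat": 1}

# Chair vs. itself: perfectly analogous
assert analogy_score(chair, chair) == 1.0
# Chair vs. Furniture26: still analogous, but LESS analogous
s = analogy_score(chair, furniture26)
assert 0 < s < 1
```

The particular scoring rule is arbitrary; the point is only that ignoring counts entirely gives the topological yes/no, and folding them back in gives the continuum.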
My current thinking on the relationship between the two is: the most primitive things are types, which are just unique identifiers; relations are a special ‘type’ constituted of a unique identifier and a set of other types. In trying to visualize this scheme, I see types as colored circles and relations as larger colored circles with colored circles embedded in them; of course this recurses. Could this visualization be an aid to thinking in this domain, or maybe the basis for a piece of software for constructing I.A. programs?

An abstraction on a Type is a kind of operation that removes attributes (or how about one that changes some attributes from Types to Structures?). In order to facilitate intelligent abstraction, attributes can be ranked by their contribution to a Type's identity. To form an analogy, abstract some Type, then seek other Types that are more particular; then, when creating a theory, try assuming that some relations from the new, particular Type also hold for the original particular Type. This is an analogical inference. The notion of an abstraction rank could be useful for discussing 'concreteness.'

Interesting to note that the labeling syntax in attribute lists is like defining new local types. This may be the justification for having multiple attributes of the same Type explicitly listed: they are actually different types, the thing distinguishing them being... a role? a scope/context? presence in a particular set? Role: the active Type if a Structure has multiple Types. Maybe when a Type appears in a labeled attribute declaration, the Type adopts a new Role for the scope of the following block. Maybe a Role is the most specific Type a Structure has in a given scope.
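The "relation is just a type with subtypes" formulation is compact enough to state as code. A minimal sketch, with hypothetical names, showing that one construct suffices and that the circles-within-circles picture recurses:

```python
# Minimal sketch of the single-primitive scheme: a Type is a unique
# identifier plus zero or more embedded subtypes. A Type with at least
# one subtype may be called a 'relation'. Nothing else is needed.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Type:
    name: str
    subtypes: Tuple["Type", ...] = ()

    @property
    def is_relation(self):
        return len(self.subtypes) > 0

# plain types: just identifiers (small colored circles)
leg = Type("Leg")
seat = Type("Seat")

# a 'relation' is the same construct with subtypes embedded in it
# (a larger circle with circles inside)
above = Type("Above", (seat, leg))

assert not leg.is_relation
assert above.is_relation

# and of course this recurses: relations can embed relations
supports = Type("Supports", (above, leg))
assert supports.is_relation and supports.subtypes[0].is_relation
```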
Attribute labels are a way of making a Type more specific, but only in the sense that its concreteness rank is one higher, and only vacuously so. Consider: if an attribute label is actually a Type definition, its TypeLineage is like grandparent -> parent -> originalType -> attributeLabelType. If concreteness is a function of TypeLineage length, the labeled Type is one more particular than the input Type. It seems the use of these labels speaks to a different sort of variation in type, namely: the type structures for two entities can be the same, except that they are intended to be used for different things, and we capture this distinction by labeling them differently, which introduces two new types. So we've accomplished capturing this distinction using the original type system.

Can the StructureSelector take advantage of the fact that Structures are Types? Note the relation between primitive Types and Monads (or whatever that Haskell/category theory thing is): primitive types are capable of side effects/communication with outside systems. Remember, when describing the language, that it's a generalization of grammar to constrain not just sequences: it's probably possible to write an I.A. -> BNF compiler, which is sort of a linearization process, making I.A. a higher-level grammar description language. Worth thinking about how to convert arbitrary Types to restrictions on symbol ordering in sequences.

At the start of running an I.A. program, you have a Type graph and a Structure graph; time evolution of the program consists entirely of operations on those data structures. Maybe the way primitives work is that they are connections to an application which is running I.A. as a script, i.e. I.A. is used solely to control other programs. All Types have Type as an origin type implicitly if they don't give one explicitly. TypeRecognitionStatements are TypeExpressions that evaluate to the True Type or the False Type.
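The "concreteness as a function of TypeLineage length" idea can be checked with a small sketch (origin links and names hypothetical): an attribute-label Type wraps its input Type, so its lineage is exactly one longer.

```python
# Hypothetical sketch: concreteness measured as TypeLineage length.
# An attribute label is treated as a new local Type whose origin is the
# labeled input Type, so the label is one step more particular.

def lineage(type_table, name):
    """Walk origin links: ... -> grandparent -> parent -> originalType."""
    chain = [name]
    while type_table.get(name) is not None:
        name = type_table[name]
        chain.append(name)
    return chain

def concreteness(type_table, name):
    return len(lineage(type_table, name))

# origin links: child -> origin Type (None marks a root)
types = {"Furniture": None, "Chair": "Furniture", "legs": "Chair"}
# 'legs' stands for an attribute-label Type defined over Chair

assert concreteness(types, "Furniture") == 1
assert concreteness(types, "Chair") == 2
# the labeled Type is exactly one more particular than its input Type,
# which is the 'vacuous' increase discussed above
assert concreteness(types, "legs") == concreteness(types, "Chair") + 1
```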
Maybe they are Functions, in particular Boolean functions. Maybe have a built-in '_' ('don't care') Structure, which causes the runtime to generate some Structure that meets the Type constraints. Or maybe do a typed version, so that something like (Type) means it should be collapsed into a structure of the given type (and maybe '(Type, Guide)' if we'd like to guide the parameter selection). This could be very useful: we only ever write TypeRecognitionStatements, but after attaching a substitution clause to a type, replacing all attributes with Structures, we also substitute the Type attribute parameters in the Relation expressions with the Structure substitutes; now we have a StructureRecognitionStatement: a TypeRecognitionStatement with all parameters being Structures.

Maybe Functions and Structures are equivalent... we just get function behavior by supplying the ->{} block, which allows us to substitute the entire Structure with another one, using operations on the Structure's attributes. And perhaps all Types come equipped with an implicit ->{} block that returns a Boolean Type (which can produce Structures corresponding to 'true' and 'false'). Think more about how to do substitution.

Analogy-making is typically between domains (maybe the Domain concept will work for this), especially going from a poorly understood domain to a better understood one. Finding a representation within some domain seems to be the key concept. A way of creating a 'Domain-neutral' representation of a Type is to make all of its attributes of Type 'Type.' Additionally, intermediate steps in this direction could be taken. This means there is another way of creating an abstraction: rather than eliminating attributes, switch out their Type for another higher up in their lineage.

Type: //need to think about how to define a PrimitiveType to complete this (BTW, the idea here is to define a Type type within I.A.)
+{originType: Type, componentExpansion: Type, relationsExpansion: Type, componentSubstitutions: Type}

aStructure: AType //has two attributes
+{Type1, Type2}
-{
  SomeTypeRecognition{type1}
  AnotherTypeRecognition{type2, type1}
}
={structure1, structure2, Type1Structure, Type2Structure}

MultipleInheritenceExample:
Mom
+{Attr1, Attr2}
Dad
+{Attr1}
-{
  ARelation{Attr1}
}
typeCheck{
  //invoked when an attribute substitution is requested, with all new attrs being Structures
  //check whether the Structures are descended from Types appearing in the present attr list
  //create Boolean structure 'true' if condition met, 'false' otherwise.
}

Structure: Type whose attributes are PrimitiveTypes or other Structures, or a StructureLabel (starts with lowercase letter?)

Structure: Type
-{
  OR{PrimitiveType{attributes}, StructureLabel{attributes}}
}

TestType:
ParentType
+{
  A[attr1: Type1, attr2: Type1, attr3: Type2]
  R[
    relation1{attr1, attr2}
    relation2{attr3, attr1, attr2}
  ]
}
-{
  A[parentAttr1: TypeX]
  R[
    parentRelation3{parentAttr3, parentAttr2}
  ]
}
Parent2Type
-{
  A[parentAttr4: TypeY, parentAttr2: TypeZ]
}

Ideas for built-in, core types:

Generalization: //increases generality of the attribute Types
+{Type}

Abstraction: //remove attributes
+(Type)

Particularization:
+{Type}

Analogy:
+{original: Type, analogy: Type}
-{
  Equivalence{Particularization{Generalization{original}}, analogy}
}

Inference:
+{premises: Type, }

AnalogicalInference:
+{original: Type, analogy: Type, inference: Type}
-{
  Analogy{original, analogy}
  AttributeExpansion{original, analogy, inference}
}

OneWayPartialAttributeMerge:
+{original: Type, mergeSource: Type, merge: Type}
-{
  //merge has all the attributes of original, plus at least one held by mergeSource and not held by original.
}

Equivalence:
+{first: Type, second: Type}
-{
}

Type:
+{attributes: } //Think on this one... probably related to grammatical description of TypeDefinition...
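The Analogy core type above says two Types are analogous when some Generalization of the original can be Particularized into the analogy. A crude sketch of that pipeline, using attribute-dropping as the abstraction move (all sets and names hypothetical):

```python
# Hypothetical sketch of the Analogy core type: original and analogy
# stand in Analogy when some abstraction of the original (here: a
# nonempty subset of its attributes) can be particularized to sit
# inside the analogy's attributes.

from itertools import combinations

def generalizations(attrs):
    """All abstractions obtained by dropping zero or more attributes."""
    names = list(attrs)
    return [frozenset(c) for r in range(len(names) + 1)
            for c in combinations(names, r)]

def is_analogy(original, analogy):
    """Holds if some nonempty generalization of `original` is contained
    in `analogy`'s attributes (Particularization{Generalization{...}})."""
    return any(g and g <= analogy for g in generalizations(original))

chair = {"legs", "seat", "back"}
stool = {"legs", "seat"}
cloud = {"vapor"}

# a stool shares a generalization with a chair...
assert is_analogy(chair, stool)
# ...a cloud does not
assert not is_analogy(chair, cloud)
```

This ignores relations entirely and enumerates subsets exhaustively, so it is only an illustration of the shape of the definition, not of a practical decider.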
Chair: Furniture
+{
  legs: Cylinder,
  seat: Slab,
  back: Slab,
  Equality{legs}
  Count{legs, 4}
  Adjacency{legs, seat}
}
-{
  cover: Cloth
}

myChair: Chair {legs: myLegs, seat: mySeat, back: myBack}

Tentative BNF for various types in I.A.:

typeExpression : type ('.' type)* ;
type : typeDef | typeLabel ;
typeDef : ;
typeApplicationStatement : ;
typeLabel : ;
attributeList : (type)? (',' type)* ;

##################################################
More Recent

type : (typeLabel)? unlabeledType ;
unlabeledType : typeRef | typeLiteral ;
typeRef : plain ol' identifier ;
typeLiteral : (typeSection)+ ;
typeSection : (type)? (typeBlock)? (typeBlock)? (unlabeledTypeBlock)? ;
typeBlock : typeBlockLabel unlabeledTypeBlock ;
unlabeledTypeBlock : '{' typeList '}' ;
typeList : (type (',' type)*)? ;
typeBlockLabel : '+' | '-' ;

//Structures are syntactically identical to Types; they are distinguished at runtime by the fact that all their component types are Structures, too (primitive structures corresponding to primitive types make sense, since the primitive types are generic and we'll have to supply parameters in order to refer to a specific one).
//A structure construction looks like a labeled type using only a substitution block, and using that block to replace all Type attributes with Structures.
//E.g. myChair: Chair{legs: myLegs, seat: mySeat, back: myBack}
//Chair: Object +{legs: Cylinder, seat: Slab, back: Slab, //relation list }
///IMPORTANT!: In the Type graph, composition is the only thing that creates graph relationships (i.e. attributes and relations are children) -- the 'origin Type' that's part of the Type literal syntax is not a parent; it's just an efficient way of specifying a certain group of Types.

Primitive relations should act sort of like primitive attributes: we have some built-in ones, which correspond to things like less-than, ==, etc. But a primitive relation could also be a complex Boolean function supplied by something outside of the app.
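The typeCheck behavior sketched earlier (verify that the Structures supplied in a substitution like myChair: Chair{legs: myLegs, ...} descend from the declared attribute Types, yielding the 'true' or 'false' Boolean structure) can be sketched as follows; the origin table and structure names are hypothetical:

```python
# Hypothetical sketch of typeCheck: when an attribute substitution is
# requested with all new attrs being Structures, check whether each
# Structure descends from the Type declared for that attribute slot.

ORIGIN = {"Furniture": None, "Chair": "Furniture",
          "Cylinder": None, "Slab": None}

# each Structure's most specific Type
STRUCT_TYPE = {"myLegs": "Cylinder", "mySeat": "Slab", "myBack": "Slab"}

def descends_from(type_name, ancestor):
    while type_name is not None:
        if type_name == ancestor:
            return True
        type_name = ORIGIN.get(type_name)
    return False

def type_check(attr_decls, substitutions):
    """Create the Boolean structure 'true' if every substituted
    Structure descends from its slot's declared Type, else 'false'."""
    for attr, declared in attr_decls.items():
        struct = substitutions.get(attr)
        if struct is None or not descends_from(STRUCT_TYPE[struct], declared):
            return "false"
    return "true"

chair_attrs = {"legs": "Cylinder", "seat": "Slab", "back": "Slab"}

ok = type_check(chair_attrs,
                {"legs": "myLegs", "seat": "mySeat", "back": "myBack"})
assert ok == "true"
# a Slab structure in the Cylinder slot fails the check
bad = type_check(chair_attrs,
                 {"legs": "mySeat", "seat": "mySeat", "back": "myBack"})
assert bad == "false"
```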
When you have nested Relation statements, it's not that the 'return' value of the inner statement is made an argument to the outer statement; both should remain in place until they are decomposed into primitive relations, and then evaluated from the inside out. Perhaps if some relation doesn't hold, it returns the 'empty' relation and evaluation continues (think about the situation OR{AlphabeticalOrder{y, x, z}, x}). OR joins the elements of all the given Types into one Type, and whether it 'holds' or not (this is better than saying it returns true, since it's not a function from some perspectives) depends on whether the Type resulting from the joining is empty or not. The idea is that some relations will fail when given the empty Type, and some are okay with it. A numerical comparison, for instance, I think would fail if given an empty Type (or maybe it always succeeds if only one argument is empty?).

It seems that the main task in writing the recognizer is figuring out good ways of establishing independence between parts. Maybe when reading the I.A. program in the first place (it being like a grammar for a parser in this case), the Type graph resulting from parsing it can be analyzed into independent parts. A potential way of establishing independence is first finding out which Types are linked together in substitution clauses; I think this will result in one large nested substitution clause, which (eventually, once filled with Types, i.e. not in this pass) can then be evaluated from the inside out, allowing failures of inner relations as long as the outer relation holds (I'm thinking of the OR situation here, which can still hold when some of its members fail; maybe it's a special case, though).
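The OR semantics above (join the given Types into one Type; 'hold' iff the joined Type is nonempty; failed inner relations yield the empty Type and evaluation continues outward) can be sketched directly; the relation implementations here are hypothetical stand-ins:

```python
# Hypothetical sketch: relations evaluate to Types (sets of elements).
# A relation that doesn't hold returns the empty Type rather than
# halting evaluation; OR joins its arguments' elements into one Type
# and 'holds' iff that joined Type is nonempty.

EMPTY = frozenset()

def alphabetical_order(*elems):
    """Return the elements as a Type if they are in order,
    else the empty Type (the relation 'fails' without halting)."""
    if list(elems) == sorted(elems):
        return frozenset(elems)
    return EMPTY

def OR(*types):
    """Join the elements of all given Types into one Type."""
    return frozenset().union(*types)

def holds(t):
    return t != EMPTY

# AlphabeticalOrder{y, x, z} fails (it yields the empty Type), but the
# enclosing OR still holds because of its other member.
result = OR(alphabetical_order("y", "x", "z"), frozenset({"x"}))
assert holds(result)

# with no surviving members, OR does not hold
assert not holds(OR(alphabetical_order("y", "x", "z"), EMPTY))
```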
However, remember, we are only analyzing the Type graph at this point, so we won't be evaluating that big nested Relation. I think what we have here (once all such clauses are found) is a partition of the Type graph by related relations, so that during parsing we can run each of these graphs independently. This is very important considering the time/space complexity of the parsing algorithm is related to the graph size here, and it's definitely not linear (if it were, splitting into pieces wouldn't matter). I picture the step of looking for one of these independent partitions of the Type graph as a sort of upward dredging starting from the primitives.

It's probably useful to look at the Type graphs formed by parsing some I.A. programs to get a better idea of what sorts of relationship patterns can exist. For instance, the above partitioning problem can probably be reduced to a common graph algorithm for finding a subset of the graph where all the nodes are linked together by a certain edge type: we can use one edge type for navigating the graph and looking for these islands, but it's another edge type that actually holds each island together.

It's going to be important to figure out exactly when Types can be considered equivalent or not. Clearly a reference to a Type (e.g. an appearance of the TypeRef or TypeName or whatever for the type) is equivalent to the Type it references, but what are the other situations? For example, if we have an I.A. program where there is a single Primitive type, are we creating new Types wrapping it by referencing the Primitive within our secondary Types? It may be possible to change the TypeLabel type to take Type ':' Type, instead of TypeName ':' Type.

A good overall system architecture is to have the Java program compile I.A. programs to a simple graph description language (basically just serializing the intermediate representation), which can then be read directly into the Type graph of another application (say, a Swift application).
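The island-finding reduction is indeed a standard one: connected components over only the "binding" edge type. A small sketch, with the edge-type names ('nav' and 'bind', e.g. co-occurrence in a substitution clause) being hypothetical:

```python
# Hypothetical sketch of the partitioning step: one edge type ('nav')
# lets us walk the whole Type graph, but a different edge type ('bind',
# e.g. co-occurrence in a substitution clause) is what actually holds an
# island together. Islands = connected components over 'bind' edges.

from collections import defaultdict

def islands(nodes, edges):
    """edges: list of (kind, a, b). Components over 'bind' edges only."""
    adj = defaultdict(set)
    for kind, a, b in edges:
        if kind == "bind":
            adj[a].add(b)
            adj[b].add(a)
    seen, parts = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        parts.append(comp)
    return parts

nodes = ["A", "B", "C", "D"]
edges = [("bind", "A", "B"), ("nav", "B", "C"), ("bind", "C", "D")]
parts = islands(nodes, edges)

# the 'nav' edge links the islands for traversal but does not merge
# them, so the recognizer can run each island independently
assert sorted(map(sorted, parts)) == [["A", "B"], ["C", "D"]]
```

Since the parsing cost is superlinear in graph size, running each island separately is what makes the split worthwhile.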
Is it feasible, once the above structure is set up, to have most of the parser part of I.A.'s runtime be written in I.A.? Order matters for attributes. Use Set and List objects for accepting groups of the same Type.

There are probably lots of things that can be done to transform the graph/tree representations of the program to make them more useful for different scenarios. I'm kind of thinking of this as different perspectives on the same problem, each tree variant being one perspective. We'd like to keep each of the perspectives around and switch between them as the current requirements are better suited to one or another. One example is the transpose concept, where you interchange children with siblings.

The ‘concept Turing machine’ idea can probably be realized through I.A. It seems that what is required is a formalization isomorphic to human concepts—and then!—to express a particular ‘concept’ in that formalization.
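Under one possible reading of the transpose perspective (this interpretation is my own, not fixed by the notes): treat two adjacent levels of the tree as rows and columns of a grid and swap them, so child and sibling roles interchange, exactly as in a matrix transpose.

```python
# Hypothetical sketch of the 'transpose' perspective, under one reading:
# two adjacent levels of the tree are treated as rows and columns, and
# swapping them interchanges children with siblings.

def transpose(levels):
    """levels: a list of sibling groups (each a list of children).
    Returns the same grid with child/sibling roles interchanged."""
    return [list(row) for row in zip(*levels)]

# two siblings, each with three children...
tree = [["a1", "a2", "a3"], ["b1", "b2", "b3"]]

# ...becomes three siblings, each with two children
assert transpose(tree) == [["a1", "b1"], ["a2", "b2"], ["a3", "b3"]]

# transposing twice restores the original perspective, so both variants
# can be kept around and switched between as requirements dictate
assert transpose(transpose(tree)) == tree
```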