{{{#!rst
Current practices in knowledge representation for robotics
===========================================================

General
-------

- Name of the project
- Names of the main contributors

Knowledge model
---------------

- Main paradigms (like ontologies, fluents, semantic frames, relational
  databases...)
- How would the fact "The bottle (id 1845) is on the table (id 2365)" be
  represented? (an illustrative sketch appears at the end of this page)
- Which formalisms/standards/standard corpora are used (OWL, FrameNet,
  Cyc...)?
- Expressivity (for Description Logics, cf. http://www.cs.man.ac.uk/~ezolin/dl/)
- Open World Assumption or Closed World Assumption?
- Reasoning capabilities

  - Reasoner available?
  - If yes, is it an external tool or part of the KB?

- Capable of representing uncertain knowledge?
- Management of time?

  - How is time represented?
  - How is it associated with statements?
  - What is it used for?

- Has an associated rule language?
- Modifiable at run-time?
- A-Box only, or both T-Box and A-Box?

Area of use
-----------

- Task planning
- Plan representation
- Spatial mapping
- Dialogue grounding
- Learning/Experience modelling
- Human-robot interaction
- Other areas?
- References to the main experiments conducted with the KB

Features
--------

- Capable of fetching external sources of knowledge (like Wikipedia...)?

  - Automatically?

- Grounding mechanism?

  - Could you briefly describe how percepts are grounded in the KB?

- Integration with supervision processes?

  - Could you briefly give an example?

- Other features?

Other
-----

- Main programming language
- How does the KB integrate into the robotic architecture? (language
  bindings, middleware wrappers...)
- Has it been deployed on a real, physical platform?

  - If yes, has it been deployed on more than one platform?

- Project homepage, API documentation, tutorials/examples
- Main publications that present the project
}}}
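{{{#!rst
Appendix: an illustrative representation
----------------------------------------

To make the fact-representation question above concrete, here is a minimal
sketch of *one* possible answer, expressed as RDF triples built with Python's
``rdflib``. The namespace, the class names ``Bottle`` and ``Table``, and the
``isOn`` property are purely illustrative assumptions, not conventions of any
particular KB surveyed here::

    from rdflib import Graph, Namespace, RDF

    # Hypothetical namespace for the robot's knowledge base (an assumption,
    # not part of any surveyed system).
    KB = Namespace("http://example.org/kb#")

    g = Graph()
    g.bind("kb", KB)

    bottle = KB["bottle_1845"]  # individual for the bottle with id 1845
    table = KB["table_2365"]    # individual for the table with id 2365

    # A-Box assertions: class membership and the spatial relation.
    g.add((bottle, RDF.type, KB.Bottle))
    g.add((table, RDF.type, KB.Table))
    g.add((bottle, KB.isOn, table))

    # Serialise to Turtle for inspection.
    print(g.serialize(format="turtle"))

An ontology-based system would typically pair such A-Box assertions with
T-Box axioms (e.g. declaring ``isOn`` as an object property between physical
objects); other paradigms, such as fluents or relational databases, would
encode the same fact quite differently.
}}}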