Bayesian Optimization is a derivative-free approach for the global optimization of expensive black-box functions. It builds a surrogate model of the objective and uses it to locate the optimum of an unknown function from which only samples can be obtained. It belongs to the class of sequential model-based optimization algorithms, which use past evaluations of the function to choose the next point to sample. Since the objective function is unknown, Bayesian theory treats it as random: a prior belief is placed over the function values, and as more values are observed, the posterior is updated according to the observation likelihood. An acquisition function, computed from the current posterior distribution over functions, then determines the next point to evaluate.
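The loop described above (surrogate model, acquisition function, sequential sampling) can be sketched in a few lines. The following is an illustrative toy implementation, not material from the tutorial itself: it fits a Gaussian-process surrogate with a squared-exponential kernel to a 1-D objective on [0, 1] and picks each new sample by maximizing the expected-improvement acquisition over a candidate grid. All names, kernel settings, and the toy objective are assumptions chosen for the sketch.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel between two sets of 1-D points.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and standard deviation at test points Xs
    # given observations (X, y); small jitter keeps K well conditioned.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    # diag(Kss - Ks^T K^-1 Ks); diag(Kss) = 1 for this kernel.
    var = 1.0 - np.sum(Ks * v, axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def bayes_opt(f, n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, n_init)   # initial random design
    y = f(X)
    grid = np.linspace(0.0, 1.0, 200)   # candidate points for the acquisition
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        # Expected improvement for minimization, with a small
        # exploration margin xi = 0.01.
        imp = y.min() - mu - 0.01
        Z = imp / sigma
        ei = imp * norm.cdf(Z) + sigma * norm.pdf(Z)
        x_next = grid[np.argmax(ei)]    # next point to evaluate
        X = np.append(X, x_next)
        y = np.append(y, f(np.array([x_next])))
    i = np.argmin(y)
    return X[i], y[i]

# Toy objective: a quadratic with its minimum at x = 0.6.
x_best, y_best = bayes_opt(lambda x: (x - 0.6) ** 2)
print(x_best, y_best)
```

Swapping the expected-improvement formula for another acquisition (e.g. upper confidence bound) changes only the few lines inside the loop, which is why BO frameworks treat the acquisition as a pluggable component.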
This tutorial has already been held. It was scheduled for Oct. 27th at 03.00am (UTC-4) or 08.00am CET (Central European Time).
Autonomous development needs a general-purpose theory, and experimental studies require such a theory. Finite automata (a.k.a. finite-state machines) are taught in almost all electrical engineering programs. However, Turing machines, especially universal Turing machines (UTMs), have not been taught in many electrical engineering programs and have been dropped as a required course in many computer science and engineering programs. This has created a major knowledge gap among many people working on neural networks for developmental AI. Without knowing UTMs, researchers have regarded neural networks as merely general-purpose function approximators instead of general-purpose computers. This tutorial first briefly explains what a Turing machine is, what a UTM is, why a UTM is a general-purpose computer, and why Turing machines and UTMs are all symbolic and handcrafted for a specific task. In contrast, a Developmental AI system must program itself throughout its lifetime, instead of being programmed for a specific task. The Developmental Network (DN) by Weng et al. not only is a new kind of neural network that avoids the controversial PSUTS (Post-Selection Using Test Sets), but also can learn to become a general-purpose computer by learning an emergent UTM directly from the physical world, as a human being does. Because of this fundamental capability, a UTM inside a DN emerges autonomously on the fly, realizing APFGP (Autonomous Programming For General Purposes) and enabling conscious AI through conscious learning. The three well-known bottleneck problems in AI (vision, audition, and natural language understanding) are all naturally addressed in the DN experiments to be presented in the tutorial.
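To make the notion of a Turing machine concrete for readers unfamiliar with it, here is a minimal simulator sketch (an illustration of the standard definition, not material from the tutorial): a machine is a handcrafted transition table mapping (state, symbol) to (new symbol, head move, new state), which is exactly the task-specific, symbolic character the abstract contrasts with a self-programming DN. The `flip` machine and all names are assumptions chosen for the example.

```python
def run_tm(rules, tape, state="q0", halt="halt", max_steps=1000):
    # Sparse tape as a dict; unwritten cells read as the blank symbol "_".
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        sym = tape.get(head, "_")
        # Look up the handcrafted transition for (state, symbol).
        new_sym, move, state = rules[(state, sym)]
        tape[head] = new_sym
        head += 1 if move == "R" else -1
    # Render the visited portion of the tape, trimming trailing blanks.
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: flip every bit, moving right until a blank is reached.
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "1011"))  # prints "0100"
```

Note that every behavior of such a machine is fixed in advance by its rule table; a UTM is simply a machine whose table interprets another machine's encoded table, which is what makes it general-purpose.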
This tutorial has already been held. It was scheduled for Oct. 29th at 11.05am (UTC-4) or 06.05pm CET (Central European Time).