10.5446/57428 (DOI)
So, as I was saying, thank you for being patient and waiting for us through the last two presentations. We have received some communication from our speakers; unfortunately they are not able to be here. So thank you for your patience. We would like to introduce our next speaker, Karel Charvát from the Czech Republic. Welcome, Karel. Hello, can you hear us? Yes, we can. I hope I'm pronouncing your name correctly. Yes, yes, it's Karel. I would like to also introduce you. Karel graduated in theoretical cybernetics. He is a member of the International Society for Precision Agriculture, the Research Data Alliance, the Club of Ossiach and other societies. Between 2005 and 2007 he was president of the European Federation for Information Technology in Agriculture, Food and the Environment. He now chairs the OGC Agriculture Domain Working Group, volunteers for the INSPIRE Hackathons, and is part of the national implementation team for the Czech INSPIRE geoportal. He has also participated in many other research projects, but I will let him present and tell us about the analysis of the potential needs of the agriculture sector for Earth observation. Thank you, Karel.

Okay, thank you for the introduction. I will try to speak about our vision of the needs of the agriculture sector for Earth observation. In the first presentation we heard about a solution. Now I will not speak about solutions at all; I will speak about what is needed, how the agriculture sector can benefit from Earth observation, and what we can do in the future to do better. This analysis was done by the European project EO4Agri; it was a two-year analysis of different groups in the agriculture sector. I will cover primary production and food production, but also the needs of the public sector, the needs of the financial sector, and the needs of information for food security, which is very close to the previous presentation on GEOGLAM.

I will start with some important ideas or messages, and with a short explanation of the agri-food sector. I participate in many projects and many meetings, and what we very often see as a mistake is that people try to develop solutions in which farmers use Earth observation data directly. This is not the reality. I know a few farmers who are really able to use Earth observation data directly, but most ordinary farmers need the knowledge which is generated from Earth observation data, and they need to use this information together with other information sources. We already worked with the European Space Agency around 2005 on a model of the added-value chain. I think that until now this has not been fully implemented, and this is the future: how we could deliver the benefits of Earth observation data to farmers. We need to include the different producers of information, the different stakeholders, in the chain and find a way to combine all these information sources for farmers.

As I mentioned, another important player can be the food industry. Generally, the food industry is on one hand interested in information about the potential of the market. Companies focused on high-quality production very often also support what we call precision farming, and they support services for farmers. They need to guarantee that farmers and suppliers deliver top-level production, and they run such services for farmers. There are already several organizations doing this in the world.
We can mention Nestlé or Barilla; in the Czech Republic, for example, breweries have also tried to support farmers with this type of knowledge. Then the public sector: I am from Europe, and outside Europe there are sometimes different rules, but in the European public sector we have the Common Agricultural Policy. On one hand it controls the subsidies and the payment system. What is important for the future is that it should not only be used for restrictions and for controlling farmers; the public sector can also help and cooperate on data sharing, to make it possible to provide some services to farmers more cheaply. So it is important to connect this into the same chain. Financial institutions are interested in information about the market, but insurance and reinsurance companies are also interested in minimizing and monitoring damages. Again, there could be some common interest with farmers, so this is another player that has to be included in the chain in some way. And of course there is global food security; we heard a lot about GEOGLAM before. This is part of it, but here we are on two levels: one level is the world level, about global production, and the second is how to help farmers in developing countries increase their productivity and make their agriculture more sustainable.

This slide is very old; we made it around 2008, in a previous project called FutureFarm. Already at that time we recognized that there are trends or challenges which pull in opposite directions. We need to increase food quality and safety, but the population is growing and we also need to find a way to increase global production. This is not a problem in Europe, but it is on the global scale. There is competition between production of food and production of energy. There are more such challenges pulling in opposite directions, and we need to find solutions and optimal methods, and Earth observation could be one of the sources. Now in Europe we have the Green Deal, which defines some very ambitious goals: reduce the amount of nitrogen and fertilizer, decrease the usage of pesticides, increase biodiversity. The Sustainable Development Goals, which came earlier, point in a similar direction; the European Green Deal and the Sustainable Development Goals are in some way in line. Again, these are the things I mentioned that we discussed already many years ago: zero hunger, food quality, a clean environment. There are many things which need to be solved in the future, and the question is how Earth observation can help with these goals on the local but also on the global scale.

In our project we ran a questionnaire. This was in Europe, and you can see that the average farm size is relatively low; only 20% of the farms are bigger than 1,000 hectares. Of course it differs between countries, and in developing countries the share of small farms is much higher. So we have a topic for discussion: how we can help these smaller farms with Earth observation. Another question we discussed with the community was about the resolution of the satellite data, and you can see that only for approximately half, or a little more than half, of the farms are the most common open data, like Copernicus Sentinel-1, Sentinel-2 or Landsat Thematic Mapper, good enough; the rest require a higher resolution than is possible with these data, which is currently only supplied by commercial providers.
The next issue is the willingness of farmers to pay for these services, and you can see that most farmers are not willing to pay more than 5 euro per hectare for Earth observation services. Two thirds of the farmers are not willing to pay more than 5 euro per hectare for this type of service, and again this is a limitation. So we can say that in future agriculture we need to produce more globally, with higher quality, using less land and fewer inputs at the same time. This is important, and the resources are limited, because for example the willingness of farmers to pay for these services is limited. We will only be able to do this if we can build good knowledge management. I mentioned that farmers are not interested in the data; farmers are interested in the knowledge and wisdom, in understanding why they should do something, and it is necessary to connect a number of organizations and a number of technologies to get from the raw data to this final wisdom.

We analyzed which data were most often required by the different groups. Here you can see the data most required by the agri-food group, and what was very interesting is that the most often mentioned were the weather forecast and data related to climate. We analyzed these things not only in Europe but also in Africa, and the requirements related to weather, climate, soil and water were common to both Europe and Africa, so I expect this could be something which is important globally. In the financial sector the main requirements are for the potential of biomass production, but for insurance, for example, mapping of different damages and diseases is important. If you look at these requirements, they can in some way be combined with the requirements of farmers. For the public sector, again, if you go more deeply into the common requirements, you can see that a lot of things which could be considered useful by the public sector can also be useful for farmers or for other organizations. On the global scale I think it is in line with what was mentioned in the GEOGLAM presentation; a lot of it is related to global production but also to environmental issues.

About the data: data can be an important source of information. It is useful that Copernicus and Landsat data are open, and this is the reason why people try to use these data as much as possible. But in many cases there are requirements for better spatial resolution, new bands and denser time series. We now of course have a big problem with clouds in the Landsat and Sentinel-2 data, so it is necessary to look for ways to combine these different data to have real coverage during the season. It is important to find ways to combine them, and sometimes the data need to be combined with aerial or drone data. It is also important that a number of delivery platforms for these data now exist; those of you from Europe probably know that in Europe we now have the five DIAS platforms. Until now these DIAS platforms were financed mainly by public money, but it is expected that from 2022 they have to be self-financing. Then it is a question whether all five DIAS platforms will be able to run on the basis of commercial services.
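The cloud-gap problem mentioned above is essentially a compositing task. As a minimal sketch, not something from the EO4Agri analysis: assuming you already have a stack of co-registered scenes and matching boolean cloud masks as NumPy arrays, a per-pixel median over the cloud-free observations gives a seasonal composite:

import numpy as np

def cloudfree_composite(scenes, cloud_masks):
    """Per-pixel median of cloud-free observations.

    scenes      -- array of shape (n_scenes, rows, cols), e.g. one band over a season
    cloud_masks -- boolean array of the same shape, True where a pixel is cloudy
    """
    stack = np.where(cloud_masks, np.nan, scenes.astype("float64"))
    composite = np.nanmedian(stack, axis=0)   # ignore cloudy observations
    gaps = np.all(cloud_masks, axis=0)        # pixels never seen cloud-free stay NaN
    return composite, gaps

# Hypothetical example: random data stands in for a season of acquisitions.
rng = np.random.default_rng(0)
scenes = rng.random((6, 100, 100))
masks = rng.random((6, 100, 100)) < 0.4       # roughly 40 percent cloud per scene
composite, gaps = cloudfree_composite(scenes, masks)
print("pixels with no cloud-free observation:", int(gaps.sum()))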
The second thing is knowledge building: you do not only deliver data, you need solutions which are able to derive information and knowledge from these data. There has been huge investment in Europe in large infrastructures; here you can see schemas of some big projects which were or are being financed, such as EUXDAT, AFarCloud and DEMETER. So there are a number of parallel solutions, but it is still necessary to find a way for these solutions to become self-financing, and to guarantee that these platforms will be operational after the end of the projects.

And what can we do to deliver better services for farmers? There is large investment, but many applications are highly fragmented. There are a lot of apps developed on top of these solutions, but they do not integrate all the information. There is an urgent need to combine private and public investment to make better use of the data and also to better support data discovery. We have very good catalogues for primary data, but what is not solved, for example, is having an equally good description for derived data, because it is necessary to understand which methods were used for the analysis, how it was analyzed and what the input data were. There is still a gap here which has to be overcome in the future.

As I mentioned at the beginning, both Europe and the United Nations are looking at sustainable, environmentally friendly production, and the plans are ambitious. What is important is that these environmental tasks must be integrated with agricultural production, and then again there is the question of how this should be financed and how it could proceed. We look a lot at precision farming, and what I would also like to stress here, if we are speaking about environmental protection and methods: until now precision farming is most often understood as variable rate application, recognizing where to put more fertilizer and where to put less. What is very important is to also look at timing, because we already know that, for example for nitrogen, good timing can lead to better usage of nutrients by crops and also reduce losses to water sources. So it is important to support this as well. Another topic is that it is important to connect precision farming with other activities and to run public-private partnerships to reach these goals, because for industry, for the public sector, for everybody it is important to guarantee lower losses and more environmentally friendly production. It is then necessary to look at how different organizations can cooperate and aggregate the demand for Earth observation data, to reduce the cost of production and to deliver services to farmers at a cost which is acceptable to them, because for every farmer the benefit from a service has to be bigger than what they pay for it. As I already mentioned, we need to better support FAIR principles for accessing the data, and we need to collect provenance information about the origin of the data.

There are ten recommendations from EO4Agri, and my presentation here is partly connected with these recommendations: organize regular workshops and conferences where different stakeholders discuss how they can cooperate, and support cooperation of the different players from the public and private sector.
In research we need multi-actor research, because very often something is developed by IT experts or Earth observation experts, but if agronomists and experts from the whole domain are not included, we can never reach a result which will really help. We need to support FAIR principles and new metadata models. We need to look at the reuse of previous solutions, and I think, and here at FOSS4G this is very important, that in many cases the solutions are based on open source, and the use of open source can also lead to better reuse and extension of existing solutions. Sometimes very large projects are financed, but new methods can probably come from smaller projects, so we also need financing and instruments that support smaller projects. One key issue is standardization, because we need to standardize information, but we also need more lightweight standards. Many of the previous standards are very heavy and difficult to use by, for example, non-GIS specialists, so we need to move much more towards light APIs. We need to build new solutions where experts from different areas cooperate, and it is important to organize coordination actions, either from public money or on the basis of organizations where people put their effort together. As was mentioned, I am part of OGC, and inside OGC we are now trying to have regular meetings to discuss how different organizations can cooperate. Of course the biggest problem at all levels is legislation and financing; in Europe it will require inclusion in the reform of the Common Agricultural Policy, but we also need action on the global scale, because we need to help other countries make better use of Earth observation data, both for reducing poverty and for improving food production in those countries. Here you can see the team of organizations which participated in this study. All our materials and reports are publicly available on the web pages of the EO4Agri project, and they are also publicly available on ResearchGate. So anybody who is interested in our work in more detail can go there or contact me, and I will share our information with you.

Thank you very much, Karel, that was a very interesting presentation on what we can do for agriculture with Earth observation. We have a little time left for questions, and we already have a question in the chat asking how farmers will receive or access the processed data of the EO services: using a mobile or tablet app, or will it be given in a periodic report? I think that what we have recognized is that farmers are using computers and mobile apps, but what is important is that there is some local service provider who is able to integrate these different data, not only data from Earth observation but for example from machinery, and who is able to deliver this to the farmer as a service. Farmers also like not only to access the services; if you are able to communicate with them, you are able to explain to them why, for example, they have to apply more or less fertilizer. So it is really important to solve this question of the last mile, because there are many ideas that we will build some marketplace of applications for farmers, but usually this is not how farmers work.
Farmers need to have trust, and they like to have local or regional organizations which help them and provide services for them; this is, I think, the way we can reach the farmers. As for the idea that farmers will go directly to a DIAS platform: I know such farmers, yes, but in the Czech Republic maybe five or six. That is all. So it is necessary that there is somebody doing these services for them and working with them, and it is much better if it is a service organization the farmers are used to working with. If you go to the farmers as an Earth observation expert or as an IT person, they will not trust you; they need to speak with somebody who has agricultural knowledge.

Exactly. We have another question for you: aside from payment for Earth observation information, what is the next biggest barrier to Earth observation information for farmers? One barrier is that optical satellite data are often not available during the season because of clouds. With Sentinel-2 or Thematic Mapper, for example, this season in the Czech Republic we had only a few images which we were able to use, and farmers need information in time. On the other side, our group, for example, is trying to work with Sentinel-1, but until now the work with Sentinel-1 is not so advanced and not so common, though this is something that can be used. The other topic is the scale of the farms, as I mentioned: Landsat or Sentinel data are usually good for fields bigger than about five hectares. If you have small fields there is again a limitation. Of course we have Maxar, Planet and other data providers, but to be able to deliver services at a cost acceptable to farmers it is necessary to aggregate this information and aggregate the demand for it, and to cover the cost from more sources, not only from the farmers but, as I mentioned, from food producers and others. So it is necessary to build new models of public-private partnership, combining different requests and trying to reuse the same data for more purposes.

Alright, thank you very much for all the context. We can take another question if you have it; we will go over a bit until our next presenter connects. So if you have another question for Karel, please use this opportunity to ask it now. If not, we have included Karel's contact here; if he allows us, we can direct people to ask him further questions about his work directly. Yes, of course, I am ready to speak, and if there is interest in publishing my presentation I can send it. I try to have all my work publicly available, so I don't know whether FOSS4G will publish the presentations or how it works, but I can send my presentation, and you can also use this video recording; I agree to publish it. Thank you, Karel. For everybody listening, all the presentations will be shared in stream format. I am not sure exactly what the policy is for the presentations in other formats, but all these streams are recorded and will be released a few weeks from now, once everything is edited and put together in a streamlined way. Thank you very much, Karel. It seems that we don't have any other questions in the chat, but thank you for joining us and presenting.
Agriculture comprises vital economic sectors producing food, agro-industrial feedstock, and energy, and provides environmental services through managing soil, water, air, and biodiversity holistically. Agriculture, including forestry, also contributes to managing and reducing risks from natural disasters such as floods, droughts, landslides, and avalanches. Farming, with its close contact to nature, provides the socio-economic infrastructure to maintain cultural heritage. Farmers are also conservers of forests, pastures, fallow lands, and their natural resources and, in turn, of the environment. Agriculture today is a composite activity involving many actors and stakeholders in agri-food chains that produce and provide food and agricultural commodities to consumers. In addition to farmers, there are farm input suppliers, processors, transporters, and market intermediaries, each playing their roles to make these chains efficient. The presentation will present the analysis and the vision of the EO4Agri project about the role of Earth observations in agriculture. The increasing economic, social, and environmental needs of agriculture pose many challenges for the upcoming years. This topic is closely related to the strategies of the United Nations and the European Union on sustainability. The United Nations adopted 17 Sustainable Development Goals in 2015 as part of the 2030 Agenda for Sustainable Development. The European Union presented in 2019 the European Green Deal - a roadmap to make the European economy sustainable. This white paper aims to stress the importance of knowledge management for agriculture to address these challenges. The role of Earth observation in this knowledge management is analysed, including its current gaps and limitations. The white paper focuses on the definition of key problems, analysis of data gaps, delivery platforms, analytical platforms, and final recommendations for future policies and financing. This document serves as an input for the future Strategic Research Agenda and the Policy Roadmap.
10.5446/57421 (DOI)
And we will now proceed to hear about this experience. I am looking at the streaming screen. Yes, one second. There it is. Now it allowed it. Now we are good, right? Yes, now yes. The floor is yours for the presentation. Thank you. Well, I want to thank you for accepting the paper and for the opportunity to present this work, which took a lot of work. We did this in the municipality of Hohenau, in the south of Paraguay, in the department of Itapúa. As you may know, Paraguay is divided into departments, which are the equivalent of states in Brazil or provinces in Argentina, with a lower degree of autonomy. This is in the department of Itapúa, in the district of Hohenau, which is an area of colonies, in this case a German colony; around it, in the other districts, there are Japanese and Ukrainian colonies. It is an area of Paraguay with a rather European feel.

The work was done directly with the municipality. The problem we found is that most municipalities in Paraguay have an obsolete cadastre: the way information is managed is usually not efficient, the values used for tax collection are outdated, a lot of the work is done with CAD tools, and there are many communication problems. So we started working with this municipality, and what we aimed for was an integrated solution consisting of five steps: the diagnosis, the incorporation of a geographic information system, the update of the property frontages, the update of the constructions, and the creation of public policies.

When we started with the first step, the diagnosis, it consisted of work we did with the municipality over a few weeks, analyzing the documents they have, the processes they follow, the amount of information and how it is handled internally and externally. Broadly, what this diagnosis showed us was that the construction registry was completely outdated, the frontage registry was up to date, and there was little or no data flow between the different departments that work with this information. They also had a large amount of information in different formats: many things on paper, old plans, one plan in AutoCAD, another version of that plan in AutoCAD, then several versions of those same plans, and no single compendium. Still, something remarkable: they kept good order, despite some problems caused by the type of technology they were using and some bad habits. For example, something incredible for a municipality: they keep a copy of almost every property title. In the Hohenau municipality the cadastre department really works very well; when you go to pay your taxes they check whether they have a copy of your title, and if they don't, they ask you for it. This helps clarify any point regarding tax values, and in case of future conflicts there is a small municipal public registry. That seemed quite commendable to me.

So, what we did first was the migration to the geographic information system. But first I will explain a little bit about the property tax.
The property tax in Paraguay is based on two components: the value of the land and the value of the construction; these are added together and divided by 100. These are cadastral values, that is, not market values. The land value is derived from the type of frontage, meaning whether the access road to the house is asphalt, cobblestone or dirt, and from the area of the lot. The construction value depends on the type of construction and the built area. Those two values divided by 100 give the property tax.

So the first thing we had to do to start this work was to migrate all the information they had. They had a CAD database with 120 layers of information, plus subdivisions that were kept outside of it in separate files. We brought everything together, migrated as much information as possible with all its labels, and converted it into 9 layers inside an institutional GIS. With these layers the work became much simpler, and it is the backbone of what we did. The institutional GIS runs on an internal server of the municipality, on top of a PostgreSQL database; we use QGIS and connect through PostGIS, and three departments connect to it, so I would consider it an institutional GIS. There is the cadastre department, which generates the parcels: when you want to divide or merge a property, it goes to the cadastre department and is approved there, and when that division is made it is loaded into the institutional GIS. The public works department then has access, and when you want to build a house they create the building polygon and do the inspections. The environment department is in charge of the garbage collection fee: once there is a construction, the values differ depending on whether it is a house or a business.

For the first time we managed to bring everything together in the institutional GIS, so requesting files from one department to another is over; what used to happen was that once a month the CAD file was copied from a computer in the cadastre department and passed to the public works department and the environment department. Internally this also generated a lot of conflict in the cadastre department, where the GIS is maintained mostly by the people most involved in the work; there were two people, and there was always the risk of deleting something or making a mistake while passing the work back and forth, and they could not work on it at the same time. With the institutional GIS, the cadastre department can work online simultaneously, and the other departments also have access to the information, but each with its own role, with an audit system, and with access to the different layers and restrictions according to each department's responsibilities. We set this up so the project would be sustainable, because one of the things the diagnosis showed us was that an update had been done 20 years ago, but it gradually fell out of date and a lot of information was lost with the changes of authorities and policies over those 20 years.
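To make the tax arithmetic described above concrete: the property tax is the sum of the land value and the construction value divided by 100, where the land value depends on the frontage type and lot area and the construction value on the construction type and built area. A small illustrative calculation in Python; all unit values here are made up, not Hohenau's actual cadastral tables:

# Hypothetical cadastral unit values (currency units per square metre); the real
# tables depend on the frontage types and construction categories set by the municipality.
LAND_VALUE_PER_M2 = {"asphalt": 60_000, "cobblestone": 40_000, "dirt": 20_000}
CONSTRUCTION_VALUE_PER_M2 = {"first_class": 900_000, "standard": 500_000, "wood": 250_000}

def property_tax(frontage, lot_area_m2, construction_type, built_area_m2):
    """Annual property tax = (land value + construction value) / 100."""
    land_value = LAND_VALUE_PER_M2[frontage] * lot_area_m2
    construction_value = CONSTRUCTION_VALUE_PER_M2[construction_type] * built_area_m2
    return (land_value + construction_value) / 100

# Example: a 360 m2 lot on a cobblestone street with a 120 m2 standard house.
print(property_tax("cobblestone", 360, "standard", 120))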
Fortunately, one of the officials participated in both projects, so she helped us try to rescue as much information as possible. Something important we did was update the pavement types, because they had this in analog form but recorded it in the municipal registers. We did the update, checked and compared it with the records they had, and the information was up to date, just kept in a different way; it was not spatialized, so from there we spatialized it. Why do I mention this, going back a little? Because for the tax calculation we need the frontage value, so one of the ideas was to check whether the frontage value was correct, and it was; we confirmed it and the variation was very small, one or two properties.

What was really interesting, and what I would like to emphasize, was the update of the constructions. We did it as follows: we acquired a WorldView-2 image from 2018 (the slide doesn't show the date, but the work was done in 2018 and finished in 2019). We digitized all the building polygons, generated this construction layer, and associated it, obviously, with the parcel layer we had built earlier in the institutional GIS; so now we have a construction layer. To classify what type of construction each one was, we used two different techniques. First we did a series of surveys walking around the whole city with a tablet, taking photos of the frontages and updating them using a G-Cloud application. Nowadays we use QField for this type of verification, but at the time it was not very stable, so we decided to use G-Cloud, and the service worked quite well. But we had a problem: a lot of people came out and asked us what we were doing, others did not want photographs taken, progress was very slow, and on top of that we were doing this work in January and February, and in Paraguay it is very hot at that time of year. When we were at about 70% of the work we had just bought a drone, so we decided to try taking the frontage photos with the drone, and it turned out much faster and much more practical. We used a DJI drone, a Phantom 4, took photos of all the frontages, georeferenced them using QGIS's ImportPhotos, and loaded the information. It was better because we had access not only to the frontage (as you can see in one of the photos there is a tree in the way, and sometimes there are walls), but we could also clear up doubts about what type of construction there was at the back. As I said at the beginning, it is not enough to know the built area; with the frontage photo we also get the type of construction, because a wooden house, which has one construction value, is not the same as a house like the one in the aerial photograph, with a top-quality roof, a first-class house with a much higher value. It also helps with detecting second and third floors, so with the frontage photo it was much easier to draw and calculate.
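Associating the digitized building footprints with the parcel layer and totalling the built area per parcel, the step that feeds the construction value, can be done with a spatial join. A minimal GeoPandas sketch with hypothetical layer files; the project itself did this inside the QGIS/PostGIS institutional GIS:

import geopandas as gpd

# Hypothetical inputs: parcels with a cadastral code, and digitized building footprints.
parcels = gpd.read_file("parcels.gpkg")        # columns: cadastral_code, geometry
buildings = gpd.read_file("buildings.gpkg")    # columns: construction_type, geometry

# Attach each footprint to the parcel it falls in and total the built area per parcel.
joined = gpd.sjoin(buildings, parcels[["cadastral_code", "geometry"]],
                   how="inner", predicate="within")
joined["built_m2"] = joined.geometry.area      # assumes a projected CRS in metres

built_per_parcel = joined.groupby("cadastral_code")["built_m2"].sum().reset_index()
parcels = parcels.merge(built_per_parcel, on="cadastral_code", how="left")
parcels["built_m2"] = parcels["built_m2"].fillna(0)
print(parcels[["cadastral_code", "built_m2"]].head())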
With this we were able to update all 19 neighborhoods. The one shown here is neighborhood number one, Barrio Oro, which is a very new neighborhood; there was practically no information and almost all of it was vacant lots. That was one of the places where we also had to work from the air, because there were many people, we could not make progress, and we had to explain the work all the time; even though we made a video and there was publicity on the radio, many people did not want it done because they quickly understood that it was meant to collect more taxes.

So that was the construction update, and here is the interesting figure. The municipality of Hohenau had a potential annual income of 914 million guaraníes from the property tax, just in the urban area we worked on, which is about 162 thousand US dollars. After finishing the work this went up to 1,500 million, about 267 thousand dollars: an increase of around 100 thousand dollars, or 64% over the potential collection. That is what can potentially be collected; what is actually collected is usually 45 to 65% of that. I don't have access to information on how collection went this last year; because of the pandemic I think it has dropped quite a bit, but the potential did increase, and now it is up to the municipality to create the public policies to capture that increase and actually collect it.

With the money from the increased property tax collection, the idea is to keep using GIS technology to design different projects. One of the projects designed is a bike path: taking advantage of all the cartography that was generated, an urban planning team designed a bike path project, for which they are now trying to gather the funds. This also supports collection, by investing that increase back into society, because paying taxes is something nobody likes and it is necessary to see a result. So the idea is to keep generating different projects and to encourage citizens to pay their taxes because they will see the result, which in this case would be a bike path, plus improvements to some squares, all done within the geographic information system. For collection they are also now using it to make maps of debtors, and with the debtor maps they can send out notifiers to collect the tax. Something interesting is that in Paraguay 60% of the property tax revenue has to be used for investment in infrastructure; it cannot go to fixed expenses such as salaries or municipal running costs. So all in all it is a tax whose result society can see, although that also depends on the municipality's management. And well, that was the talk, thank you very much.
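As a quick check, the revenue figures quoted are internally consistent (amounts as given in the talk):

# Potential annual property-tax revenue before and after the update, from the talk.
before_gs, after_gs = 914, 1500          # millions of guaraníes
before_usd, after_usd = 162_000, 267_000  # approximate US dollar equivalents

print(f"increase: {(after_gs / before_gs - 1):.0%}")          # about 64 %
print(f"increase in USD terms: {after_usd - before_usd}")     # about 105,000 USD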
Wow, thank you for the presentation. I agree with you, it is work that takes a lot of work. I say that because I currently work in the municipal cadastre of my city as well, in the metropolitan region of Fortaleza, in Brazil, and we also have many, many challenges. Later I will talk with you to get more information about your work, which is very beautiful, very laborious, but very gratifying, and I am sure it will bring many benefits to the city. Also, people don't like it because of the taxes; here it is the same, no difference. We have many questions, and I am going to share them on the screen so you can all look at them together. The first question is about the migration of data from CAD to GIS and its challenges, since many of us know what a challenge it is to work with and migrate CAD data.

Well, okay. There is a principle in cadastre which is the following: all the information that exists has to be used. So even if it is in an analog format, or in this case CAD, it is always best to take advantage of it. As for the challenge, in my view the simplest approach was to understand the database well and to work with the people who built it, to get involved. I did the migration with a team and we were left with several gaps, and honestly I did not understand what was going on, so I stayed for about a week, fifteen days, working with the municipality staff, alongside them, to understand the dynamics and some things that did not come out in the diagnosis. With their help they showed me the gaps, the holes, and we resolved them. So I think the participation of the people who created the data is important; it helps enormously. Also, dividing the work by neighborhoods so you don't get overwhelmed, and starting with the most important information: in this case we started with the block, because the cadastral code was related to the block, and we worked out the code with the people affected, making them part of the solution, constantly asking, "What do you think about this? What's your opinion?" In this work we also saw, for example, that there were some errors they were making, that they were not loading certain data, because they wanted people to bring their property title; to them it was completely logical. I was doing the matching and a percentage of the 5,000 lots was always missing, always a strikingly high percentage, and it turned out there was about 10% of lots that they were deliberately not loading, so that when you came to pay the tax it would not be accepted and you would be sent to the cadastre department, where they would ask you for the document. Of course, it seemed perfectly logical to them, but it was giving me a huge problem, and after two weeks, when this came up, I asked, "What is this?" "This is what we do." It came out in the process; it was not that we were doing the work badly, it was a communication error. For them it worked; it is just that when you move to another technology it is suddenly not so good anymore.

Okay, the next question. I will put the question up so you can follow along. Yes, thank you. The next question is related: was there any resistance from the technical staff, the people who worked with CAD, to the migration to GIS? Yes. Well, in this case there wasn't...
I have worked with national agencies where we did have more resistance, but here what helped, the secret in my opinion, is first of all that CAD is simply not meant for this. We understand that, but how do you make someone understand that it is not the right tool? You start by looking at what they do, for example, finding a lot. Finding a lot in CAD is not that easy. There is also the issue of errors in the toponymy, in everything, the normalization of the data. So what we did, since we already had a lot of experience in this area, was show them all the benefits it had, and most importantly: "People of the municipality, you are not going to do the migration." We bring them the tool already worked out and we accompany them in the implementation. We did an accompaniment of almost a month, because in Paraguay the municipalities work half days, so we had to find an hour here and there, and usually the employees have another job in the afternoon. We squeezed in a training session, on a Saturday I think, but we ended up teaching through day-to-day use: at the beginning I watched what they did, replicated it, made the tutorial, and then showed it to them without much hesitation or wasted time: "look, this thing where we lose a lot of time, here we can do it quickly." When I showed them the debtors map, for example, they were really excited, because they actually saw a result; it often happens in municipalities that the end of the year arrives and there is no money for the thirteenth salary, the bonus, so they go out looking for the people who owe taxes. When that was suddenly solved, it caught their interest, and then there was acceptance. But I think the most important thing is to understand the problem, accompany them, and make them part of the solution, just as in the previous answer.

Okay, the next question is the most voted in our talk: how many people were needed to do the survey, and how long did it take? Right, in this case it was about 5,500 lots, and the field survey took 3 months, with some lots left over, about 1,000 missing. Those 1,000 I did alone, in 15 days, with the drone. The drone was the most successful solution. The other way it was 3 months going on 4, more or less, with a group of 3 to 4 people; it fluctuated a bit, but on average 3 people working.

Okay, for now this is the last question, let's see if more come in: have you developed any tool to improve the registration processes in QGIS? What we used was the normalization of forms. We normalized the forms and then, when working with QGIS, that is, in Postgres, we also normalized all the data entry, and with that we solved it. Within the municipality that was what served us more than anything. It was not necessary to do anything extraordinary or to use plugins, nothing like that: just QGIS, with forms that are organized, structured and thought out, and good documentation of the databases. That, and laying down the rules of the game: "Well, gentlemen, from now on we do it this way," and creating a document that says "this field is entered in such and such a way." What usually happens in this type of project is that you don't have time to document properly, because the work itself takes so long, and that is something that always gets left behind.
We did the basics; we did not do everything we wanted to do, but enough to maintain quality, with these editing controls, without anything extraordinary, with what was there. Fantastic, because it is very good to see a project where you used the tools you already have, with good, well-thought-out use and very good project development. This reduces the cost for the municipality, and you get a lot of good data for the city, for the municipality. So it is fantastic to see your work. The people here in the chat are saying "good work", and I say the same. We have four more minutes: Atahualpa, would you like to share something, or invite people to learn a bit more about your work?

Well, I can tell you something that I think is interesting. For anyone who is going to do something similar, the first thing to keep in mind is to get involved in the solution and to involve everyone else in it. Because there is resistance to changing software, especially in the public sector: people in public service tend to be more traditional, they are looking for stability, so when a person does something one way, changing is hard. The interesting thing is to always make them participants, so that they really feel they are part of the solution and that in the long run it will be better, and also to understand why changing is a problem for them and to see how to minimize that problem: "This thing you told me wasn't working, we already fixed it, and now we do it this way." So get involved, a lot, and work as a team; that matters because we do external consulting work, so it is an important point. The other thing, in the case of taxes, is to start looking for the needs that exist within the municipality. For example, talking with the mayor, one of the things we did recently was about the neighborhoods and the neighborhood commissions. We made a tax payment map for when the neighborhood calls the mayor: the neighbors' association calls and says, "Look, we need you to fix the square." Fine, great. He brings his map and a report, where he sees that the president of the neighbors' commission has not paid the property tax for four years but is demanding that the square be fixed. So at the negotiating table we had a map by neighborhood: "Your neighborhood pays 25% of its taxes. That means that, to do this, I would be taking resources away from this other neighborhood that pays 60%, from the good payers, and investing it in your square. No, no, that doesn't seem right. So why don't we set a goal: you get up to 45%, which is the amount of money we need to fix this, and I will fix it for you, but first pay your share." The public administrators loved that, because they had never had a tool that really allowed them to do this. And also being able to talk to the individual: "You are complaining to me about this pothole, but you owe me four years. For four years you haven't paid; how do you expect things to get done when you are not doing your part?" And then the attitude changed. That was something really interesting, and I think it is worth using this type of tool; it was basic GIS, but applied to reality.
It was quite simple. Another interesting thing is printing the maps and bringing them on paper, along with the reports, because of course, in this case my mayor is over 50 years old, and having to search around on a computer and interpret it is not practical. So we also taught the municipal employees to produce those kinds of basic maps, but with nice quality, so they would print them and arrive with much more authority. In a meeting with these commissions the mayor came with many more resources and could sort things out on his own. We would like to have a web map for this in the future, but for the moment we are polishing our products before moving to something like that. Very good, very good. Atahualpa, our time is unfortunately up. I shared your email address in the chat so people can get in touch for more information. Thank you for your presentation and your time; your talk at FOSS4G is very interesting. Thank you for the opportunity, and if anything comes up, I will be at the exhibition. Okay, thanks. Bye. Thank you.
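As a purely illustrative sketch of the "normalized forms and normalized data entry" approach described in the answers above, with hypothetical table and column names rather than the Hohenau schema, allowed values can be enforced at the PostgreSQL/PostGIS level so the QGIS forms can only record consistent entries:

import psycopg2

# Hypothetical connection settings for the municipal PostGIS database.
conn = psycopg2.connect("dbname=cadastre user=gis_admin")
cur = conn.cursor()

# A lookup table of allowed construction types plus a foreign key keeps every
# department entering the same normalized values through the QGIS forms.
cur.execute("""
    CREATE TABLE IF NOT EXISTS construction_type (
        code text PRIMARY KEY,          -- e.g. 'wood', 'standard', 'first_class'
        description text NOT NULL
    );
    ALTER TABLE buildings
        ADD COLUMN IF NOT EXISTS construction_type text
        REFERENCES construction_type (code);
    ALTER TABLE buildings
        ADD CONSTRAINT built_area_positive CHECK (built_m2 > 0);
""")
conn.commit()
cur.close()
conn.close()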
This work is a success story in updating a municipal cadastre using free tools, carried out in the district of Hohenau, department of Itapúa, Paraguay. It was possible to migrate the cadastral records to an institutional Geographic Information System, through which 3 departments of the municipality were integrated. The work covered 19 neighborhoods, 524 blocks and 5,798 parcels. 3,553 construction records were updated and 2,009 new building blocks were added, an increase of 57%, which translated into a 64% increase in the potential property tax revenue.
10.5446/57257 (DOI)
Okay, the second talk today is "CanoClass: creation of an open framework for tree canopy monitoring" with Owen Smith. Thanks, Owen, good morning. Hello, good morning. Owen is a student of the Institute for Environmental and Spatial Analysis at the University of North Georgia. Okay, Owen, the stage is yours. Are you sharing your presentation? Good luck and good presentation. Thank you.

So this work was initially undertaken and completed roughly two years ago while I was at the Institute for Environmental and Spatial Analysis; I recently graduated from there and I'm now at North Carolina State pursuing my PhD in geospatial analytics. So with that in mind, I've learned a lot since then. This work came about with the completion of a contract for the Georgia Forestry Commission, in which we were using proprietary software to create tree canopy products for the entire state of Georgia using NAIP imagery. They wanted to know a lot about deforestation metrics in the state, especially as Georgia, within the United States, is growing rapidly; urban sprawl is a huge issue, and naturally with that comes a lot of deforestation and the environmental effects it causes. So we go into a little bit here about deforestation and what it can do at small scales and even at large scales. They wanted to be able to monitor it; however, we didn't have a ton of resources. Ideally we would have had some sort of access to cloud compute to set up real-time monitoring systems. On top of that, we were using Textron's Feature Analyst, whose licenses are expensive. So Dr. Cho and I at the University of North Georgia were thinking: well, can we do this with open source software? And so we did. As for previous studies: as I mentioned, the Georgia Forestry Commission work was undertaken by us, and others have used PyTorch, Keras, TensorFlow or the Orfeo Toolbox and the like.

The imagery used for this was from the National Agriculture Imagery Program, otherwise known as NAIP imagery. It's collected by the USDA every three or four years, give or take, and traditionally it has been at one meter resolution; however, after 2019 it's now at 0.6 meter resolution. They offer it in two formats: a three-band product, just standard RGB, and additionally a four-band product, which adds the near-infrared band. It has really exceptional preprocessing quality: it's flown from airplanes, hence the high resolution, and they remove cloud cover for us, so there's often no need for a cloud mask, which helps with processing because it's an extra step that doesn't need to be done.

The Python libraries used in this were GDAL, obviously, as I'm sure everybody attending this conference knows GDAL; NumPy, the fundamental library for scientific computing in Python, along with SciPy, and GDAL utilizes NumPy really heavily for its abstraction into Python; and scikit-learn, which Charlotte gave a great overview of, and which is also implemented on top of NumPy. Scikit-learn is pretty much the go-to for Python machine learning; again, it's built on top of NumPy and SciPy, the premier Python scientific computing libraries. But why scikit? Because I mentioned other libraries such as PyTorch and Keras earlier.
And the reason for that is that a lot of those packages use artificial neural networks pretty extensively, and they additionally run their processing on GPUs. At the time this was conducted they had really limited AMD GPU support, and I didn't have access to any other type of GPU, so I was pretty limited: ultimately we were CPU-confined. We wanted something that was able to parallelize across CPUs, and scikit-learn does that really well, in particular with its random forest classifier. And since scikit-learn is built on top of NumPy and GDAL uses NumPy extensively, they integrate really well together for any sort of remote sensing classification that's needed.

So, on to the algorithm. We decided to use random forest, and Charlie gave a really great overview of that as well. The reasons for choosing it over other approaches such as neural networks or support vector machines were that random forest has been found to be very useful in land cover classification, it has a good balance of time and accuracy, it's a lighter load computationally than, say, the AdaBoost algorithm, and again it can be parallelized across CPUs, which is incredibly useful, especially as (a) we didn't have access to any sort of GPU that could be utilized for this in the Python environment, and (b) it becomes incredibly useful in a high-performance computing environment, where you're primarily working across CPU cores. One consideration, though, is that it can be a memory hog, especially as a matrix of number of samples by number of trees is stored in memory; this also matters because we're working with one meter and 0.6 meter resolution imagery. We also explored the Extra Trees classifier. Like random forest, it's a multi-tree predictor built using an ensemble of decision trees, but this classifier splits the nodes of each tree completely at random and uses the entire sample, not just a bootstrap, to grow the trees. This means that each tree is independent of, or uncorrelated with, the rest, whereas with random forest you can often get some correlated trees. We went with this because it has higher bias and lower variance than standard random forest and is suited for noisy or highly correlated datasets, and the noise in particular was a big consideration due to the spatial resolution of the data we were working with.

With that, we had to decide what we were going to classify on. As I mentioned, the NAIP imagery comes in two products: the standard RGB product and the RGB plus NIR product. We wanted to test both; we wanted to see whether using just pure RGB was as viable as an NIR-based index. So we chose the Visible Atmospherically Resistant Index (VARI) and the Atmospherically Resistant Vegetation Index (ARVI). For those who don't know, the near-infrared band is useful for vegetation remote sensing as it is strongly reflected by photosynthetically active vegetation and much less so by photosynthetically inactive vegetation, and it is absorbed by bodies of water, which helps separate vegetation from water and impervious surfaces; quite useful, especially in Georgia, where we do have growing urban sprawl, and it becomes important to be able to separate that. The Visible Atmospherically Resistant Index uses only the visible light bands, which essentially makes it more accessible and more flexible in areas that don't have an NIR product available at high resolution.
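As a minimal sketch of the pipeline pieces named so far, GDAL reading a NAIP tile into NumPy and scikit-learn's two ensemble classifiers parallelized over CPU cores, and not the CanoClass code itself (the file name, band handling and toy labels are assumptions for illustration):

import numpy as np
from osgeo import gdal
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

gdal.UseExceptions()

# Hypothetical 4-band NAIP tile (band order R, G, B, NIR).
ds = gdal.Open("naip_tile.tif")
red, green, blue, nir = ds.ReadAsArray().astype("float32")

# Stack per-pixel features into an (n_pixels, n_features) array.
X = np.stack([red, green, blue, nir], axis=-1).reshape(-1, 4)

# Toy labels stand in for real canopy / non-canopy training samples.
y = (nir.reshape(-1) > nir.mean()).astype(np.uint8)

# Both classifiers parallelize across CPU cores via n_jobs, the key constraint described.
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, oob_score=True, random_state=0)
et = ExtraTreesClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(X, y)
et.fit(X, y)

canopy_map = rf.predict(X).reshape(red.shape)   # back to raster shape
print("RF out-of-bag score:", rf.oob_score_)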
The blue band is incorporated into both of the indices I'm going to show, and it acts as a kind of proxy, one could say, for the removal of atmospheric effects without any higher-level processing. Here is the formula, and we also normalized the values between one and negative one for classification. These are just some examples of the VARI, the normalized VARI, and the ARVI, which uses the NIR. You can see that the non-normalized Visible Atmospherically Resistant Index doesn't do a great job; the black body you can see in the middle is a waterway, and I believe this is a wetland area, so a tricky area to classify as it is. The normalized VARI is a little bit better, but with the ARVI it is very clear that it becomes better at separating those values without any sort of ensemble method or data fusion. Then on to the Atmospherically Resistant Vegetation Index: as I mentioned, it uses the blue band to simulate the removal of atmospheric effects, and it works very well; the literature is clear about that. As we show here (the previous slide was a subset of this current image), you can see that even throughout it, that water body through the wetland is clearly separated. You can see the different farmland, and even some spots within the wetland where there is forest that is potentially dying, which is a whole other issue.

We also added an image processing step for the output: just simple local-statistics processing, a 5 by 5 median filter, implemented with SciPy and really fast, so very negligible computational overhead is added. The overall workflow can be seen here. I've spoken a lot about NAIP imagery, and initially this work was designed specifically to enable classification of NAIP imagery for the entire state, but I wanted to make it as modular as possible so it could be used with any sort of remote sensing data. On the bottom half here, after preprocessing, it can be used on individual files; on the top half you see preprocessing, classification, and then the post-processing functions, and there is also a whole suite of functions and methods to enable efficient processing of NAIP imagery. From start to finish, you can input your configuration file parameters to hopefully make it more reproducible and include testing. Within this as well, and I didn't mention this before, we utilized scikit-learn's hyperparameter tuning, which uses a grid search; that is offered packaged for this specific use case, to try to find the most efficient and most accurate parameters for the random forest classification.

So then again, I've talked about Georgia. For those who don't know, this is what Georgia looks like; it's where I was born and raised and where my family is from, so it's on my mind. But we used it as our case study primarily because we were already working within it and we had an existing dataset that we had created using different software and had already validated; we already had all the raw data, so it was natural to use it. We chose a couple of physiographic districts within the state. And again, I mentioned why Georgia is important: it has very high biodiversity.
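The two indices referenced are standard formulations: VARI = (G - R) / (G + R - B) and ARVI = (NIR - (2R - B)) / (NIR + (2R - B)). A sketch of computing them with NumPy and applying the 5 by 5 median smoothing with SciPy; synthetic arrays stand in for a NAIP tile, and the small epsilon guarding division by zero is my addition, not necessarily how the original code handles it:

import numpy as np
from scipy import ndimage

def vari(red, green, blue):
    """Visible Atmospherically Resistant Index: (G - R) / (G + R - B)."""
    return (green - red) / (green + red - blue + 1e-9)

def arvi(nir, red, blue):
    """Atmospherically Resistant Vegetation Index: (NIR - (2R - B)) / (NIR + (2R - B))."""
    rb = 2.0 * red - blue
    return (nir - rb) / (nir + rb + 1e-9)

def normalize(index):
    """Rescale an index into [-1, 1] before classification."""
    return 2.0 * (index - index.min()) / (index.max() - index.min()) - 1.0

# Synthetic bands stand in for a real tile here.
rng = np.random.default_rng(1)
red, green, blue, nir = rng.random((4, 256, 256)).astype("float32")

v = normalize(vari(red, green, blue))
a = normalize(arvi(nir, red, blue))

# Post-classification smoothing: a 5x5 median filter over the class raster.
classes = (a > 0.2).astype(np.uint8)            # toy stand-in for the classifier output
smoothed = ndimage.median_filter(classes, size=5)
print(v.shape, float(a.min()), float(a.max()), smoothed.dtype)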
So then, again, I've talked about Georgia. For those who don't know, this is what Georgia looks like. It's where I was born and raised and where my family is from, so it's on my mind. But we used it as our case study primarily because we were already working within it and we had an existing dataset that we had created using different software and had already validated; we already had all the raw data, so it was natural to use it. We chose a couple of physiographic districts within the state. And again, I mentioned why Georgia is important: it has very high biodiversity. You can go from the coast to the southern Appalachians, with farmland in between, to one of the most populous metropolitan areas in the country in Atlanta. So it's a really varied area, and challenging to create accurate datasets for. We then ran the workflow as I have described, and a big thing for us was the time it took compared to the existing product. That was relatively easy to measure because, again, we had created that other dataset in additional proprietary software. You can see on the graphs here that utilizing the pipeline — from GDAL for all the preprocessing, through scikit-learn, through the post-processing and smoothing — we have a very clear time advantage. It's important to note that with Feature Analyst it's not necessarily an apples-to-apples comparison: Feature Analyst uses an ensemble method, so of course there are more processing steps within that. But again, if we can get comparable results in less time, to me that is worth it. Here are the times as well; these are in minutes, which is important to note. You can see, based on area, the times for each step: the index creation is quite quick, and then we ran both Extra Trees and the base random forest classifier for each and compared the times to Feature Analyst. Then, beyond the out-of-bag metrics that scikit-learn provides for uncertainty and accuracy analysis, we also wanted to quantify our results against the product we had already created with the other software. For that, we implemented the moving window comparison coefficient in Python. It's basically a spatially aware comparison index for categorical raster datasets, and since we are comparing categorical rasters, it's a prime example of when to use it. It's simple; you can go deeper into the literature on it. The index was introduced by Robert Costanza in 1989, and he has a great paper about it. Here are some examples from a selection of 20 tiles used for comparison. The mean similarity coefficient score for Extra Trees was 87.56, and for random forest it was 87.62. On the top right you can see, from beginning to end, the raw data, the Extra Trees outputs, and the Feature Analyst outputs. In that second column, one thing we noticed was that our pipeline performed better in areas of dense forest than the other dataset had, without any post-processing, which was really important to us as well. So, just some considerations: this was conducted about two years ago, while I was still an undergraduate, so I had limited resources and time, and the project was a learning opportunity for me as well. With the resources and technical skills I've learned since then, I think the product could be more robust than it currently is, and I would certainly try my hardest to make it so. So, any questions? So, Owen, great presentation. We have some questions here that I've put on the screen. The first is: are only four bands sufficient in your case? Yeah, so that's a good question; that was discussed quite a bit, because that is a trade-off with NAIP imagery, right?
Because we could use moderate-resolution imagery, say Landsat, or HLS — although at the time the Harmonized Landsat Sentinel dataset wasn't quite as robust as it is currently — but we wanted that really fine-scale spatial resolution to classify with, so it's a trade-off. I believe the four bands are sufficient, because with vegetation remote sensing you're mostly going to be using the NIR band anyway; those are the main wavelengths you'll be looking at and analyzing. If we wanted to do different kinds of feature extraction — say we wanted to be better at removing water — we could use the SWIR bands that more moderate-resolution imagery would have. Since then, though, I think this would be a great use case for, say, Planet data, which offers probably better wavelengths and — what I'm looking for — more up-to-date data, because it's almost near-daily data for that monitoring paradigm. But yes, four bands are sufficient in this case. Okay, we have another question: is it possible to extend the library for use in another biome? Yes, that's a good question. I was focused a lot on the state of Georgia, because that's where we had the NAIP data — approximately 39,000 data tiles — so there wasn't any robust testing for other biomes. Georgia itself has quite a few biomes: there was testing in emergent wetland areas along the shores, within the inner city, in the foothills of the Appalachian Mountains, and in the mountains themselves. But there was no testing in, say, areas that would have snow, and that's also a symptom of the nature of NAIP imagery — it's taken during the growing seasons — so we weren't really able to test against snow masses or anything like that. So that's a great question and a consideration for the future. Okay. If we don't have another question, I'll say it was a good presentation, and people in the chat say they love your presentation, so it was a very good presentation. We have a few minutes if you want to say something, or invite people to learn more about your work and share your contacts. Yeah, so hopefully there's a lot more work coming from me. I just started my PhD this past month or so at the University of North Carolina — no, North Carolina State; don't tell North Carolina State I just said the University of North Carolina. But you can find me, you can email me, and I'm on GitHub — my username is Ossi Smith — and I'm fairly active in the GRASS community as well, so I'm looking forward to contributions there. Okay, thanks Owen. Great presentation; see you soon in the social gathering, I think. We'll take a little five-minute break before the next presentation, so you can drink some water or a coffee, and then you'll be back here for my presentation. So thank you, Owen.
Forested areas play an integral role in the maintenance of both local and global environments. They are the bulk of Earth’s carbon sequestration for mitigating anthropogenic processes, provide natural erosion and runoff control for flooding events, which have been growing in frequency because of climate change, and can offer respite from urban heat islands. The effective creation of canopy data is of utmost importance for analyzing the aforementioned processes, in addition to forest patterns such as disturbance and mortality and the societal and economic benefits forests can provide. Because of the importance of forests and the cycles they are a part of, it is imperative that systems are created that enable the effective monitoring of forest canopy. In particular, canopy classification using remotely sensed data plays an essential role in monitoring tree canopy on a large scale. As remote sensing technologies advance, the quality and resolution of satellite imagery have significantly improved. Oftentimes, leveraging high-resolution imagery such as National Agriculture Imagery Program (NAIP) imagery requires proprietary software. However, the lack of insight into the inner workings of such software and the inability to modify its code lead many researchers towards open-source solutions. In this research, we introduce CanoClass, an open-source cross-platform canopy classification system written in Python. CanoClass utilizes machine-learning techniques, including the Random Forest and Extra Trees algorithms provided by scikit-learn, to classify canopy using remote sensing imagery. One similar Python module that is based on scikit-learn is DetecTree, but it does not utilize near-infrared (NIR) band imagery. Subsequently, to the best of the authors' knowledge, there are no dedicated tree canopy classification libraries that use scikit-learn in conjunction with infrared data.
10.5446/57180 (DOI)
Okay, let's start. Let me just share my screen first. Good morning and welcome to the academic stage. This morning we have a great lineup of presentations for you, with talks on cropland changes related to violent conflict, for example, and on landslide monitoring in Vietnam using Sentinel-1 imagery. My name is Carol and I will be the session leader for this academic session. Before I introduce our first speakers, please note that five minutes at the end of each talk are allocated for questions and answers, so please feel free to put your questions in the Q&A chat and I will monitor them as the talks progress. The first talk today is titled Assessing Cropland Changes from Violent Conflict in Central Mali with Sentinel-2 and Google Earth Engine, by Alex and Laura. Alex is a cartographer and data specialist based in Senegal, or until recently based in Senegal. For the better part of a decade he has used FOSS to encourage open data solutions for understanding food insecurity in West Africa, with a focus on the needs of livestock herding communities. Most of his work has been on developing tools and methods to track the changing movements of livestock herds as they respond to climate change. The second speaker, Laura, discovered Earth observation when she joined the European Space Agency in Rome, at the time the fleet of Sentinels was being launched. She later joined the World Food Programme, where her goal is to apply remote sensing to the humanitarian sector, and since 2019 she has been based in West Africa to further explore linkages between conflict and land cover changes. So without further ado, I will add our speakers, Alex and Laura, to take us through the first talk. Thank you. Thank you. So yeah, I'll start. I'll share our screen here. I think everybody can see this. So yeah, thank you. Just to be clear, Laura, Carol, is this visible? Yes. Excellent. So thank you all for coming. As Carol mentioned, my name is Alex and I'm presenting with Laura, and we're talking about how to use Sentinel-2 to show cropland abandonment from violent conflict in central Mali. We'll go through a little bit of the background behind the analysis and why we did it, what the actual method includes, and a little information on how this is reproducible, how you could do this yourself, and where we're going next. To give you some background and context, we're talking about a study area in Mali, specifically central Mali, the region of Mopti. One thing that's important to know about Mali is that it gets a single rainy season, with a harvest in September or October. This matters because every year the food security analyses are performed in September and October, around the harvest. They are usually done by looking at agricultural production data and basically asking whether enough food has been produced to meet the population's basic needs. Unfortunately, since 2011 there has been an ongoing conflict and humanitarian emergency. A lot of armed conflict has been happening, and by 2020 about 760,000 people were food insecure. So there is this ongoing conflict, which is creating a lot of food insecurity. And note that the vast majority of the population depends on either subsistence agriculture or livestock herding, which basically means this harvest period determines a lot.
And in order to figure out what the food security situation is, and to determine whether there is going to be food assistance and how it is going to be distributed, you typically depend on agricultural surveys. But if you have ongoing armed conflict, in-person surveys become almost impossible to do. So, long story short, there is a huge data gap in understanding food insecurity in Mali; often, at the time of the harvest, we simply don't know what the situation is. What this means is there is a lot of cropland abandonment. During armed conflict you'll have villages where people might be fleeing, or where fields cannot be accessed because it is simply too unsafe. And this is something that can actually be seen from space. We have an example here of Sentinel-2 imagery. On the left you have an image from August 2017, which was a pretty regular year for cropland, and on the right you have the exact same area in 2019, two years into an ongoing conflict in the area. August is a prime growing period for crops, and you can see a huge gap between the natural vegetation and the area that should be cropped: it's bare soil. That is a pretty strong indication of cropland abandonment. Especially in the lower left part of the screen, you can see where these fields aren't being tilled. This is important to note for a couple of reasons. One is the obvious food security implication: if the vast majority of your population depends on subsistence agriculture and hundreds of villages simply aren't growing food right now, that has huge implications for hunger. But it also has pretty strong implications for where the conflict is happening. If you don't have a lot of ground data on conflict, but you can see where cropland and fields are being abandoned, that gives you a pretty good dataset of where the conflict is actually happening, and in the absence of ground data that can be important. So what did we do? We developed an analysis of interannual cropland change and cropland abandonment. Well, Laura really did most of the work; Laura developed it, really. The method is basically a toolkit that identified 493 villages with significant cropland losses just for this one analysis, and it has since been applied to a lot of other areas of the Sahel. What's really unique about it is that it can very easily create a data visualization, which we're going to show, of which areas have been abandoned and which still have ongoing cropping. The target audience is humanitarian actors, which means it's a toolkit that needs a quick turnaround; it is being implemented in an emergency setting. The method is the three-period time scan, and the bulk of it is that it creates a time series composite from an NDVI image stack: it creates multiple NDVI images from throughout the growing season and puts them in a single image stack that allows for really easy visual interpretation. So I'm going to hand it over to Laura, who will talk about the methodology in greater detail. Laura, if you want to take it away. Okay, cool. Thanks Alex, and hi everyone. So I'm going to go a bit more into the methodology, starting with the data and the tools that were used.
The analysis is based on Sentinel-2 imagery, because it combines the best characteristics to really depict agriculture in Mali, and actually in the Sahel more generally, but for this study we focused on Mali. The fact that it has a spatial resolution of 10 meters is really vital, because in this area, especially in rural areas, most of the agricultural fields are not mechanized, which means they are really small; looking at them with Landsat imagery or some other lower-resolution source would not help. So Sentinel-2 was really essential, and because archive imagery is available since 2016, we could go back in time before the start of the security crisis, which in this area was mostly 2018 or 2019, depending on the location. So that is Sentinel-2 for the data. Regarding the processing environment, we used Google Earth Engine, which I probably don't need to describe in much detail; it's now a very commonly used and well-known tool. You can see a screenshot of the script, which is, at the end of the day, very simple and very short; we can share it as a link with whoever is interested. I'm going to explain a bit more what it does, but I just want to stress how important it was for us to develop this methodology around freely accessible tools, because we work with local partners and in-country governments, and we can't really propose a methodology based on expensive tools if we want it to be used. Now we can look at what the script actually does and what it looks like, with the three-period time scan. Alex, maybe you can put up the next slide. Yes, thanks. Alex mentioned the three-period time scan, so what is it? It's this very colorful image in the middle. It's derived from approximately 20 Sentinel-2 images available between the 15th of June, which is, theoretically, the beginning of the growing season in the area, and the 15th of October, which is more or less the end. Why is it useful? Compare it to a single-date image, an example of which we have on the left: just an image taken from Google Earth where we can guess, but not so easily, that there are two villages and some agricultural fields. It's quite hard to say whether they are actually cultivated or not, and in any case it's an image dated April 2018. Now we want to compare between years, and the three-period time scan, the colorful composite image, really singles out cropland: you can see the cropland in darker colors compared to the natural vegetation, which is much lighter, like cyan, and the villages and built-up areas in black. You can even see an image taken from the ground, where we got to go at the end of the 2019 growing season, in one of those villages, and you can see very clearly the delineation between the agricultural fields on the left and the natural vegetation that has grown over what used to be cultivated fields. Right, so this is what it is and what it looks like; now for what's behind it, what it actually is, on the next slide.
We tried to put together a graph that explains it: the three-period time scan is basically a red-green-blue composite of NDVI values along the agricultural season, so it really shows the evolution of vegetation over time. That's why it's useful: we are trying to look at agricultural fields, and they have a different vegetation evolution over time compared to natural vegetation. The red band corresponds to the maximum value of NDVI at the beginning of the agricultural season — the graph here shows the rainfall and the NDVI over the area of interest, the NDVI generally speaking for the whole area. Green is for the middle period, and blue is for the end of the period, which is usually when the peak of vegetation is reached. On the next slide you can see why: it's a very basic idea, very simple actually, but it really is useful, and it reveals patterns that are quite easy to interpret, because the different land cover types are associated with specific colors. For agriculture, the land is being prepared at the beginning of the season, so it is almost barren; then the crop grows to a peak and is harvested. Forests will always have a very high vegetation index, and natural vegetation really depends, especially in the Sahelian band, which is quite specific; there is not one single natural vegetation signature, but we put this up to give an idea. So I hope that gives a good idea of how the three-period time scan is obtained. Now we can look at an example, because what we want to find is the changes. If we go to the next slide, we see directly in Google Earth Engine how it looks. This is 2016, where you see very clearly all the croplands, and then the difference when you look at the three-period time scan for 2019. Now it's quite clear what it reveals: basically, in 2019 people stopped cultivating far from the villages because it got too risky, so we can clearly see the massive abandonment of land. And how was it used? This is the product; we understand it, but working in West Africa with humanitarian actors, we need to translate it into something that's going to be useful and that's going to arrive on decision makers' desks. On the next slide, Alex, you can see how this was translated into one clear product that gives another view of the cropland change in the Mopti region. This is for 2019, but we also did it for the following years, compared against pre-conflict years. In red you have the severe decrease, meaning more than half of the agricultural surface area of a locality was abandoned; in orange the medium decrease, and in yellow the slight decrease. So it really reveals areas of high vulnerability, villages where something clearly happened. We tried to understand why, and we also overlaid the ACLED conflict data, which you can see as the brown circles. It clearly shows that there is a link: having violent events doesn't necessarily mean you have cropland abandonment, but where you have cropland abandonment, a decrease in agricultural land, then you have violence ongoing in that area in that year. So this has been useful, because there is a lack of understanding of what's going on in this area.
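For readers who want to experiment with the idea, this is a minimal sketch of that kind of composite using the Earth Engine Python API. It is not the authors' script (which they share on request); the collection ID, the sub-period boundaries and the area of interest are placeholders, and cloud masking is omitted for brevity:

```python
# Minimal sketch of the 3-Period TimeScan idea with the Earth Engine Python
# API -- not the authors' script. Dates, the collection ID, and the area of
# interest are placeholders; cloud masking is left out for brevity.
import ee
ee.Initialize()

aoi = ee.Geometry.Point(-4.2, 14.5).buffer(20000)  # hypothetical area in Mopti

def max_ndvi(start, end):
    """Maximum Sentinel-2 NDVI over the given period, clipped to the AOI."""
    return (ee.ImageCollection('COPERNICUS/S2_SR')
            .filterBounds(aoi)
            .filterDate(start, end)
            .map(lambda img: img.normalizedDifference(['B8', 'B4']))
            .max())

# Season 15 June - 15 October split into three illustrative sub-periods.
timescan = ee.Image.cat([
    max_ndvi('2019-06-15', '2019-07-25'),   # red: start of season
    max_ndvi('2019-07-25', '2019-09-05'),   # green: middle of season
    max_ndvi('2019-09-05', '2019-10-15'),   # blue: end of season / vegetation peak
]).rename(['start', 'mid', 'end'])

# In the Code Editor one would then do Map.addLayer(timescan, {'min': 0, 'max': 1}).
```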
That lack of understanding makes it hard for local institutions, and also humanitarian organizations, to make decisions. So on the next slide I've tried to put together the two main operational uses of these products. The first is that they are integrated into the national food security analysis, which occurs twice a year and is essential to the organizations in country. This analysis is used to inform decisions, especially in hard-to-reach areas where no field data exists and no field survey could be conducted. The products are also used by the humanitarian actors themselves to better target assistance when organizing and planning their response. Right, so this was a very quick overview. I think that's it for me, and I'm leaving Alex to conclude. Thanks. Thanks. So basically, to end, where are we going next? The first thing is scaling up. This method is not limited to central Mali, so right now it is being tried in other contexts and situations across West Africa. But perhaps most interesting to you, the audience, is the fact that you can access it. The Google Earth Engine code is on GitHub — we just have to clean it up a little before we publish it — but you can contact either Laura or myself and we can share it with you. We've also translated it for PyQGIS, so you can run the analysis directly from QGIS without having to go into Google Earth Engine. We have experimented with machine learning. Whenever we present this and mention that we use visual interpretation, the common question is: why don't you automate it? We are experimenting with that, but right now machine learning is not appropriate for operational use. There is so much heterogeneity in the Sahel, with different spectral signatures for cropland, that it requires a lot of cleaning, and you simply cannot use the same algorithm year in, year out; you would have to do a lot of tweaking, and we simply don't have the time for that. We're looking at a turnaround of a couple of weeks, often less, which humanitarian actors need immediately after the harvest. We're also working on developing synergies with regional initiatives and early warning systems, to make this method more accessible, more usable, and more in sync with what's being done in West Africa right now on early detection of food crises. And the last point is capacity building. It's not enough to make this toolkit open source; we really try to do trainings, to get people to use it, to make the transition as easy as possible, and to bridge the gap between the humanitarian, or operational, and technical communities. So thank you very much. You can contact either of us by email or Twitter — our emails are here, and at the bottom there's a citation for the paper. If you want to use it and check out the data, please don't hesitate to contact either of us for the code or for further clarification, or if you just want to talk about this; we're more than happy to stay in touch. So thank you, and I guess we can open it up to questions if there are any. Thank you, Alex and Laura. That was a very interesting presentation. It's different when you read the abstract and then see the full-blown work; it actually boggles my mind. The stream is a little delayed, but we do have a couple of questions.
The first one is: are you also thinking of making an Earth Engine app to share results with non-experts? Yeah, that's a good question. We did make an app, but it was mostly to show the different time scans in the trainings that we conduct; it's always super interesting to look at the different years in the time scan. But to actually conduct the analysis, it's good to have access to the processing environment, because from this you can directly create the shapefiles of which localities are affected. So yeah, it's something that can be done. I don't know if you want to add anything, Alex? No — I mean, an app would be a good idea, but also the Google Earth Engine code is already pre-tuned, so all you have to do is press run: you put in your point on the top line, you press run, and it generates the result pretty quickly. You don't actually need to write any code to use it. I see a next question: how can I get a GeoTIFF of the derived products? You can download it from Google Earth Engine; there is an export option, and I think, Laura, in the last version of the code we have an export commented out. I think so, yeah. It's very easy to add, and we've tried this because it was interesting to further explore the three-period time scan in QGIS or other software, to test it on different levels. But as Alex explained, when we need to actually produce the map we do everything in Google Earth Engine. This is open, though — I'm sure there are many avenues for improvement and ideas that can come out of it, and that's also why we're giving this talk: we're super interested in knowing what people could do with it. Any thoughts? I see a question — oh, sorry, Carol — this is the question on biomass. I don't think this product is necessarily the best for detecting biomass, simply because it works really well at showing the difference between cropland and natural vegetation, but I wouldn't use it for quantifying dry matter. It's a very visual product. I think there are other biomass products out there, like the dry matter productivity produced by VITO from Sentinel-3, that are more appropriate. Laura, I don't know if you... Yeah, no, definitely. The methodology we propose is not for quantitative results; you saw the map with the red and yellow dots, and that is what it's used for at the moment. The product is very simple and very visual, and we make this very visual product at the end of the day rather than quantified results, because we're quite limited in time and because the methodology was developed around operational needs. It would always be great to have precisely quantified products, but in this case it's already a great move from nothing towards something usable; this is halfway, and it comes pretty quickly, in a few weeks. So yeah, this product is pretty qualitative.
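Coming back to the earlier question about getting a GeoTIFF out: this is a hedged sketch of what such an export could look like from the Earth Engine Python API, not the authors' commented-out export. The description, folder-free Drive export and scale are placeholders, and `timescan`/`aoi` refer to the earlier sketch:

```python
# Hedged sketch of exporting a composite like the TimeScan as a GeoTIFF from
# the Earth Engine Python API; the description, scale and maxPixels values
# are placeholders, and timescan/aoi come from the previous sketch.
import ee

task = ee.batch.Export.image.toDrive(
    image=timescan,                    # the 3-band composite built earlier
    description='timescan_2019_mopti',
    region=aoi,
    scale=10,                          # Sentinel-2 native resolution
    fileFormat='GeoTIFF',
    maxPixels=1e9,
)
task.start()                           # then monitor progress with task.status()
```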
Okay, maybe the last question before we wrap up and move to the next presentation: what other countries have you assessed, or are you planning to assess? Yeah, so in 2020, and it will be the case this year as well, we looked at seven countries in West and Central Africa. I can list them from memory: Mali, Burkina Faso, Niger, Nigeria, Cameroon, Central African Republic and Chad. But they're all very different, which makes it super interesting, because if you look at Central African Republic, which is not in the Sahelian band, or Cameroon either, you have to adjust the parameters. You can't use the same time periods; each country, and even areas within a country, might need the parameters at the beginning of the script adjusted. Yeah, that's great. Thank you so much for your presentation; I think a number of people have enjoyed it, and I see some really great comments. If anybody has any more questions, they can definitely find Laura and Alex in the social gallery. Thank you.
In Central Mali, climate change, food insecurity and growing conflicts over land use necessitate being able to localize areas of food production (Benjaminsen, 2018). The region’s heavy reliance on subsistence agricultural livelihoods means that humanitarian actors must quickly assess changes in cropland to plan the distribution of food aid. Typically, in the absence of extensive field data, publicly available land cover datasets are used to identify cropland cover. While the number of such datasets (e.g. ESA-CCI or GlobeLand30) has increased over the years, they are often ill-adjusted to the Sahelian context. Assessments of the cropland identified by the most widely used land cover datasets found that none were able to meet the 75% accuracy threshold in Sahelian West Africa (Samasse et al., 2019). While countries like Mali are among those most critically in need of cropland mapping, the current toolkit of land cover data is woefully inadequate for the needs of humanitarian actors. To address this gap, the “3-Period TimeScan” (3PTS) was developed using Google Earth Engine (Gorelick et al., 2017). This product consists of a Red-Green-Blue composite of Sentinel-2 images where the red band represents the maximum NDVI value during the first period of the growing season, the green the maximum NDVI in the middle, and the blue the maximum NDVI at the end. This condensation of the agricultural season’s temporal evolution singles out cropland from other land cover types. A highly localized cropland change analysis was conducted comparing the 2019 3PTS product with that of 2017, a year prior to the start of Central Mali’s conflict. The change status was visually determined per populated site, as supervised classifications required exhaustive manual cleaning to produce a reliable product over such a large and ecologically heterogeneous zone. The resulting map was compared with georeferenced data on conflict events, indicating a strong spatial correlation between violence and cropland reductions. In June 2019, during the planting period, a peak in both the number of violent events and the number of fatalities was recorded in central Mali. Most of the significant cropland losses occurred in localities where violent events were reported for the period between April and October 2019. Cropland abandonment, the concentration of crops in the proximity of habitations (due to access restrictions and violent threats or attacks in farther fields), and settlement damage are all consequences of the violence operating in central Mali that are visible from space. The World Food Programme (WFP) operationalized the analysis from the methods detailed in this paper. By offering a map and a list of localities showing significant declines in cultivation, a more precise picture of food insecurity could be drawn, highlighting vulnerable areas in need of food assistance. These outputs were quickly absorbed into the humanitarian response planning process, notably through the Cadre Harmonisé (CH), the bi-annual national food security analysis (led by the national early warning system in collaboration with line ministries and humanitarian actors such as NGOs and UN agencies). The goal of the CH is to estimate the number of food insecure people in the country and provide coordinated targeting priorities for humanitarian response. The remote sensing results contributed to estimating 757,217 persons in food insecurity for the 2020 lean season (the seasonal period when hunger typically peaks during the year).
Beyond the CH, the unprecedented level of spatial precision provided by these results fed into humanitarian response mechanisms and strategic decision-making, as a tool to enhance village-scale geotargeting of the most vulnerable communities. WFP used these outputs to target its humanitarian assistance for the 2020 lean season as early as March, two to three months ahead of the start of the lean season.
10.5446/57181 (DOI)
Our next speaker is Thomas. Hello Thomas, how are you? Hello, I'm good, thanks. Yeah, Thomas will talk about serverless; he is the CTO of Address Cloud, where he leads research and development of geographic risk and location intelligence services. Thomas, your turn. Brilliant, thanks. I'm just going to share my screen. How's that? Looks good. Great, so thanks everyone for joining my talk. As has been said, my name is Thomas Holderness and I work at Address Cloud, and today I want to talk to you about our experience of running our software-as-a-service business on 100% serverless architectures. At a number of FOSS4G events now — FOSS4G UK online last year, I think, and FOSS4G Bucharest — I've zoomed into our architecture and talked about very specific configurations and use cases of free and open source tools like PostGIS, and data formats like cloud optimized GeoTIFFs, and how we're using them to serve large amounts of geographic data in response to customer queries. The focus of this presentation is to take a step back, or zoom out a little, and think about what our experience has actually been like, share that experience, advocate a little for some of the things we've done, and also give a heads-up on some of the challenges we've had, because I think this space is only going to see more investment going forward. There are six things I'd like to talk about today that are facets of our serverless experience. The first is cost. I was involved in a Twitter discussion a couple of months back where someone asked: but what is the cost difference between running virtual machines, a suite of Docker containers, EC2s, or even physical machines, versus a serverless architecture? That's quite a hard thing to measure and quantify, but I've got some examples I'm going to work through, because for any business running a service the cost of cloud infrastructure is key. Then we move on to scalability, which goes hand in hand with cost, because you need to understand the capacity of your system to know what throughput your customers can get from your service, and you need to be able to deal with that maximum throughput. Related to that is latency: you need to know that as your customers' demands on the service increase, you will be able to respond to those queries without the latency increasing. At the bottom here we've got service, by which I mean: what service can we offer our customers, what is its quality, and is the service we're offering actually benefiting from this serverless architecture? We'll look at infrastructure as code, which is one of the challenges — and it was great to see in the previous presentation an example of a Terraform plugin to deploy a GeoServer; that's really cool, because we use Terraform as well, and I'm going to talk a bit about why we chose it and how it was a key learning experience in adopting this kind of serverless architecture. And then lastly, observability: if I don't have a physical machine to log into, it's really important that I can adapt to that.
It's really important that I can observe that my system is responding the way I think it should: that the latency isn't increasing, that the scaling is happening, and that the costs aren't spiralling out of control. So that observability piece is really key, and I'm going to show our flow line for how we do that, which might be helpful to some of you. But it would be remiss of me, at a geospatial conference, not to talk about geospatial and maybe even show a map, or a screenshot of a map. So for those of you who haven't heard of us, I just want to give a brief overview of Address Cloud, because it really sets in context a lot of the things I'm going to talk about. The first thing you should know is that we're a software-as-a-service company. We're small — there are five of us — we're based in the United Kingdom, and we're 100% employee owned. We work in the insurance sector, in financial services with banks, in logistics, and also in property survey; those are our key markets. We do about 10 million transactions a month, we have about 400 users of our system, and we power some well-known brand names here in the UK — if people are getting an insurance quote, then a lot of that processing is coming to us. The reason for that is that we provide two key services: geocoding, so entering an address and getting back a location on the earth's surface, and property intelligence with a geographic risk assessment of that property. I'll show some examples in a couple of slides. It's really important to our customers — we have quite strict service level agreements with them — that our transactions are processed really quickly and that the service is available all the time, because the public are essentially using our customers' systems, which are backing onto us, and if we can't resolve that piece of information, that geographic query, fast enough, then that's potentially a lost customer or a lost sale. So it's really important that our service is on and available at that capacity all the time. Here's an infographic of a little bit of what we do. We've got an address up here at the top, and we've geocoded it so we know where it is on the earth's surface. Once we know where it is, we can try to understand what sort of property is at that address. Is it residential, commercial, mixed use, or a government property? Is it a tower block or a family home? Once we understand what sort of property it is, we can move on to the perils that might affect it, which an insurer or someone in financial services might be interested in: is the property at risk of flooding, fire — wildfire in North America — subsidence, or earthquake? And another interesting geographical dimension we're starting to add to the service, for insurers who are taking on that risk, is how they are exposed in the neighbourhood: if they start to take on lots of risks nearby, then if there were a flooding event, how would that impact them? So that's what we do, and it puts in context why we need to architect the way we do. Excuse me. Here's a screenshot of one of our applications that shows a map. This is for an insurance underwriter who would come into Address Cloud to view a risk. We've geocoded that risk, and you can see we've added some layers to the map.
In this case it's a flood risk layer and some nearby properties — this is actually a train station in the city near where I live. So we've geocoded it, we've got the point on the earth's surface, we know what the property is, and we've got some risk scores over here. All of the processes behind this application, including loading the vector tiles and the raster tiles that power these overlays, come through Address Cloud's API, and we're now using MapLibre to power this React application, which is great. I didn't want to show an architecture diagram, but I did want to show how all of this fits together, because it gives a lot of context to our experience. It's important to note that we didn't lift and shift our existing solution into a serverless architecture; we re-architected, for a number of reasons, into a serverless architecture, so we've changed the way our service operates. On the right-hand side of this slide you'll see four layers, and I'm going to start down here at the data layer. We have three principal data stores that we work with. I talked at FOSS4G Bucharest — the video and slides are at blog.addresscloud.com, and I'll share the link at the end — about using cloud optimized GeoTIFFs. Put in an Amazon S3 bucket, they are a scalable data store for querying really large, complicated, continuous surface models and getting handfuls of pixels back at any point in time. I also talked at FOSS4G Edinburgh about how we use Postgres and PostGIS within the Amazon Aurora service to have a serverless, scalable PostGIS suite. And we use Elasticsearch, hosted by Elastic Cloud — not necessarily 100% serverless as such, but it is a managed service where someone else does the infrastructure management for us. My colleague Mark Vali has given a number of presentations about the nuts and bolts of using Elasticsearch as a geospatial data engine. Sitting on top of these three scalable data stores we have some application logic. This was originally our monolithic JavaScript application in Node.js; it's now split into a series of Lambda functions. One of the things we've done is recognise that, while these are supposedly scalable spatial data stores in the back end, there are still some bottlenecks. We know that scaling Aurora takes a couple of seconds, so that could be a potential bottleneck, for example, and the capacity of our Elastic Cloud can scale, but again there's some latency associated with that. So one of the things we've done, looking at the information we hold and the service we provide, is pre-index a lot of our data using the H3 geospatial index and put that data into a DynamoDB table, a serverless database offering from Amazon Web Services. It's a key-value lookup, so very quickly, for any known location, any known property in the country, we can look up its properties. It's like an all-you-can-eat database with as much capacity and speed as you could ever imagine. So we've got some logic that decides which of these to query — or potentially queries all of them — and hopefully tries the cache first and, if not, falls back on the others. We've got an API which manages our API keys and authentication, and then we've got a happy user sitting at the top who's interacting with that API, or with the desktop application or the web application I showed before. So that's it in a snapshot; that's the way we've set things up.
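As a rough illustration of that cache-first pattern — this is not Address Cloud's actual code — a lookup might be sketched like this; the table name, attribute names and H3 resolution are all assumptions:

```python
# Minimal sketch of the cache-first lookup pattern described above (not
# Address Cloud's code): resolve a point to an H3 cell and try a DynamoDB
# key-value read before falling back to a true spatial query. Table name,
# attribute names and the H3 resolution are assumptions.
import boto3
import h3  # h3 v3 API; v4 renames geo_to_h3 to latlng_to_cell

table = boto3.resource('dynamodb').Table('property-intel-cache')

def lookup_risk(lat, lon, resolution=9):
    cell = h3.geo_to_h3(lat, lon, resolution)
    item = table.get_item(Key={'h3': cell}).get('Item')
    if item is not None:
        return item                      # pre-computed perils for this cell
    return query_postgis(lat, lon)       # fall back to PostGIS on Aurora

def query_postgis(lat, lon):
    # The true spatial query against Aurora Serverless is omitted here.
    raise NotImplementedError("spatial fallback omitted from this sketch")
```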
So let's dive in and look at some of the experiences and advantages. The first one, as promised, is cost. It's really hard to compare before-and-after costs with our serverless architecture, at least, because we changed so much in the development, so what I've done is pick two examples where I've tried to understand what the cost difference would be between a serverless and a non-serverless setup. We could probably pick holes in some of this, but it's a working case study. What I wanted to illustrate here, on the right, are some graphs of queries coming into our vector database. That database is PostGIS, hosted in Aurora Serverless, and you can see that over the month of March this year we spiked at about 500 PostGIS queries in an hour. So we're starting to tax the database: customers have come in and put some quite complicated query requests through our API — potentially we don't have those values in the cache, or they're really complicated shapes — so we've had to fall back on a true spatial query. What's immediately happened, on our behalf and without us even knowing about it, is that Amazon has scaled that database for us. It spiked from two compute units up to eight, and after those queries calmed down — and there's probably some data in the cache as well — the compute units dropped back down to two. On the left: we spiked to eight ACUs, but obviously we weren't using eight ACUs over the entire month; we were only using them for a couple of hours that day, while we saw this peak load. Our bill for Aurora Serverless that month was $114. Now, if we were to provision capacity in the traditional manner without any auto scaling, I estimate we'd need a t3.xlarge database instance — a virtual machine from Amazon running Postgres under the Relational Database Service, which is a managed service, but you still have to control the capacity and the scaling yourself, so there is still some work there. I've also selected an instance type that allows cross-availability-zone replication, which is something you get out of the box with serverless — there are actually three database instances in this case, and if the master goes down it falls back on one of the other two. The same happens with Aurora, and I've got an example of that happening in a few slides' time. You can see there is a price difference there, and you could think about ways to reduce it, but the point is that because you're trying to be at capacity all the time — to make sure that if your customers come knocking they get that performance — you end up with a four-times increase in cost if you run those databases all the time. Now, you could put a load of work in to add auto scaling and manage that, and think about different database operations or approaches.
But the point is that you get that out of the box here and save money without really having to do anything, and it's just PostGIS — or at least PostGIS-compatible — under the hood. So that's one example. The second example compares our compute capacity. What I did was add up all of our function invocations for every Lambda function that we use. This is all of our environments, development and production, it's helping our customers do their integration — so our sandbox environment — and it's all of our back-office processes, which we also use Lambda functions for; we've pretty much gone all-in on this. The only thing that isn't captured by this sort of processing is our data preprocessing, which tends to involve very long jobs that run over a couple of days; for those we still use a combination of EC2 instances and Docker containers, and those aren't captured here. So we've done 20 million function invocations, and Amazon has come up with this magic number saying that in that one month we had 82 days' worth of computing time. We did a lot of computing in the month of March, and those 20 million transactions cost us 56 bucks, which is kind of incredible: everything that's customer facing, everything in development, and every back-office process that happens in Amazon Web Services, for around 60 bucks. If we were to replicate that across the three services and the back-office services — and this is an estimate I came up with today — we'd need about 90 t2.medium EC2s to meet the peak capacity you can see in these peaks here, and that would cost us a couple of hundred bucks a month. We could probably reduce that if we went for reserved instances and had a conversation with the Amazon sales people; we could probably hack it all down. But the point is that because we're not paying for this stuff to be on all the time, we're only paying for what we use, and even though we're using a lot of it, it's still very cheap. So I would advocate that there are cost savings to be had, but they have to be coupled with the way you think about architecting your application. That leads nicely into scalability and latency. I was a bit worried yesterday when I started writing my slides — partly because I'd left it a little too late, and I normally try to be a bit more organized than that — but also because I realized we haven't really thought about this service, or any of our services, operationally, because we've been busy doing other things: onboarding customers, improving speed, adding new functionality. The service itself has just been chugging away, serving up those customer requests, and I think that's one of the real hidden benefits of choosing a serverless environment. The purple graph across the top shows our geocoding requests coming in, again for the month of March this year — I picked it because March seemed to be a good month and the graphs looked nice. We got to about 43,000 requests over about an hour or so on one of the days; that's the peak for that month, and that's what this activity is showing.
It drops off in the evenings, when there are few requests coming in from customers, and peaks again during the day when people are using the system to do their work. Down here on the second graph, the red line shows the latency of the service. So we have geocoding requests coming in across the top, and what's quite interesting is that there doesn't appear to be any relationship between the change in latency of the service and the demand shown in the top graph. I'd also point out that this latency graph is a smoothed average, so it's by no means perfect; with the tool we use I could have dived in and tried to get a distribution plotted on this graph, but I didn't quite have time before this. But you can see that our peak latency is still an average of 240 milliseconds, and our fastest response time was, on average, about 80 milliseconds over the month. So it's pretty good: we're well within our SLA for our customers, and we're able to gobble up these increased transaction loads, these big jobs that customers are chucking at us, with no sweat about the latency. So, service — what does that mean? I have this metric of how many times I have to get out of bed to deal with the Address Cloud operational side of things, and I'm pleased to report that this only happened once in the last 365 days. I counted up all of our external API transactions and all of our internal services that also have APIs: we did 142 million API transactions over the year, across every environment and every API in the stack, both dev and production, and over all of those transactions we had one production alarm. Frustratingly, it actually happened while I was on holiday, for the one week over the summer that I was actually away from my house. But it was fine — I still had some internet, and there's a team of us, so we can share the load when these events happen. So we've only had one event, and what happened was that our Postgres database experienced intermittent connectivity for an hour, because Amazon had made some configuration errors, or had some configuration problems, in a couple of their data centers. What they did, without us even having to do anything, was move the master node — which was in a data center that was having problems — to another data center, still in the same region but up to 100 miles away, and then start to shift the traffic over. And because of the way our applications are architected, none of our customers even noticed there was an issue; it was just something picked up by our internal monitoring. They had it fixed within the hour, and within about 30 minutes we actually saw the error rate decreasing. So that's an example of all of that stuff going on throughout the year. We're all busy people, all building our services, running our projects, trying to deliver our tools and contribute to FOSS4G, and it's really nice to know that for most of the work we're in the safe hands of the cloud provider, and that they take the lead and responsibility for maintaining that uptime. So, lastly — well, the fifth of the six things — infrastructure as code.
This was a steep learning curve for us and something we started a couple of years ago, long before we went fully serverless. We chose Terraform for a variety of reasons, but predominantly because it's very robust, it's declarative — which I really like — and it supports more than just AWS, so you can use it to configure lots of different types of infrastructure in lots of different clouds, which is great. We ended up investing a lot of time in this, building our own deployment pipeline so that we understood what was going on, and making sure we could version control all of the infrastructure we've defined inside Amazon Web Services, so that at any point in time we can see exactly who has changed what and how our infrastructure is represented. We have code files that describe the whole suite of tools we use and how they're all connected together, and we ship that as part of our CI/CD process. A developer can come in and make a commit, either a logic change in the application or a change to some infrastructure, push that to GitHub, and have some tests run; if everything passes, it gets pushed to Terraform, and if everything passes there, it actually gets deployed against that infrastructure. We have a process of making sure we're happy with it in the development stage before it gets pushed to production. I've written and talked about this in the past; you can grab the blog post from our website. And then lastly, observability. In the last couple of minutes of this presentation: observability is really important when you're thinking about a serverless application, and it shouldn't be an afterthought — it needs to be part of your ongoing IT and business processes. You don't have a normal server. Coming from our traditional monolithic application, where we were backing onto PostGIS and Elasticsearch and we just had this thing running queries, we could dive into that box, or those boxes, grab the logs, and see how the queries were performing and which queries were being executed. With a lot of the tools we work with now, there is no physical or virtual box to log into, so we need to capture the logs, if there are any, and capture the metrics around those transactions — ideally end to end, which is not something we do well yet, but something I think we could definitely improve on. We want to capture all of that information and be able to interrogate it, so that we can actually query what was going on and produce some of the graphs you've seen in this presentation. There are a few different ways of doing that, and this is the way we chose. We used Postman — they have a core module, a library called Newman, which is an Apache-licensed, open source bit of kit. We took that, wrote some code around it, and now run a test suite against our API every 30 to 60 seconds. That runs all of these tests, and we pump the logs into an Amazon tool called CloudWatch so we can see how the tests are performing. We can pull those metrics and logs into Grafana, which is another open source bit of kit, and that lets us see in real time what's going on. And if any of those graphs go over a certain threshold for a given period, Grafana sends us a ping via PagerDuty, which is what wakes you up when you're on holiday. So that's how we've architected our observability. As I say, there are lots of different ways to do this; I just wanted to make the point that it's a key thing to think about and not necessarily an easy challenge to solve.
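As a rough Python analogue of that kind of synthetic check — the setup described above uses Newman/Postman, so this is not their code — one could time an API call and push the latency as a custom CloudWatch metric for Grafana to graph and alert on; the endpoint, namespace and metric names below are placeholders:

```python
# Hedged Python analogue of the synthetic check described above (the actual
# setup uses Newman/Postman): time an API call and push the latency as a
# custom CloudWatch metric that Grafana can graph and alert on. The endpoint,
# namespace and metric names are placeholders.
import time
import urllib.request
import boto3

cloudwatch = boto3.client('cloudwatch')

def synthetic_check(url='https://api.example.com/geocode?q=test'):
    start = time.monotonic()
    status = urllib.request.urlopen(url, timeout=10).status
    latency_ms = (time.monotonic() - start) * 1000.0
    cloudwatch.put_metric_data(
        Namespace='SyntheticChecks',
        MetricData=[
            {'MetricName': 'GeocodeLatency', 'Value': latency_ms, 'Unit': 'Milliseconds'},
            {'MetricName': 'GeocodeStatus', 'Value': float(status), 'Unit': 'None'},
        ],
    )

# Run every 30-60 seconds, e.g. from a scheduled Lambda function.
```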
And if any of those graphs are over a certain threshold for a given period, then Grafana sends us a ping via PagerDuty, which is what wakes you up when you're on holiday. So that's how we've architected our observability. As I say, there are lots of different ways to do this, but I just wanted to make the point that it's a key thing to think about, and it's not necessarily an easy challenge to solve. So in summary, we're a big advocate for serverless, because we've managed to re-architect our SaaS application to serve the needs of our customers and of our business, and for us to be able to continue to scale, to operate at scale, to have low latencies, and basically make sure that we're doing what we need to do to make our customers happy, which is the key goal. And really, I'm happy because we've managed to take some of the best bits of FOSS that we're used to using — be it GeoJSON, tiles, COGs; Vector Tiles are in there as well, I haven't talked about those today, but maybe in another presentation. We've combined all of those best bits together in a nearly pure-play serverless architecture in AWS, and hopefully we'll continue to grow and we'll be at FOSS4G next year as well. So that's me. If you've got any questions, then pop them in the chat or feel free to send me an email. Thanks very much. Thanks Thomas. I have one question, it's a personal question. I imagine you collect data from various internal and external sources. How is the ETL process to keep everything up to date? Is everything serverless too? Well, that is a great question. And traditionally, no: we had a series of scripts and we were running those on an EC2 box. What we're actually just building at the moment — this month Mike has joined us and he's built some tech using AWS Batch, which is a dockerised batch processing tool where you can have a queue of tasks and then a fleet of instances that run Docker containers. He's doing some brilliant work at the moment to basically operationalise that process, so that all of those ETL processes don't have to be run manually or in a semi-automated manner as they are now. Instead we can just have a queue of tasks that can be triggered by different events or by different time periods, and they can go grab the data, wait for the availability of the processing pool, crack on, and then push the data artifacts out into the testing environment and eventually to production. So that's something that we're working on at the moment, but yeah, it's an interesting problem to have. Okay, thank you. Another question is: did you find AWS Lambda coding sometimes hard going, what with library sizes, layers and so on? Yes, that is a great challenge. For the smaller functions that are in JavaScript, we just use webpack to compile down that code to basically build a JavaScript executable, if you like, that then gets uploaded.
For big geo libraries, like the stuff that we do with raster — it's on the blog, and we gave a talk about it earlier this year — there's Amazon's new Docker container Lambda environment, where you can use a Docker image instead of packaging everything up with the traditional packaging tools; instead of having a zip file you can create a Docker image. I did a demo of that, and we're actually using in production now a Docker image that builds rasterio, so that we can use rasterio in Python to query our cloud optimised GeoTIFFs, and that's working really, really well — operationally it's actually slightly faster than the traditional Python zip file method. Okay, you have time for another question. Well, do you think serverless architecture could work for latency-sensitive applications, like ad hoc vehicle routing, which typically relies on keeping a graph in memory for quicker responses? Yeah, I think anything where you've got state — like a stateful transaction where you're reliant on a server-side process managing state on behalf of a user's process — is going to be trickier with the sort of traditional API Gateway and Lambda architecture that I've shown. That being said, there are a number of options there: okay, why do you need to keep that thing in memory, or is there an in-memory cache like memcache that you could use to back onto? It's an interesting one. Something that I've never experimented with is API Gateway WebSockets — as well as just doing RESTful interactions you can do longer-lived connections — and I think that would be really interesting to play around with. So say if you've got a continuous connection between a machine and your service, you could stream data between the two, but what the back end would look like for that I don't know, I haven't thought about it. Okay, thank you a lot for your answer Thomas. Did you have anything more to talk about? No, that was brilliant, thank you, and thanks to everyone for joining. Okay, thank you.
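To illustrate the cloud-optimised GeoTIFF pattern mentioned in the answer above — rasterio running inside a container-image Lambda — a minimal sketch could look like the following. The bucket path, the event shape and the assumption that the COG is in EPSG:4326 are all hypothetical, not Addresscloud's actual code.

```python
# Sketch of a Lambda handler that samples one pixel from a cloud-optimised
# GeoTIFF. COG_PATH and the event structure are placeholders; AWS credentials
# are assumed to be provided by the Lambda execution role.
import rasterio
from rasterio.windows import Window

COG_PATH = "s3://example-bucket/rasters/example-cog.tif"  # placeholder dataset


def handler(event, context):
    lon = event["lon"]
    lat = event["lat"]
    with rasterio.open(COG_PATH) as src:
        # Assumes the COG is in EPSG:4326 so lon/lat map directly to the grid.
        # rasterio/GDAL issue HTTP range requests, so only the internal tiles
        # that intersect the requested point are fetched from S3.
        row, col = src.index(lon, lat)
        value = src.read(1, window=Window(col, row, 1, 1))[0, 0]
    return {"lon": lon, "lat": lat, "value": float(value)}
```

Packaged into a container image with rasterio's GDAL dependencies baked in, a handler along these lines is the kind of thing the Docker-image Lambda environment makes straightforward.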
Serverless enables geospatial developers to build applications without worrying about servers or containers. In this session we will look at the advantages and challenges of serverless for geospatial, drawing on Addresscloud's experience as an early adopter and insights gained from using serverless to power production geocoding and location intelligence services. What’s it like to run a geospatial service without any servers? Addresscloud is a Software-as-a-Service for geographic risk and location intelligence. Addresscloud is powered by FOSS; using a combination of PostGIS, COGs, Elasticsearch, Vector Tiles, MapLibre GL and GeoJSON our APIs are used by millions of consumers in the insurance, finance and logistics sectors across Europe and North America. In 2020 we completed a re-architecture of our service to become 100% serverless. As early adopters of serverless for geospatial this talk will explore the advantages of serverless, demonstrating how it has improved our scalability, reliability and consistency of service, and enabled us to become more competitive. We will also share our experience of the transition and the challenges faced, particularly around developer learning curves, system observability and complexity. The presentation will be useful for members of the community looking to use their favourite FOSS tools to build geospatial applications in the cloud.
10.5446/57182 (DOI)
Stephanie, heading over to you. Okay, well, thank you. And thank you, everyone, for joining me today to learn more about BioPAL and how we are collaboratively developing open source software for ESA's BIOMASS mission. I'd like to start today by telling you more about the BIOMASS mission and the challenges we sometimes face here at ESA in operating our algorithms that translate raw satellite data into the data products we distribute later on to the public. I'll then go into detail about BioPAL and how BioPAL could be a solution to these challenges we're facing at ESA. I'll show you what open source development in BioPAL looks like, I'll share with you a couple of lessons learned in setting up such an open source software project here in the agency, and last but not least, I'll tell you how you can get involved yourself contributing to BioPAL. So first of all, BIOMASS is ESA's seventh Earth Explorer mission. ESA Earth Explorer missions are experimental research missions that are dedicated to specific aspects of our Earth's environment, whilst also demonstrating new technology in space. In other words, these missions address very timely, critical and specific issues raised by our scientific community, while demonstrating the latest Earth observing techniques. And these experimental missions, if successful, may even evolve into operational missions such as the Copernicus Sentinels, for example. So BIOMASS — you can see the satellite here — is in particular ESA's global forest height and biomass mission, with the primary scientific objective to study the Earth's carbon cycle by measuring and quantifying, for example, global forest structure like forest height or above ground biomass. It's scheduled for launch in February 2023, and in the case of the BIOMASS Earth Explorer we are expecting an operational period of about five years in orbit. So with BIOMASS, we at ESA are really planning on exploring the unknown, both in terms of research, in terms of technology used for the research, and in terms of the operational procedures we use here in our ground segment. BIOMASS is the first mission at ESA designed to estimate above ground biomass and to address the role of forests in the global carbon cycle. BIOMASS is also ESA's first P-band SAR mission in space, including full polarimetric SAR and interferometry as mission objectives. And for us here in the ground segment, it is the first time that we have to deliver systematically generated biophysical parameters, meaning systematically generated maps of global forest biomass, forest height, or, for example, forest disturbance. So it's the first time for us that we are developing global operational processing chains for interferometric data from space. And when I talk about operational algorithms or operational software, I mean the algorithms, the software, the code, that for us translate raw satellite data into level two or level three data products that are distributed — such as global maps of above ground forest biomass or forest height. So the novelty of all Earth Explorer missions really poses certain challenges for us in developing and operating these processing algorithms within ESA's ground segment. For example, scientists like Machik Souya or Alberto Lanso-Gonzalez, who are developing the initial prototypes of these scientific processing algorithms before launch, can only work with very limited P-band airborne or in situ data to develop the algorithms.
So that means we are expecting to be able to improve these initial algorithm definitions quite quickly once the BIOMASS mission is launched in 2023 and the actual global mission data becomes available. Thus far, though, the improvement and updating step here at ESA has presented a challenge, in particular because processing algorithms are generally not publicly accessible, and updating cycles can really take up to years until improvements are made in these algorithms, and hence also in the final data product. And then third, due to the novelty of the BIOMASS mission, we'd really like to see scientific community formation as early as possible pre-launch, to both push scientific discovery in, for example, the processing of P-band SAR, as well as scientific discovery with BIOMASS-generated data products such as global maps of above ground forest biomass, and to really make BIOMASS as a mission a success. So what is BioPAL and how does it address these challenges? BioPAL is an open source software project, called the BIOMASS Product Algorithm Laboratory, publicly hosted on GitHub. It's really the first time that official processing algorithms from ESA are made publicly available. As an open source software project, it contains the source code, written in Python, for the official BIOMASS algorithms generating above ground biomass, forest height and forest disturbance from raw P-band SAR data, released under the MIT open source license. However, as an open source software project, BioPAL does not only contain these prototype processors but also contains, for example, analysis tools — the tools to analyse the maps of biomass — as well as governance structures and contribution guidelines for our scientists and external contributors to work together with us. We've just started packaging and distributing BioPAL, so you can now, for example, install it via pip install biopal. We're working on continuous integration and testing of the source code. We're also working on providing documentation and tutorials that show how to use and work with the library both as a user and as a developer. And we're also working towards supporting BioPAL in interactive coding environments such as Jupyter, to be able to show BioPAL in classroom and educational settings as well. So what are the goals of BioPAL? It's really supposed to be an open and collaborative space for the improvement of the currently defined BIOMASS operational processing algorithms. For ESA, we're trying to accelerate the innovation in P-band SAR processing, and additionally, BioPAL is supposed to act as a bridge between these scientific discoveries and innovations in P-band SAR processing and the timely improvement of the official operational processing algorithms. From the agency's perspective, it simplifies our operations of the source code and it also allows us to reach superior code quality more quickly. And last but not least, BioPAL is, from ESA's side, thought of as an example or template project promoting best practices for open scientific code development, and it explores for us a new way of doing open and collaborative science that could be implemented for all future Earth Explorer missions. So talking about open source software development, what does this actually look like for BioPAL?
So first, we have our BioPAL project hosted openly on GitHub, containing the operational BIOMASS algorithms, documentation and tutorials showcasing how to run these algorithms, and also contribution guidelines explaining how changes can be integrated into these official operational algorithms. We then invite users and developers to leverage version control in Git and clone or download this project to their local workspace. We additionally provide testing and validation data to run the algorithms; more information on how to access these data sets can be found under biopal.org, or, in the future, you will also be able to run these algorithms with real BIOMASS data, for example accessed via other efforts such as the ESA-NASA Multi-Mission Algorithm and Analysis Platform that makes the BIOMASS data publicly available. So now that you have the algorithm source code, tutorials and testing and validation data, you can start running the algorithms on your own, in your local workspace for example, or you can start making changes to the algorithms, maybe even implement your own algorithm. In the next step, you can propose to include your changes and improvements within the official BioPAL project by writing so-called issues or pull requests on GitHub. Those changes are then reviewed by officials at ESA and the BioPAL core developers like Francesco Banner, Paola, Matsu Ciali, and they can then be approved and merged back into the BioPAL project — into the official operational algorithms — really updating the algorithms and also ESA's data product. So this signifies the general workflow for how you could contribute to BioPAL and how we are currently working on BioPAL as well. So how did we approach, from an agency perspective, creating BioPAL as an open source software project? And what lessons did we learn in the process of setting up such an open source software project? This graph shows you the commit history of BioPAL, in particular the above-ground biomass module, our processing code, as our spearhead module. In orange it shows you interface and maintenance commits, and here in white the overall project commit history. We officially moved the source code of the prototype operational algorithms to a private organisation on GitHub just about a year ago, in August 2020. And starting out, with the scientists and also the developers working together, we realised that the prototype processor source code was not easy to work with in its then-current form. We spent, first of all, a couple of months refactoring the code base to really make it easier for scientists and people without, let's say, a software development background to work on the code, focusing only on their specific area of expertise — for example, above-ground biomass processing from P-band SAR data. So we modularised the source code a bit better. After refactoring BioPAL into these separate modules, we also started adding more documentation. Our team grew a bit bigger, we decided on governance structures and additionally added guidelines for contributions. We were then really set up for the scientists to keep doing scientific research and continuing to iterate on the above-ground biomass processor and basically develop the algorithm. Then a couple of months later, we realised that we needed to refactor the interfaces a bit more to make it easier to read and write data from module to module — so, for example, from forest height into AGB.
And we added more API documentation and tutorials on how to run the different processors, which we needed for ourselves as well as for external contributors, planning to release the BioPAL project soon. After that we started working on adding tests and really setting up a testing infrastructure, before finally making the repository publicly accessible at the beginning of the year. Since then, we have had our first external contributors and bug fixes, and we have also released BioPAL on PyPI to allow installation, for example, via pip install biopal. So what are the lessons we learned at ESA creating such an open source software project? Well, the number one lesson was that creating a successful open source project was not only about adding an open source license, but especially about putting measures in place to really create an active community of developers and contributors — such as adding guidelines on how to contribute, or tutorials on how to get started, as well as inviting interested people directly to our five-weekly meetings. The entire team learned how valuable common guidelines and practices were, and also how valuable it was to be using the same tools in the same space, as compared to working separately with each contributor and each institution that worked on the source code using their own tools and habits. And even though there was a learning curve for most participants and scientists involved at the beginning, it really made working in large distributed teams much easier in the long run. Additionally, centralised communication with GitHub issues really helped tackle issues faster. Each team member had a different expertise, scientific or software development, and could pitch in when there was trouble. It was also interesting to work with people with really different backgrounds; it helped us expand our own backgrounds and solve bottlenecks concretely and very fast. And even though the project has been open just recently, the help of external contributors was really valuable for improving documentation or spotting bugs in the code. Through the involvement of external contributors, we at the agency have already experienced the value of having a centralised software review process in place, also for updating the algorithms down the line. Okay, so now you know about BioPAL. Here I'd like to call for your contribution. If you're excited about software development, open source software or SAR processing, you can find us on GitHub, or learn more about BioPAL on our website, biopal.org. In particular, we're currently always really grateful for contributors who are simply testing the library, reporting bugs, improving API documentation, or just sharing the project within their network. If you're in particular interested in development, we're currently actively working on improving the computational performance of the algorithms, we're adding analysis tools for output data, such as visualisation tools, and, as always, working on tests. And if you're a researcher, we always encourage testing BioPAL, for example, on other P-band SAR data sources, or integrating spaceborne LiDAR as calibration data. So feel free to reach out to anyone in the team. We're distributed between ESA, DLR, RACES, Polytechno, Code Milano, and MagicSoya consulting. And of course, I'm also still going to be here to answer your questions. Feel free to write me an email or reach out via Polydecent as well.
And with this, I say thank you. And happy to answer questions. Thank you very much, Stephanie. That was fantastic. I'm really pleased to hear that there was an educational context for what you're working on as well, bringing the next next generation into all of these, these cool and high impact things. We do have a question from the audience for you, and it's about measuring the accuracy of the algorithm and how you do this. Was it trained in specific regions? Yeah, so we have multiple algorithms. And basically, different algorithms are currently, the output of different algorithms is currently compared to different in-situ data sets. And that's actually one of the problems we are facing because we can collect some in-situ data sets, but we'll never be able currently to collect in-situ data sets that are valuable for or valid for all global forests. So this is one of the things we hope to improve once the biomass data really becomes available. So currently, algorithm outputs are basically optimized on in-situ measurements. Fantastic. Thank you. And I've got a question for you as well. I'm very curious about how you're coordinating the collaborative effort to the code. I'm curious if you've been running code sprints as part of the big boost forward in development. Yeah. So currently, because we were basically a small team and there was still a lot of scientific work done on the algorithms, we started on just doing things like peer programming with people that were already involved. Code sprints is actually one of the things we're looking forward to do in the future. But we haven't done them quite yet. Fantastic. Fantastic. Thanks.
ESA's BIOMASS mission is designed to provide, for the first time from space, P-band Synthetic Aperture Radar measurements to determine the amount of above ground biomass (AGB) and carbon stored in forests. The novelty of BIOMASS’s sensors poses the challenges to develop scientific algorithms, estimating i.e. ESA’s AGB data product, with limited data pre-launch and for timely improvement of operational algorithms with the mission launch in 2023. The BIOMASS Product Algorithm Laboratory (BioPAL) is an open-source scientific project, supporting the development of official BIOMASS mission algorithms coded in Python. The goal of BioPAL is to bridge the gap between advancements in scientific algorithm development and fast integration into ESA’s BIOMASS’s ground operations. It is the first time that official processing algorithms for an ESA mission are released publicly and supported by open and collaborative development within the scope of an open-source software project and community.
10.5446/57183 (DOI)
All right. Can you see this? Because I can't see you anymore. Is this visible? Yes, looks very good. Okay, so thank you for the introduction. I think that was from LinkedIn, so that sounded much more confident than I actually am in person. Anyway, I would like to talk about how we introduced multilingual support for pygeoapi. So let's get started right away. I'm going to introduce a bit about pygeoapi and the previous state of language support in there. Then I'm going to talk a bit about internationalization and localization, often abbreviated to these obscure terms i18n and l10n, and what's the difference between them. Then I'm going to talk about requirements. Those aren't really requirements that someone made up; these are basically requirements that we decided were important for multilingual support. Then I'm going to show the solution, the technology that we came up with to solve this. And I'm hoping to do a little live demo for you, which is hopefully not going to be very spectacular, because if you do translations the right way, then you hardly notice that it works. And then I'm going to conclude with some final notes on this. So I suppose that everyone who is currently listening to this knows what pygeoapi is. If you don't: it's a Python server implementation of the OGC API suite of standards. There's currently a lot of them, but the one that's currently approved is the Features API, if I'm not mistaken. Then there's the Maps API, the Records API, the Coverages API — you name it, there's a whole bunch of them, you can find them on the OGC site. And it's a really cool initiative that will be very powerful in the future, when all these APIs have matured, for obtaining data in all kinds of ways. So pygeoapi did not have — and in the released version technically still doesn't have — language support until June 2021. That's when the PR was approved and merged into the current master branch. The official release that will also support languages still needs to come; I think that's the 0.11 release. So until June, pygeoapi was English only, and all text was hard-coded somewhere in the core. But then, in the spring of this year, we had a customer of ours at GeoCat, and they are called Natural Resources Canada. They also have the Federal Geospatial Platform, which is an initiative of theirs. And they had a cool project with this API that they have, which is called the GeoCore API. It's kind of hard to find actually, because lots of things are named GeoCore, I found out; if you search for GeoCore and Natural Resources Canada, in that combination you will find it. This API is still in an experimental state, but they thought it would be cool if pygeoapi would offer a provider plugin for this API as well. And one of the key features of this GeoCore API is that it also has language support built in. So they had, of course, because it was Canada, French and English, obviously. And they thought, hey, it would be cool if pygeoapi could deliver that data in the same way. So that's how it started. And so then I started thinking — or we started thinking, me and my colleague Paul van Genuchten, or back then he was still my colleague — what do we need for this project? So first of all, because pygeoapi is very flexible and works almost framework-independently — not really, but it supports several web frameworks, like Flask, Starlette, Django, and obviously there's more to come still, maybe FastAPI support in the future, I don't know.
But it was important that this multilingual stuff should work regardless of the web framework. So that immediately also means that we should come up with something ourselves that integrates with everything. Another aspect was that it should be customizable to a high extent, so that a pygeoapi maintainer or developer can tweak all the things according to his or her needs. And it should also be an invisible utility, basically. By invisible, I mean: if you disable it, then you don't know that it's not there, and if you enable it, it works and you also don't really notice that it's there. So in that sense, it's invisible. It should be easy to use for all types of users: developers, translators, maintainers, system maintainers, and of course the end user. And it has to implement common standards concerning localization. Those have already been thought up by organizations like the World Wide Web Consortium. So a very common technique is to support both query parameter language selection — you specify in your query string, in the GET string, lang= and then the code of your language — and the Accept-Language header to specify the language you want. The latter is also what most browsers do: if you have your system set to a specific language, then the Accept-Language header is automatically sent by your browser with the language that best matches your system locale. Another requirement was that pygeoapi was already using the Jinja2 HTML templating engine. So it would be cool, we thought, if the solution could integrate well with that; it would save us a lot of time. And the final aspect, which was important, is that there should be a decoupling of the API core and the plugins, that is, providers or processes and so on. By that, I mean that the API should work independently, language-wise, from the plugins, because a provider might support different languages than the API core itself. That is a reality. It could be, for instance, that you're searching on pygeoapi using the HTML interface, and you want to retrieve data in, I don't know, German, for example. But the API core has not been configured for that, only for English. However, you know that the provider offers data in German. So then this requested language should be passed on to the plugin, and it should return the requested language. So the solution we came up with uses Babel. Babel is a localization and internationalization utility. And I'm actually thinking now — no, I think I have removed the slide. So anyway, because in the table of contents it said that I should also tell the difference between internationalization and localization, I'm going to do that right now. The main difference basically is that internationalization is the technique provided to enable localization. Localization means that you offer multiple translations for different locales. You can imagine, for instance, Swiss German: that's a combination, the locale would then be de-CH, so German as the language and Switzerland as the region. That is a potentially different kind of language than High German, which is just de. So that's a locale, and it often also comes with different settings related to how, for instance, date formats or currencies are handled. So internationalization is then the technology provided to enable localization.
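Coming back to the request-side requirements for a moment: to make the "query parameter first, Accept-Language header as fallback" idea concrete, here is a small sketch of the negotiation step using Werkzeug's header parsing. This is only an illustration of the approach, not the actual pygeoapi code, and the SUPPORTED_LOCALES list stands in for whatever the server configuration declares.

```python
# Sketch of server-side language negotiation: prefer ?lang=, fall back to the
# Accept-Language header, and always resolve to one of the configured locales.
# SUPPORTED_LOCALES is a made-up stand-in for the real server configuration.
from werkzeug.datastructures import LanguageAccept
from werkzeug.http import parse_accept_header

SUPPORTED_LOCALES = ["en-US", "fr-CA", "de"]


def best_locale(query_lang=None, accept_language=None):
    """Pick the best supported locale for a single request."""
    header_value = query_lang or accept_language or ""
    accept = parse_accept_header(header_value, LanguageAccept)
    # best_match understands quality weights such as "fr-CA,fr;q=0.9,en;q=0.5"
    return accept.best_match(SUPPORTED_LOCALES, default=SUPPORTED_LOCALES[0])


print(best_locale(accept_language="fr-CA,fr;q=0.9,en;q=0.5"))  # -> fr-CA
print(best_locale(query_lang="de"))                            # -> de
print(best_locale())                                           # -> en-US (default)
```

The translation side — compiled gettext catalogs and Jinja2 template tags — then only has to deal with the single locale that this negotiation step resolves.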
So the work that we were doing here mainly involves internationalization. So now, back to the solution. Babel already offers a lot of utilities to do this. It is a Python library plus some command line tools — and Babel is not to be confused with the JavaScript Babel, which is a transpiler, something else entirely. It's based on gettext, the common GNU tool, which works with these PO translation files. So you can compile translations, and then when a language is requested, the right language is substituted into your HTML template. That also brings us to the second point: Babel already has out-of-the-box integration with the Jinja2 template engine. So you can simply add these special tags in your HTML template, and these trans tags will be replaced using the translation file. What was also important is that we needed to normalize the pygeoapi core API and the provider-based modules; they were all working in slightly different ways, so we needed to smooth that out a bit and make sure that they were all passing the same kind of request and so on. We actually did lots of stuff in the core API, which I will not bore you with right now. But in the end, what we decided on as a first level of provider language support is that the get and query metadata functions in the provider are language aware. Other functions can be made language aware as well, but they're probably less relevant to most of you. We also developed a whole new module that deals with all the translation stuff. Part of that is based on Werkzeug's best match method, which figures out the correct language from the Accept header or the query parameter passed in. There's a lot more to it than you might think, because these Accept-Language headers can be quite complex; you can specify multiple languages in your request, for instance, also with a weighting factor. So yes, this module has been developed for that. And in the end, we added translations to the current pygeoapi YAML config as well, but we still need more in the future. So this is, in a nutshell, what it all comes down to. We have the configuration here on the left side, the YAML file. Basically what you do is specify the languages — or locales, I should say — that the pygeoapi core supports, you can specify the locales that a provider supports, and then you can also enter different translations for text strings inside your configuration with these special language structs. In the core module, for instance, the get method discussed before, and the query method for a provider, receive the raw locale from the original language request. The core will figure out, based on the languages defined here, what the best match is for the provider — and that doesn't need to be the same language that the core speaks. At the bottom you see some essential functions inside the language module. And here's an example specific to this customer, the Federal Geospatial Platform of Canada: the plugin that we made. There you can see that the language is passed in initially with a None value, but of course we see the right locale resolved in the core. And last but not least, there are the translation files and HTML templates. So I can now hopefully show you — and I can also see how much time I still have left — how this would work. This is the default pygeoapi instance; you see it's in English.
But I could request this site in the French language, for instance; this has been configured for this specific case. And then we will see that some text strings — not all of them, because we still have some work to do — but most of the text strings here are translated nicely to French. This also applies to the JSON output. Here you still see the English one, but again, if you request it in French, then you should also see some strings — not all of them, but some strings — get translated to French as well. So back to the main page. Here is the provider that we created for the FGP of Canada. I have to warn you, it's a bit slow, this API actually, so it often takes some time. This stuff over here is always in English; that's still hard-coded in the pygeoapi core, and I still need to make a PR to fix this and make it fully translatable as well. So we can browse through some items — as I said, yeah, it's quite slow. Here you see some items. So also the description, everything here — this is a Records API, by the way — this is all in English. And if I translate this, again say the language is French, this is passed on to the GeoCore API, and it will also deliver me the descriptions here in French. So you can say, well, this is not very exciting, and why do I need to add this lang=fr all the time? I fully understand that you might think so. But there is this setting — I will do this manually now, in this case in Google Chrome, but I think everyone has it. And now I can set the French Canadian language here, and let's move to the top. So now, immediately effective, I should get the French result only, because my browser is now automatically sending the Accept-Language header for French. And again, I apologize for how slow this API is — once again, that's not pygeoapi, but the GeoCore API. And like I said, I didn't pass in a query parameter here, and yet I get the French result for this record. So that works in a really nice way. So back to the presentation. Some final notes. Pretty soon — I'm not sure, we'd have to ask Tom Kralidis for that — the next release will come, the 0.11 probably, and that will have language support and then also the official demo on the pygeoapi.io site, which it doesn't right now. Another thing we still need to do is find and set up a translation editing tool, so it's easier to make translations; Transifex is a well-known one, the QGIS community works with that. Which brings us to the next point: we need translators for all the languages. I think we currently have German, French and English only, but that's as much language skill as we have in the team. And then we have to remove hard-coded text strings from the code base. Well, here's the link to the documentation about the language support; you can check that out yourself. That shows you how it all works and how it should be set up and worked with. The API docs on the localization module are not available yet, but that's still on my to-do list. If you have any questions — technical questions, if you want to work with this, or you have problems with it — then please contact me. You probably can't really read it, but that's my Twitter handle, and it's also my username on GitHub, so you can find me there. Yeah, that's pretty much it. Just one thank you to all of my — as I said, my former colleague from GeoCat, now working at ISRIC — he did a lot of the thinking work behind this. Then of course Tom Kralidis, for doing everything pygeoapi.
And then Bolu and Chris Malmöck McDonald from the Federal Geospatial Platform at Natural Resources Canada, for providing the API documentation on GeoCore. That's it. Thank you very much. Okay, thank you very much, Sander. Adding language support to an existing project is always a challenge — I've seen that before — so that is nice work. There are two questions I see, also one for myself, so let me read it: has language support actually been integrated already in, let's say, the main branch of pygeoapi? Yes and no. So the PR has been merged — actually two PRs. The first was the internationalization part, really the Python code to make it work; that was merged earlier. And then Paul, I think, recently merged another PR that makes these HTML templates work. Of course, not everyone needs that — some people really don't care about that aspect, but some people do. So there you will find these PO files that Babel uses. But I think that out of the box, currently, it doesn't work. I tried to download the master branch and just see if it already worked, and it didn't. So you still need to do a couple of things to make it work at the moment. Also, like I said, the demo — but I think you know that better than I do, yes. The demo website, you mean? The demo website, yes. I'm not sure from which state that is, from which commit that is built, but that also doesn't have any working stuff yet. I'm really hoping that the 0.11 release will feature that. Okay, yeah, that's good. We're always honest about the stability and state of our open source. Yeah, maybe there are people now that say, well, I will fix it for you. And we always welcome PRs, pull requests. And yeah, there's a second question; we still have like three minutes. How can people maybe already contribute new or updated translations? I saw you mentioning Transifex — or can they add the PO files? Yes, exactly. We didn't hook it up to something like Transifex yet. It would be very interesting to do, although I'm not really sure how Transifex works, to be honest. I mean, it's not really an open source tool or anything; it's a commercial product, but they do have some exceptions for non-commercial projects or something, though I think there are limitations. Okay, I can tell you: the QGIS project and probably the Mapbender project, they also have something. Maybe it's Transifex, but maybe we can ask them as well. I was told it was Transifex for the QGIS community. Yeah, so I'm just thinking. No, sorry — yeah, you had another question? No, we're also almost on time, and I can imagine you're still in the middle of development, or now quite far along. But this is really, really valuable for the project. So thanks, and thanks for this presentation, Sander. And we'll see you — people can talk to you at the icebreaker. We've already seen people breaking ice, opening bottles, but we still have talks here. Okay, thanks, and we'll go to the next session.
The pygeoapi project easily allows developers to build their own data providers. This talk describes the creation process of a bilingual OGC API Records provider and how it led to a pull request that brought multilingual support to pygeoapi. The Canadian Geospatial Platform (CGP) has recently built an open REST API, known as the geoCore API, that offers users the ability to return metadata records both in French and English. As part of the OGC API Records code sprint, a pygeoapi data provider was developed that queries CGP's REST API. However, pygeoapi did not provide a mechanism yet that allowed us to query the CGP records in the desired language. Furthermore, pygeoapi's web frontend was available in a single language only and featured lots of hard-coded text strings. To solve this problem, a PR was created that made pygeoapi language aware and allowed users to request data in their language of choice using either a query parameter or an Accept-Language header. This talk will discuss the difficulties faced when adding language support and demonstrate the resulting pygeoapi provider and the technologies used to implement it.
10.5446/57184 (DOI)
Okay. Hello, everyone. My name is Jorge and I will be the host for this first session in the Puerto Madryn room. I'm going to welcome Alex, who is our first speaker. Alex is going to present the Digital Earth Africa talk. He is a certified spatial professional with extensive experience in software development, DevOps and project management. Alex is a founding director of OSGeo Oceania and has spent time volunteering for the Surveying and Spatial Sciences Institute. When not writing code, or at least talking to people about writing code, he shares care of his three kids and loves great craft beer. Alex works at Geoscience Australia, leading a team of software developers and data wranglers helping people to more easily access and analyse Earth observation data. So I'm leaving you with him. Yeah, let's get started. Thank you, Jorge. Hello, everybody. Good morning, good evening, good afternoon. I'm presenting here from lutruwita, Tasmania. This is the land of the traditional owners, the muwinina people. I want to acknowledge them and pay my respects to elders past, present and emerging. Today I'm going to be talking about building Digital Earth Africa. I've given this presentation a couple of times: first about the intentions of how we were going to build Digital Earth Africa, then how we were building it. Now I'm going to talk, to some extent, about how we built it. We are at the end of the third year of this project and we're now transitioning into an operational stage in some regards. But yeah, here's building Digital Earth Africa. First, a bit of history. Digital Earth Australia has been around for a few years. It was founded on a project called Unlocking the Landsat Archive about 10 years ago, where Landsat data, which was stored on magnetic tapes in a deep archive of sorts, was unlocked by digitising it and putting it onto spinning disks on the supercomputer. There was a bit of software written to help do this, called the Australian Geoscience Data Cube or AGDC. This worked pretty well but had some limitations, so it was rewritten to be the AGDC v2, and then, in a blast of creativity, was renamed to the Open Data Cube, which is now a thriving open source software project used all over the world. So the Open Data Cube is a foundational technology that we use in Digital Earth Australia and in Digital Earth Africa. Digital Earth Australia has ongoing government funding to organise Earth observation data and make it available to government, academia and industry all across Australia. And on the back of the success of Digital Earth Australia, Digital Earth Africa was founded about three years ago. So what is Digital Earth Africa? Well, it's funded by the Department of Foreign Affairs and Trade in Australia, so the Australian federal government, and the Helmsley Family Trust, a US-based philanthropic organisation. And the broad goals are around making Earth observation data more easily accessible, this time over the vast African continent. We want to develop in-country capacity, we want to make data accessible, and we have broad applications: mining, land cover, surface water, agriculture, COVID response, biodiversity. There are about 20 staff across the world, including three employed in Africa, and we have a program management office, which we've recently announced, which is being housed within the South African National Space Agency.
And over the next six to nine months or so, we're going to be transitioning technical delivery of the project, as well as the program management, to Africa. So I want to talk today about some technical principles guiding Digital Earth Africa. One is around collating the best Earth observation data. Another is around shared infrastructure as code, collaborating through the program's code. Third is being unopinionated, which I think is really important. Then there's ongoing data flows first, and backlog processing after, and finally just to make the point that we live and breathe open. So when we talk about the best Earth observation data, we're talking about analysis ready data — data that has been corrected for atmospheric conditions so that you can compare like for like: something that was captured by the Landsat 5 satellite in 1984 can be compared to a Landsat 8, or soon to be Landsat 9, satellite capture today or next year. We take data from the USGS — Landsat 5, 7 and 8 — and from the European Space Agency — Sentinel-2 for optical data and Sentinel-1 for synthetic aperture radar. Hopefully we can copy it as-is to Cape Town in Africa, where our data lake is, but if we need to, we will convert it to cloud optimised GeoTIFFs. We use the SpatioTemporal Asset Catalog, and we've converted all of our STAC documents to STAC 1.0.0. To put some perspective around the volumes of data: we have 2.1 million Sentinel-2 scenes, each of them around about 1 gigabyte, we have about a million Landsat 5, 7 and 8 scenes, and almost a million Sentinel-1 scenes. We manage a total of 2.8 petabytes of data across those products, plus a few other smaller data sources. As I mentioned, we also have some other sources: we have ALOS annual summaries, so that means we've got two optical products and two SAR products. And due to the success of some of the STAC metadata interoperability work that we've done with the Open Data Cube, we can index data from a couple of other places. Microsoft's Planetary Computer, for example, has a fantastic STAC API, and from there we index NASADEM, which goes along with our SRTM cloud optimised GeoTIFFs. And we also have the Impact Observatory and Esri land use land cover data. When I talk about shared infrastructure as code: we use AWS extremely heavily, and all of our AWS deployments are managed through infrastructure as code using Terraform. We share and open source a range of Terraform modules so that other people can use the same structure that we use, and Digital Earth Australia, Digital Earth Africa, CSIRO and a range of other folks are using the same or very similar templates to deploy infrastructure. We architect everything using Kubernetes, using Amazon's Elastic Kubernetes Service, and the Helm charts are all open source as well. So you can deploy your own JupyterHub, OGC web services, Explorer and Data Cube tools using our infrastructure as code too — again, sharing and building amongst other people, and we share the work of maintaining those templates and building the collaboration, especially with CSIRO. Being unopinionated, I think, is one of the most interesting pieces. We organise a lot of data, and then we build services on top of that data to make it accessible. That might be OGC web services like WMS, WMTS or WCS that feed into our map.
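Stepping back to the data side for a moment: because everything is described with STAC 1.0.0 metadata, the archive can also be searched programmatically. A minimal sketch with pystac-client might look like the following — the catalog URL and collection id are placeholders for illustration (point it at whichever STAC API you use, for example the Digital Earth Africa metadata explorer or the Planetary Computer), not exact endpoints taken from the talk.

```python
# Sketch: search a STAC API for Sentinel-2 items over a small area and time
# range. The URL and collection id are assumptions, not real endpoints.
# Requires pystac-client (pip install pystac-client).
from pystac_client import Client

catalog = Client.open("https://example.org/stac")  # placeholder STAC API
search = catalog.search(
    collections=["s2_l2a"],              # hypothetical collection id
    bbox=[27.0, -30.0, 27.5, -29.5],     # lon/lat bounding box
    datetime="2021-01-01/2021-03-31",
)
items = list(search.items())
print(f"{len(items)} scenes found")
for item in items[:3]:
    # Each asset href points at a cloud-optimised GeoTIFF, readable over
    # plain HTTP range requests.
    print(item.id, list(item.assets)[:3])
```

The same STAC documents are what get indexed into the Open Data Cube, which is why tools built on either interface can work against the same archive.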
So we've got a couple of levels there, and our web services are published, and we consume them, and others can consume them too. We also have a metadata explorer: a human-navigable view of all of the data that we've got indexed. It also has a STAC API so that you can go and hit it with a machine. And then we've got our sandbox, or JupyterHub space, where you can go and do data science using the Open Data Cube to interrogate data anywhere across Africa. So that's like the front door, the opinionated way in to use all the data. But we have the back door, which is the data on S3 as publicly accessible datasets, with static JSON STAC documents consistently describing the metadata. So you can go straight to the data and consume it without having to use our tools or our APIs. The thing that I think is really interesting about cloud optimised GeoTIFFs and the SpatioTemporal Asset Catalog is that the API is HTTP. You don't actually need to understand any sort of bespoke, fancy way of doing things. You can do an HTTP request to get a single pixel or an entire dataset, or to get metadata about what data exists over a location and then use that to get the data. It's pretty powerful. So this one's a bit subtle, but ongoing first and backlog later is around organising the data. An example of that is that we take Landsat data from the USGS. They create a notification saying there are new scenes. We copy a scene to Cape Town, to AWS, we change the STAC document a little bit and index it into the data cube, and we create our own notification. That notification is then used to create derivative products — for example, a Water Observations from Space product, which, for every pixel in a scene, flags whether or not it's likely to be water. When we started processing our Water Observations from Space, which I'll show an example of a bit later — which is running that analysis over a million Landsat scenes — we turned it on for the new data first, so that as a scene arrives, it gets processed and goes through the pipeline. What this means is that we can test our pipeline and our automation, and once we run the backlog, we put all of those jobs onto the same queue to be processed alongside the ongoing data. What we learned was that in the past we ran some jobs to organise a bunch of data, and then it sat there and slowly drifted out of date — up to 18 months out of date — because the process of going back and organising or processing more data was a manual job, just as hard as doing it the first time. Whereas doing this automation up front, and then feeding the backlog through the same automation, means that it's easy to keep that ongoing processing working. More work up front, smoother progress over time. To talk tech for a bit: we're using Lambda to do the scene copying. It does massively parallel processing and handles vast amounts of data really easily. And then for bigger jobs that don't fit into a Lambda, like doing the WOfS processing, we use a tool called KEDA to auto-scale based on work that's in a queue. The thing that it auto-scales is Kubernetes jobs. So it creates a job which feeds scenes off the queue and processes them: if there's more on the queue, it runs the next one, and if there's none left on the queue, it'll close down and finish. Just a note there that Argo is our emerging automation tool. We can do cron jobs in Argo, it's Kubernetes native, it's fantastic and really fun to use. So, we live and breathe open.
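Before moving on, the queue-draining worker pattern just described — pull a scene message, process it, repeat, and exit when the queue is empty so the autoscaler can wind the job down — can be sketched with boto3 roughly like this. The queue URL and the process_scene function are hypothetical placeholders, not Digital Earth Africa's actual code.

```python
# Sketch of a queue-draining worker of the kind KEDA would scale up and down.
# QUEUE_URL and process_scene() are placeholders for illustration only.
import json

import boto3

QUEUE_URL = "https://sqs.af-south-1.amazonaws.com/123456789012/scenes-to-process"
sqs = boto3.client("sqs", region_name="af-south-1")


def process_scene(stac_document: dict) -> None:
    """Placeholder for the real per-scene work (e.g. a WOfS classification)."""
    print("processing", stac_document.get("id"))


def main() -> None:
    while True:
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10
        )
        messages = response.get("Messages", [])
        if not messages:
            break  # queue drained: exit so the Kubernetes job can complete
        message = messages[0]
        process_scene(json.loads(message["Body"]))
        # Only delete on success; failed messages become visible again and,
        # after repeated failures, land on a dead-letter queue for redriving.
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=message["ReceiptHandle"])


if __name__ == "__main__":
    main()
```

With KEDA watching the queue length, more of these workers spin up as the backlog grows and they all exit naturally once the queue is empty.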
The Open Data Cube is an OSGeo community project. There are deployments of the Open Data Cube all over the world, and there's a vibrant Slack community with thousands of people in it. And look, we stand on the shoulders of all of those foundational projects — GDAL and PROJ especially, rasterio in the Python realm, Xarray and Dask, and so many more. It's great to be able to do our work on the Open Data Cube in the open. You can go and have a look at our issues and pull requests and see us talking about things and making plans, chaotic as that is. But we also make sure that we feed changes upstream and fix things where they need to be fixed, rather than building workarounds elsewhere in the stack. So what do we do with this big, gigantic stack of data and Kubernetes shenanigans? We turn it into information, hopefully. One of the big products that we've worked on is something called the GeoMAD, or the geomedian with median absolute deviations. This is an annual summary of Sentinel-2. Sentinel-2 has a 10-metre pixel across all of Africa — that's a lot of pixels — with about 70 scenes captured for any location over a year. And to get a median pixel out of that, you need to load that entire stack into memory so you can find the middle. This process did that. So it's an annual summary of Sentinel-2 data over all of Africa, for four different years. In doing that processing, we consumed about 6,000 CPUs — spot instances on AWS — and about 30 terabytes of RAM, which is pretty fun. It results in this image here. You can still see some systematic errors in that Sentinel-2 data, and, if you can see my mouse, there's an area there, Equatorial Guinea, where some of the cloud still comes through after using the cloud mask to mask things off. That's one of the rainiest, cloudiest areas in the world. But still, it's a pretty fantastic effort, and you can zoom right in and see these at native resolution. I talked about Water Observations from Space. This is something that we recently ran, and just to rattle off some numbers: there are about a million Landsat 5, 7 and 8 scenes from 1984 through to today. We ran this process on about 300 spot instances in Cape Town on AWS, consuming about 5.5 terabytes of RAM and 2,000 CPUs. It takes about two minutes to process each scene, to put it in context, so that's like two million compute minutes; we were able to run the whole thing in about nine and a half hours. And the fantastic thing about AWS is that you can work out what your bill actually was: it cost us about 700 US dollars to do that. In doing this process, which loads four GeoTIFFs — four bands for each scene, so four times a million, four million GeoTIFFs — we actually found three corrupt TIFFs. We reported those back upstream to the USGS, they reprocessed them, we got them through and ran the algorithm on those. So that's a nice benefit there. This is what it looks like to run it: we've got a nice smooth ramping down of the messages, a bit of chaos here with in-flight SQS messages — so how many jobs are running at once — then how many instances we're running, and there's a dead-letter queue. When a job fails a couple of times, it goes onto that queue, we redrive those, and generally they work again. Sometimes there's congestion on the network, or on the compute cluster really, and some of the Kubernetes pods are killed. And this is an all-time summary of that product — so we've done every scene.
We can then add them all up into a count and say: this is the frequency of whether a pixel was wet. So the deep purple areas are always wet, and as it gets lighter, these are areas that are flooded sometimes. This analysis is then used further downstream to do more work. I'm running out of time, so I'm going to move a bit faster. We're also working on a continental crop mask. This one here is brighter where an area is more likely to be crop, and we can compare that to the Esri land use and say that we've got a lot more detail. We're just looking at one thing, whereas they've picked up a whole bunch of classes across all of the world — still, it's a nice comparison. I mentioned this problem a little bit earlier, around cloud masking not being great in some of the areas of the world, or of Africa, which are extremely cloudy. This is where the radar data can potentially be very valuable, in that the radar sees through clouds, so you can actually identify surface water, for example, even though it's covered in cloud and currently raining. Another big problem that I want to highlight is internet access. Vast areas of Africa have close enough to zero internet access, and so us providing petabytes of data for free is not that useful if there's no internet access. I'm not sure how to solve that problem. We are expanding our services: we've had 1,000 different people using our JupyterHub sandbox to do work, and we've just increased the sandbox to enable you to use four cores and 32 gigabytes for free, so you can do some pretty reasonable analysis on there. We've got 150 people that have gone through our training and are certified, so there's a whole bunch of people using our data and our tools to derive new knowledge. Over the next three years, Digital Earth Africa is transitioning to SANSA. As I said, we're transitioning the tech, management and governance. We really want to bring people on board all across Africa to get value out of the data — that's what it's there for. And we want to make sure that we are reliably keeping those datasets updated: Sentinel-1 and Sentinel-2 are updated as of yesterday, basically, and Landsat data takes a bit longer to process — it's about 10 days old, but it's always consistently at about that latency. Finally, I'll rush through some acknowledgments. Thank you so much to AWS for supporting the hosting of the data; they provide that to us for free, as ongoing support, which is amazing. Element 84 and AWS — we worked with them early to get the Sentinel-2 COGs sorted over in Oregon and available over Africa. The USGS has been a fantastic partner for a long time, making their Collection 2 data available earlier than to the rest of the world, as in, in provisional status, so we had that to do our early work with; and obviously we've now got their production data embedded. The Open Data Cube community — a huge range of people, they're fantastic; get involved, give it a try. GEO — Steve Ramage was at my talk at the FOSS4G in Dar es Salaam, where we talked a bit about this project starting up. And I want to acknowledge all of my team members for being fantastic and a heap of fun to work with. So if you want to get involved, check out our sandbox, check out the training. We've got a STAC API to explore all of our data. Get in touch with me if you want to know more. Thank you very much for your time. Excellent, thank you, Alex. Let me stop the screen share and I can focus on you. Okay.
We have a few questions. The first one has quite a few parts, but you already talked about this, so maybe you can just quickly answer again: what is the purpose of this data cube, what are the expected assets, and what are the funding sources? Yep. So I talked about funding: it's the Australian federal government and a US philanthropy, the Helmsley Charitable Trust. The purpose is really adding value to Africa by making this data readily and freely available. As an example, the Sentinel-1 data that we've produced is not available anywhere else in this form, as radiometrically terrain corrected data, as COGs, freely accessible over all of Africa, over almost as long as Sentinel-1 has been capturing these images. Making this data so readily available matters: there's this anecdote that if you're going to do a research project, about 80% of the time is spent organising data so you can do your analysis. We've done all that, and you've got a sandbox space where you can go and do your analysis. But the real why is that we want to make this available for the community, for business and for government, to get value and to make the world a better place. There's another question: are Africans receiving any training? Absolutely. When we say that we've got 150 people certified, almost all of those people are in Africa. And so we have, yep, sorry. Yeah, sorry, there's another question on where the funds will come from after the project finishes. That's an excellent question. We're working on that now. The initial project was running for three years, but we still have funding remaining, so it's continuing. And as I said, the hosting of the program management office is at SANSA, so one of the tasks we have now is to seek ongoing funding to continue to enhance and improve the platform. Okay, thank you. And the last question is: what are the advantages of developing your own Digital Earth instead of simply depending on Google Earth Engine? It's a great question. I think Google Earth Engine is an amazing platform, and it makes some things that, when you think about it, are incredibly complicated very easy to do. But it's difficult, though not impossible, to build a commercial platform on top of Google Earth Engine. And if you want to have sovereign capability, or have control over your environment and know that it's never going to be end-of-lifed, then maybe you want a platform that you build and own yourself. I think we see ourselves as a kind of peer to something like Google Earth Engine, although they've got an extensive catalogue of data and a lot of different things. Microsoft Planetary Computer is another alternative that is very strong; they've got some fantastic data and a fantastic STAC API, which I talked about earlier. Also, you get to use Python and the Open Data Cube in Digital Earth. I understand. And this is from my own experience: building your infrastructure on top of Terraform and Kubernetes means you can change cloud provider if needed, so you keep a little bit of control over governance there, yeah. Yeah. So Terraform theoretically can be cloud-provider agnostic, but in practice it's not so much. But if you've got a Kubernetes cluster, you can deploy the Open Data Cube Helm charts into there, and yeah, absolutely, we work with people working in Azure and, yeah, the Google cloud. Okay, thank you very much. We don't have any other questions on this, we don't see anything else, so we are perfectly on time.
Thank you. Thank you very much, Alex. And now we have three minutes until the next session, so see you soon.
In 2019 Geoscience Australia announced the creation of the ambitious Digital Earth Africa initiative, modelled on the rising success of Digital Earth Australia. The goal of the Digital Earth platforms is to make petabytes of Earth observation data freely available and accessible to inform policy, stimulate economic growth, and build a deeper understanding of our dynamic planet. This talk will describe how we’ve been building DE Africa and why. The Digital Earth platforms are built on open geospatial data and open source technologies. From the Open Data Cube and the growing library of Python based remote sensing algorithms to TerriaJS, Docker images and infrastructure as code, all our work is shared with the world as reusable, extensible and free open source software. This talk will delve into how: • the Open Data Cube works, • using Xarray and Jupyter Notebooks revolutionised Geoscience Australia’s approach to developing remote sensing applications, • community movements such as Open Geospatial Consortium standards, Cloud Optimized GeoTIFFs and Spatio Temporal Asset Catalog drive the open architecture behind the Digital Earth platforms, • how we’ve been using modern technologies and the cloud to handle working with large volumes of data. Hear the story behind the hype as we explore the past challenges, lessons learned, future opportunities and how you can get involved.
10.5446/57185 (DOI)
All right, so let's get started. My name is Peter Pokorny and I'm a developer working at MapTiler. I was lucky that I could start working a little bit on, and help with, forking Mapbox into the open source fork. I'm going to talk a little bit about this project, mostly from the technical side. The presentation has two parts: the first part focuses on a simple example that shows how we can build an application with MapLibre GL Native for Android, and the second part focuses more on the technical side of the project. Let's get into it. MapLibre GL Native is a library written in C++14. It's quite a huge code base, built with a bottom-up approach for multi-platform applications. The core library, written in C++, contains everything from the renderer abstraction on top of OpenGL to networking and logging; all of that is written in C++. On top of it there are wrappers for the native platforms, for example for iOS and Android, but also for Qt, for Node and so on. This is the library we are going to talk about. The example I want to show here is similar to the example we introduced in the workshop on Monday. It's a simple application for Android; I selected Android rather than iOS because I think it's more open source. The application just initializes the map control, loads the map style, then parses a GeoJSON file from the local assets and puts it on the map as a vector source and vector layer. It's a simple application. We don't have much time in this talk, so I'm not going to do any interactive demonstration; it will all be slides, but there is a link at the end of this presentation to GitHub, where you can find the source code and the tutorial for this. In order to build an application for Android, you need Android Studio, which is the official development tool from Google. It's built on top of the powerful IntelliJ editor, it comes with Gradle, the build system for Android projects, and it comes with all the SDKs for Android devices, emulators and all that. So this is what you want to use when you need to build a project for Android. An application for Android consists of the typical basic building blocks for Android applications, like activities, services and so on. I will focus only on what we need: in this example we will write one activity. You can imagine an activity as a piece of user interface, the screen basically. On Android, an activity can be launched from any application; you should think about it as an isolated piece of the interface that can be used from anywhere. So that's the screen, and what is on the screen is controlled by something called a layout. On Android, layouts are typically written as XML files and they basically organize all the buttons, controls and things on the screen. When you think about it, it's similar to, for example, flex layout on the web, but there are many, many layout types for Android. And then, of course, there is the library itself, the MapLibre SDK, and the application you will write. There is also something called the application manifest, which is a list of metadata for your application; for example, it contains the marketing name, the version and so on. So to start, you just create a new project in Android Studio and choose a simple activity template.
There are some options where you can control which devices you want to be compatible with, that is, which API levels to support; that part is obvious. The second thing you do, after Android Studio generates the project, is install the SDK. Installing the SDK consists only of editing the two Gradle files. These files are for the Gradle build system, and once you add something there, Android Studio knows that you want an SDK or some library. So here you just add the Maven Central repository (it's typically there already) and then you add the implementation dependency, MapLibre GL Android, in the latest version. Pretty simple. Then you synchronize these changes and you are ready to start developing. The layout we will use is very, very simple: it's just a simple constraint layout, and you can see that there is really almost nothing in it. The most important part for us is the MapView control from the Mapbox SDK maps package; we name it mapView and we make it full screen, that's all. Next we write the source code which defines how the application will behave. In activities on Android you are basically handling the lifecycle events of the activity: it has events like onCreate when it is created, onPause when it goes into the background, and so on. At this point it is important to set up the map control and show it on the screen, so we override the onCreate method of the activity. We read the API key from the manifest; you can see the code for this in the tutorial linked from the presentation. Then we initialize the SDK: here we are passing in the API key and we are telling the SDK that we want to use MapTiler. On the next line we, as they call it on Android, inflate the layout into the activity; it basically parses the XML document and puts the controls from it onto the screen. Then we create the MapView: we get the reference to the control, and these one or two lines here are required by the SDK. Then we get the reference to the map, we set the style (this is the line which loads the Streets base map into the map view control) and we set the camera position to point to Buenos Aires. You might notice that there is a bunch of callbacks, lambdas, here, so that we are not blocking the UI thread; that's the common way to do things on Android, and it keeps the application responsive. So that's standard stuff which anybody would write this way. The next thing I'm going to talk about is how to add GeoJSON from a local file on the device, bundled in the assets. We will use something called an AsyncTask on Android, but I encapsulated everything in a single class so that it's easy to understand. So this line basically uses that class, creates the async task and then executes it. An AsyncTask is one of many ways on Android to execute code asynchronously in the background. We use a custom class called GeoJsonLoader. Its base class is AsyncTask, which has generic arguments specifying what the task operates on, and we implement two methods: one is doInBackground and the second is onPostExecute. Before going through those two methods, a condensed sketch of the setup steps described so far is shown below.
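What follows is a minimal Kotlin sketch of the setup steps just described, not the exact code from the talk or the linked tutorial. The Gradle coordinates, the WellKnownTileServer initialisation overload, the MapTiler style URL and the R.layout/R.id names are assumptions based on the 9.x-era MapLibre GL Native Android artifact (which still used the com.mapbox.mapboxsdk package names) and may differ in newer releases, so check the MapLibre documentation and samples before copying anything.

```kotlin
// build.gradle (app module) -- assumed Maven Central coordinates, pick the current version:
//   implementation 'org.maplibre.gl:android-sdk:<latest-version>'

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.mapbox.mapboxsdk.Mapbox
import com.mapbox.mapboxsdk.WellKnownTileServer
import com.mapbox.mapboxsdk.camera.CameraPosition
import com.mapbox.mapboxsdk.geometry.LatLng
import com.mapbox.mapboxsdk.maps.MapView
import com.mapbox.mapboxsdk.maps.MapboxMap
import com.mapbox.mapboxsdk.maps.Style

class MainActivity : AppCompatActivity() {

    private lateinit var mapView: MapView
    var mapboxMap: MapboxMap? = null   // kept so the GeoJSON loader sketched later can reach the map

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // The tutorial reads the key from the manifest; it is hard-coded here only for brevity.
        val apiKey = "YOUR_MAPTILER_KEY"

        // Initialise the SDK before inflating any layout that contains a MapView.
        // The WellKnownTileServer overload is the MapLibre addition mentioned in the talk (assumed name).
        Mapbox.getInstance(this, apiKey, WellKnownTileServer.MapTiler)

        // Inflate the layout that contains the full-screen MapView (hypothetical ids).
        setContentView(R.layout.activity_main)
        mapView = findViewById(R.id.mapView)
        mapView.onCreate(savedInstanceState)   // required lifecycle forwarding

        // Everything below runs in callbacks (lambdas), so the UI thread is never blocked.
        mapView.getMapAsync { map ->
            mapboxMap = map
            map.setStyle(
                Style.Builder().fromUri(
                    "https://api.maptiler.com/maps/streets/style.json?key=$apiKey"
                )
            ) {
                // Style loaded: point the camera at Buenos Aires.
                map.cameraPosition = CameraPosition.Builder()
                    .target(LatLng(-34.6037, -58.3816))
                    .zoom(11.0)
                    .build()
                // GeoJsonLoader(this@MainActivity).execute("lines.geojson")  // see the loader sketch further below
            }
        }
    }
}
```

In a real application you would also forward the remaining MapView lifecycle calls (onStart, onResume, onPause, onStop, onLowMemory, onSaveInstanceState, onDestroy) from the activity, as the official samples do. With that setup in place, back to the two AsyncTask methods.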
The doInBackground method runs on a worker thread, and there we implement the code for reading the GeoJSON from the local file, parsing it and creating a feature collection. Then there is the onPostExecute method, which the Android operating system runs on the UI thread, and in this method we implement the code for adding the actual layer and source to the map. You may notice that there is a weak reference here, the WeakReference type. It is a reference which is not protected from being garbage collected, because on Android, when the user rotates the screen, when the application goes into the background or when there is memory pressure, the operating system can dispose of the activity. Since we run some code in the background on the worker thread and then come back needing to access the activity, it might already be disposed, and the weak reference is a way to check whether it has been disposed or not; basically this line here. So that is the skeleton of the class which helps us read the GeoJSON from the local assets, and the rest is easy: we open the stream from the assets on the device, use Kotlin and the SDK's GeoJSON classes to read and parse the data, and once we have the feature collection we add the source to the style and add the layer, which links back to the source and tells the SDK how to render the geometries from this source. And that is it. The application is pretty simple: it just shows a basic map with the lines rendered on top of it. So that is an illustration of how you can build an application using this SDK (a condensed sketch of this loader class appears a little further below). And now let's talk a little bit more about the SDK itself. Petr Pridal already talked about the motivation for this project and the need to do the open source fork, so here is the technical point of view. Mapbox had three repositories on GitHub: one with the native library and then two with the SDKs, one for iOS and one for Android. What we did is merge these three into a single repository. We implemented the CI/CD using GitHub Actions, and we updated some things, for example the iOS build, which now uses the latest version of the Clang compiler. We removed the telemetry, we removed the hard-coded configuration and replaced it with an API that allows you to configure the backend tile server and choose which one you want to use. We also made it possible to distribute the binaries through XCFramework on iOS and through Maven Central for the Android packages. We also migrated several plugins which were provided for other bindings, like React Native and so on. So these are more like infrastructure changes, and one thing which is still underway, as I mentioned already, is implementing Metal support on iOS devices. To summarize the differences between MapLibre and Mapbox: MapLibre has an open source license; as I mentioned, Metal support on iOS is underway; MapLibre doesn't have all the latest 3D maps, although it can render RGB-encoded terrain and things like that, just not the full 3D; it doesn't support all the latest additions to the style specification; and there is no tracking telemetry. That is the one-slide summary of the differences. For related information, you can learn more about MapLibre if you visit maplibre.org, and you can find a documentation site on MapTiler.
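Before pointing to the official samples, here is a minimal Kotlin sketch of the GeoJSON loader class described above: a hypothetical GeoJsonLoader, not the code from the linked tutorial. The asset file name, source/layer ids and styling properties are invented for illustration, the mapboxMap property it reads is the one assumed in the earlier sketch, and AsyncTask itself has since been deprecated on Android, so a coroutine or executor would be the modern equivalent.

```kotlin
import android.os.AsyncTask
import com.mapbox.geojson.FeatureCollection
import com.mapbox.mapboxsdk.maps.Style
import com.mapbox.mapboxsdk.style.layers.LineLayer
import com.mapbox.mapboxsdk.style.layers.PropertyFactory.lineColor
import com.mapbox.mapboxsdk.style.layers.PropertyFactory.lineWidth
import com.mapbox.mapboxsdk.style.sources.GeoJsonSource
import java.lang.ref.WeakReference

// Reads a GeoJSON file from the app assets on a worker thread and, back on the
// UI thread, adds it to the loaded style as a source plus a line layer.
class GeoJsonLoader(activity: MainActivity) :
    AsyncTask<String, Void, FeatureCollection?>() {

    // Weak reference: the activity may be disposed (rotation, memory pressure)
    // while the background work runs, so the task must not keep it alive itself.
    private val activityRef = WeakReference(activity)

    override fun doInBackground(vararg params: String): FeatureCollection? {
        val activity = activityRef.get() ?: return null
        val fileName = params.firstOrNull() ?: "lines.geojson"   // hypothetical asset name
        val json = activity.assets.open(fileName)
            .bufferedReader()
            .use { it.readText() }
        return FeatureCollection.fromJson(json)
    }

    override fun onPostExecute(result: FeatureCollection?) {
        val activity = activityRef.get() ?: return   // the activity may already be gone
        val collection = result ?: return
        activity.mapboxMap?.getStyle { style: Style ->
            style.addSource(GeoJsonSource("local-geojson-source", collection))
            style.addLayer(
                LineLayer("local-geojson-layer", "local-geojson-source")
                    .withProperties(
                        lineColor("#ff5500"),
                        lineWidth(2.0f)
                    )
            )
        }
    }
}

// Usage from the activity once the style has loaded:
//   GeoJsonLoader(this).execute("lines.geojson")
```

Exposing the MapboxMap from the activity like this is just the simplest wiring for a sketch; the official samples mentioned next are the better reference for production code.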
You can find samples specifically for Android and iOS, and there is a GeoJSON sample which you might want to check out if you would like to implement something similar to what I presented here. All these tutorials have links to GitHub where you can download the source code and run it in your development environment. There is also a project called Awesome MapLibre which tries to collect all the interesting projects related to MapLibre, so check it out; there is additional interesting information there. Most important from my side, just a big thanks to all the people who worked on this library, right up to the last hours, as you will definitely see if you check the commit history. So big thanks to the Mapbox developers, and big thanks to all the contributors who have helped the project since we made the fork. This list was compiled 14 days ago, so if somebody stepped in in the meantime, I apologize, but thanks again for helping. So that's all I wanted to talk about; feel free to make some noise in the chat if you have a question. Thank you, Peter, for the presentation. We have one question that I see: are there any apps on the Play Store or App Store using MapLibre today, or coming soon? Yeah, good question. There are applications which are using this SDK. MapTiler has four applications, two for iOS and two for Android, one for MBTiles and one for mobile maps, and the applications are free, so you can download them, look around and play with them. But I also know about other developers who are already using MapLibre in their applications. Thank you. One question: what is the purpose of the API key when initiating your project? The key lets you configure and tell the SDK which server you want to use for getting the maps, styles, glyphs and all of this. We put four options into the SDK for now, but it's open and more can be added; right now there is Mapbox, MapLibre and MapTiler, actually three. MapLibre doesn't need any key, and there is one map which you can use, a simple, kind of political map; Mapbox and MapTiler need a key. You don't have to initialize it this way, though: you can just load the map style from your own URL and embed the key in it if it is needed by the server. Okay, thank you. And one more question: is it possible to support offline vector tiles on the device storage? Yeah, it's possible. Mapbox implemented offline caching, so that's one option; you can just use it, and this solution basically makes it possible to cache all the assets which come from the server, not only the vector tiles but also styles, glyphs and sprites. The second option is to use what is called a file source, an MBTiles file source, which allows you to put vector tiles in an MBTiles file on the device and then serve the tiles straight from that file on the device. Okay, thank you. Checking for last questions, last ideas. I think that was it, so thank you very much for your time and your presentation, and I hope you have a great FOSS4G 2021. So thank you very much. Thank you for giving us the chance to talk about it here. It's a great event. So thank you. And in a few minutes, we'll come back with our last presentation for this session. Thank you.
Want to learn how to build applications with vector maps for iOS or Android? Looking for more information about MapLibre? Confused about how MapLibre differs from Mapbox? We will explain all that and show how you can add MapLibre and enrich your application by high quality vector maps and custom overlays. We will present the state of the project, the roadmap. We will explain how the MapLibre was forked from Mapbox and how it is maintained. At the end of this talk you should have all you need to get started building mobile apps with MapLibre. Agenda: Simple map application use case, Building the application, Prerequisites, Building app for Android using MapLibre GL Native, Kotlin and Android Studio, Building app for iOS using MapLibre GL Native, Swift and XCode, The origin of MapLibre - fork setup, versions, how it differs from Mapbox, The state of MapLibre project, roadmap, bindings (Flutter, React Native), More sample code on GitHub, Q/A
10.5446/57186 (DOI)
We were out for two or three minutes, waiting for the next presenter. For the next presentation, the next topic in the session, we have a speaker I'd like to introduce: a geospatial professional who loves the intersection of spatial mapping, open source geospatial and open data to solve global challenges. He works in the cartography team of a ride-hailing company in Indonesia, and he helped form, and is an active member of, the QGIS Indonesia community. I'm hoping that the cartography team is watching us; I just want to welcome them. Welcome. Hi, yes. Thank you, Jan. So, yeah, hello everyone. My name is Ignore. Good evening; it's now 9 pm Jakarta time, and I live in Jakarta. If there is some issue with my audio, feel free to let me know, because there's quite heavy rain right now in Jakarta. Okay, I'll start sharing my presentation. Excuse me a second. Okay, I hope everyone can see. Okay, yes. So, again, my name is Ignore, and on behalf of my colleagues Ismail Sunni and Adi Kurniawan, I'd like to share how we built an open source community during this pandemic time. We are from Indonesia, and I believe most of you have heard about Indonesia: we are an island country located in Southeast Asia. In 2020 we initiated the QGIS Indonesia user group community, known as QGIS ID for short. QGIS has become quite popular in Indonesia, I would say for the last six to seven years, and I myself have been using QGIS since version 1.6 and actively using it as my daily driver since version 2.12. So in this session I'd like to share how we initiated this community, what the challenges were, what progress we have made, and some ideas on our long-term plan. Okay, let's go to the next slide. I'd like to start with the background of why we initiated this community. Based on our observations, we saw that many Indonesians discuss QGIS on social media like Facebook and Twitter, or in WhatsApp groups, Telegram groups and so on. But these are informal discussions between colleagues, co-workers, friends, college students, things like that. From those findings, it seemed that a lot of people were using QGIS to process geospatial data, so we were thinking about how we could capture this. We were curious how many people exactly use QGIS, what their use cases are, how they have been doing it, whether they have experience reporting bugs or contributing to the community, and how they find solutions to their own problems. So we rolled out a simple online survey to people across Indonesia to get a better understanding, and based on that we tried to initiate the community, to facilitate sharing and discussions and to create regular events around QGIS as one of the geospatial tools. So in 2019 we rolled out the survey, and here is a quick summary to give everyone context. As you can see, we received 316 submissions, and after data cleaning to remove duplicates, so we are not double counting, it turned out that only 302 submissions were valid. As you can see from the distribution, most of the respondents to our survey are located on, or live on, Java Island, the one with the darker blue colour in the southern part of Indonesia; that whole island is Java.
We also had respondents living in Sumatra, Kalimantan, Sulawesi, Bali and Papua, the big islands of Indonesia, but most of the respondents are located on Java. Next slide. This is the first question we asked the community: we wanted to know how many people use QGIS as their main or primary GIS software. It turned out to be only 35.8% of the respondents at that time; we ran the survey in April 2019. The largest share, 39.6%, use a mixture of QGIS and other GIS software or tools, and the other 34.7% do not use QGIS as their main software. Okay, the next thing we wanted to know is how long these people had already been using QGIS, and most of the respondents said they had been using QGIS for less than one year at that time. Only a small percentage, 11.1%, had been using QGIS for more than five years. So most of the users can be categorised as new QGIS users. We also wanted to know how they use QGIS itself, and it turned out most people use QGIS to create maps, which of course is what QGIS is basically for: doing the georeferencing, drawing the maps, running queries on the attribute table, creating layouts, doing styling and symbology, and so on, and producing the maps themselves. The other thing is, and here I'd like to echo what Torsten mentioned previously about the contribution guidelines, that open source fully relies on contributions from its community, from its users, right? So we also wanted to capture how people report bugs or escalate issues if they find errors while they are using QGIS. We asked, "Have you ever reported a bug to the QGIS community?", and it turned out that most people said they had never reported a bug to the QGIS project. So how do they solve their own problems whenever they find an issue or error while using QGIS? It turns out that many people tend to search on Google or read blogs from other people, and the other 23.4% tend to ask their coworkers or friends; they just say to their friends, "Hey, I found this error, do you know how to solve it?", something like that. Okay, then based on this data we tried to create the community. We contacted the QGIS Project Steering Committee, made some enquiries, shared our ideas ("Hey, we have this data, we have this survey") and applied to form the QGIS Indonesia user group for the first time. Then, on February 29, 2020, which is kind of a unique day because you don't have a February 29 every year, we had our first meetup. It was held in Yogyakarta, a city in central Java. For this first event, this first meetup, we collaborated and partnered with Gadjah Mada University in Yogyakarta; with the students there, the geography students, we were able to hold our first meetup.
We had some sharing sessions about QGIS use cases, so many use cases, from the analysis side, to integrating QGIS with other open source tools, to QGIS use cases for village administration, things like that. We also had technical sessions: my colleague Ismail Sunni presented how to report a bug to the QGIS project, and Adi Kurniawan shared how you can contribute to the QGIS project as a translator, translating it into Indonesian. We also shared how to work efficiently with the QGIS graphical modeler, plus some basic QGIS training, things like that. So that was our first meetup. And then, just about two days after we held that first QGIS Indonesia meetup, our government announced the first case of COVID-19 in Indonesia. I believe most of us have experienced and are facing this pandemic; it's such a hard time. After this event, more and more cases were recorded every day in Indonesia, as you can see from this graphic. Our government released policies and restrictions to limit face-to-face or in-person meetups. As you can see, we had our first wave around February and March in Indonesia, and then we had this huge spike, what we call the second wave, somewhere around June, which was the Delta variant of COVID spreading in Indonesia at the time. So what did we do? We started thinking differently and took a different approach to community events, because at the first meetup we had already planned some events and ideas and even put together a rough timeline: okay, we will do regular sharing events and collaborate with each other. But then COVID came. So the one thing we could do was to do everything online, "di rumah aja", staying at home, doing all the things from home to stay safe. We held a couple of initiatives: online talks and competitions. We had four online talks, roughly one every quarter. These were a chance for QGIS Indonesia members to share their ideas and the work they are doing with QGIS. Most of the resource persons, most of the speakers, were Indonesian, but we also found that these online talks created a new opportunity for us to have speakers from other parts of the world. We are very thankful to Etienne, the creator of the QuickOSM plugin, and to Saber, the creator of the Input app, who spent their time sharing with the QGIS Indonesia community about the Input app, the Lizmap plugin and their integrations with QGIS; this was really good for us. We also supported the government's stay-at-home campaign: we created some small competitions for people to share the maps they were making with QGIS, and then we could send gifts or merchandise to the participants. Those are a couple of the activities we ran during this pandemic time to grow the community and engage people to join, share and collaborate. And then, yeah, we opened up how QGIS Indonesia connects to the online world. We are trying not to be conservative here; we opened multiple channels to communicate with the QGIS Indonesia community.
So we have Instagram, we have Twitter, we have Facebook, we have a website on GitHub Pages, we have a blog on WordPress, and we also have a YouTube channel. Interestingly, organically, Telegram is the most active one. I would say people can most easily interact with each other on Telegram: almost every day there is a topic, a bug or issue someone found, or questions from other members, and there are discussions there almost every day. I think many Indonesians find Telegram the easiest way to connect with others because of its functionality; there is quite a good amount of data you can share, things like that. So we are trying to open all the channels so people can interact with us easily. These are our channels, so if you are curious about how things are going in QGIS Indonesia, feel free to follow us on Instagram, Telegram, Twitter, Facebook and so on. We also have the YouTube channel, but again, since this is QGIS Indonesia, most of the time we are using Bahasa Indonesia, so you can use Google Translate if you want, or you can learn Bahasa, for example. Okay, the next part: we have QGIS Indonesia merchandise. Why did we create this? It's actually just to build good engagement with people and also to open up opportunities for people to give us donations, because all of our events, all the online talks and competitions that I shared previously, are free; we don't take any payment, and people can join freely. So the way we do it is we sell this merchandise, the t-shirts, tumblers, water bottles, stickers, things like that, so people can donate through those sales. We also provide transparency through financial statements published on our website: each time we have an event where we sell merchandise, we release the financial statement or financial report for that event, and everyone can access it. We are pushing the value of transparency in this community. And then, because one of the big benefits of a community is that we can do so much collaboration with others, we are collaborating with other institutions like BNPB, the government institution that is the Indonesian National Board for Disaster Management; Perludem, a non-governmental organization focused on public elections; HOT OSM Indonesia and Perkumpulan OSM Indonesia; and SinauGIS, a local GIS training and service provider. These collaborations mostly involve actively using GIS to spread the news about what is going on and about how the community can help support the activities on the ground. We are also collaborating with other open source communities, like OSGeo Indonesia; GIS Indonesia, one of the largest geographic information system communities in Indonesia; and Tropics Info, a local environment community working on cross-cutting issues related to geospatial. I would say that in Indonesia the community around open source tools, not only in the geospatial domain but in others too, is now really, really growing.
So, yeah, it creates a good time and real benefit for each of us to collaborate with each other, because collaboration is a very good way to share ideas, build something, build a network, and maybe create new innovations. We also collaborated with Pramuka, the big Scout organization in Indonesia. We partnered with the United Nations OCHA to train humanitarian organizations to utilize GIS; we provided some technical expertise about GIS there. The way it works is that UN OCHA contacted us and we announced to the community, "Hey, we have this opportunity, who wants to join as a speaker to share their knowledge about GIS?". And we have also been collaborating and discussing with universities in Indonesia that are actually using open source geospatial tools. So we are always open to collaboration; maybe after this event, if someone or another community from outside Indonesia wants to collaborate with us, feel free to reach out to us. This slide shows some of the other collaborations we had with other communities in Indonesia: we also ran a couple of data analysis workshops, about working with satellite imagery using QGIS, processing drone data, and other topics. So that is what we did during 2020 and 2021. The next slide is about the challenges that we face. We have a couple of challenges in building this open source community. We always want to encourage members to be more proactive in sharing their ideas and knowledge, and we also encourage people to initiate local chapters in their cities, because, as you can see, Indonesia is quite a huge country; we have so many cities, we have 36 provinces, so there are a lot of opportunities for members to create their own local chapters in their cities, maybe just sitting together in a local coffee shop, after the pandemic of course. We are also trying to provide regular updates and communication on the latest news from the QGIS Project Steering Committee, about new plugins, events, and how to report bugs or issues, to contribute more to the QGIS project itself. And the third point: the COVID-19 numbers are still worrying, especially in my country, Indonesia, so we fully rely on online events, since it's hard to hold face-to-face or in-person ones. I think almost all of us are hoping that this pandemic ends soon, so we can do new activities together again. Okay, so last but not least, we would like to share what we believe: a community is about the people, right? It starts from the people, by the people and for the people. We still have a long-term plan to turn this community into a proper organization following the guidelines in Indonesia, but right now we are engaging people to be more proactive, to share ideas and to collaborate. That's why we are trying to hold these events, to provide a space to share ideas, ask questions and collaborate more. Okay, so I'd like to end my slides by saying terima kasih, which is "thank you" in Bahasa. That's a little about how we built this community during this pandemic time. After this, maybe we can have the Q&A session, and if you want to know more about us, want to collaborate, or want to ask about any opportunities, maybe you live in Indonesia or you visit Indonesia, feel free to reach us on our social media. I think that's all for the slides.
Okay, terima kasih. Now, we have a little time left for questions, but I'm just going to read them out and maybe you can quickly try to answer all of them. So the most voted question is: how did you target the audience for your surveys? Are you an unofficial community, or are you registered as a legal body, like an NGO, etc.? And how did the community expand during the pandemic, and by what percent? Okay, for the first question, on how we spread the survey and targeted the people to answer it: we contacted our colleagues from work and a wide variety of businesses, private companies, government institutions, non-governmental organizations and even several universities. So we covered professional GIS people, students and government institutions. We spread the news through social media and through the colleagues and networks that we have; we know people who work in a wide variety of businesses, and that really helped us share the survey. Okay, the second question, regarding the legal organization: until today we have not formed this organization as a legal entity under Indonesian law. It's still a basic community, I would say, but in the long term, because we have several requirements to fulfil to become a legal organization in Indonesia, we will submit this to the government of Indonesia so we can be legal under the law. Again, this is kind of a blocker, and the challenge is due to this pandemic time. We have already discussed with some other people how we can scale this community up under Indonesian law, so until today we have not become a legal organization, just a community in Indonesia, but in the long term, I believe in the next one or two years, we will submit this to become a legal entity. And can you repeat the third question? So, how did the community expand during the pandemic, and by what percent? And also one extra question: I noticed you held the first meetup at a university. Are most universities in Indonesia using QGIS, or is it about the same usage, 35%, as in other places? Okay, so, roughly, when we started the Telegram group, I think it was just about 60 to 100 people when we started back in February or March, and right now it's almost 2,000 people who have joined the Telegram group. So that's kind of big, but for the active users, I think we can say that roughly 200 to 500 people are active in the Telegram group. There are some individuals who just ask one question and never show up again, something like that, but that's the nature of a community. And then about the universities and QGIS usage in universities: yes, I would say most of the universities in Indonesia are actively using QGIS. It's kind of a mixture; there are some universities that fully rely on QGIS, but as far as I know there are also other universities that use a mixture of QGIS and other geospatial tools, both free and open source ones and also paid ones. Okay, thank you. Just in time. So thanks a lot for all the questions and the presentation. If you have any further questions for the speaker, please get in touch with him through the conference platform. We are going to continue with Felicita Baros, and the title of the presentation will be Data Journalism.
Building an open-source community is already a huge effort, and the COVID-19 pandemic made this even harder. We started the QGIS Indonesia community, then called QGIS ID, an Indonesian QGIS user group, while trying to overcome the pandemic. We managed to have one big meet-up before any social meeting was prohibited. As a new entity, we needed to establish our presence through a couple of activities. Within these limitations, we had to think more innovatively and look for ways to keep this community active. We did not want the pandemic to dampen our enthusiasm for developing the newly formed community; one of the keys to dealing with tough situations is adaptation. Given the restrictions on meeting physically, we planned events that all participants could join from anywhere, even without needing to leave their house. In this talk, we describe how we built the QGIS ID community by creating several online events, and the challenges that we faced. We want to share what we did, and hopefully it can be an inspiration for other open-source communities. We will also share our strategy for running the community without too much administration, because we believe the community is the people and the other things can be done later. We held our first meetup in Yogyakarta right before COVID-19 spread in Indonesia; 60-80 participants joined the event. Initially, we had planned another event for 2020 in collaboration with other GIS/geospatial communities. However, the Indonesian government applied a physical distancing policy limiting in-person events from March 2020. Within these limitations, we needed to think more innovatively and look for ways to keep the community active. We started with a Telegram group and other social media to share our activities and host discussions between members. Besides that, we created online sharing events with presenters from Indonesia and abroad (the good thing about an online meeting). QGIS ID is also becoming more and more popular: we have almost 2000 members in the Telegram group. We also collaborate with other organisations on sharing sessions, training and projects. Last but not least, we created the QGIS ID "Di Rumah Aja" (QGIS Indonesia "stay at home") online contest about what people do with QGIS during the pandemic. All of the activities are free of charge; all expenses are funded by selling QGIS ID merchandise (t-shirts and tumblers). It's a good way to raise funds when you do not want to charge for membership. Besides that, we also want to share the challenges that we faced, for example the characteristics of our community, which made a mailing list an obsolete thing, or how difficult it is to get new volunteers to join the community.
10.5446/57187 (DOI)
Okay, Rob, we can start if you are ready for the presentation. Yeah, great. Can you hear me okay? Yeah, I can hear you okay. Perfect. Well, thank you very much. Welcome to this presentation. My name is Rob. I am a program manager with New Light Technologies. Wes Richaudet is a software architect with New Light. He is unable to be here today, but contributed significantly to this presentation. And we're going to talk about the importance of serverless designs and technologies for use in building geospatial applications. I want to say that I am not a software developer. I did come from the geospatial data science track and evolved into the enterprise IT side of the field and then eventually into project management and so forth. So I worked with software of all kinds, but I cannot do all the kinds of things that you all are probably better at. And I have become passionate about some of the techniques that you're using and how they can be applied to geographers and the building of geospatial applications, which I think is why we're all interested in this conference and the subject. So keep that in mind. I'm not an expert in all these elements, but I think they're very important for what we do and really changing the world. So the agenda here will be to first look at the problem with legacy geospatial architectures, then discuss what is serverless and why does it matter. And then we'll show several different case studies of where we have applied different kinds of serverless designs for building different geospatial kinds of applications for web and for data translation and data science and so forth that I think are very interesting and illuminating. And then we'll conclude with some remarks and ideas for future directions and where we see this evolution going. So just a little more about background. We work for New Light Technologies based in Washington, D.C. for a broad-based IT consultancy across cloud and software development, data science, geospatial research and other kinds of services. The firm has been around for over 20 years working for different kinds of organizations, state and local government, federal government agencies, commercial organizations, nonprofits, both domestic and international. And we do a lot of work across technology platforms, so we're quite agnostic and so that's given us a lot of experience with both open source and commercial systems and how they have to be integrated to develop new solutions. And I guess from this diversity of experience with different computing environments and technologies, I think we have developed some insights into best practices for how this can be done and especially I guess we're excited about some of the serverless cloud native approaches to doing this which we think can be taken advantage of. All right. So we can start with a discussion of the limitations with traditional architectures compared to the newer cloud native designs. The basic problem here is that traditional architectures are still being used by many, many geospatial kinds of applications and operations across the world. And the problem here is that those architectures tend to limit flexibility, scalability, interoperability and require a ton of maintenance and are very costly for that reason. And so this picture sort of gives that impression there of, you know, it takes a lot of manual intervention and ongoing care and feeding to keep these things alive. And, you know, that limits our ability to build and deliver and do great things. So this is certainly the case. 
A good example here is with, you know, building, you know, what should be fairly simple web applications, geospatial web applications. Traditional architectures are overly complicated, oversized and so forth. Traditional architectures require multiple machines for, you know, database application, web servers and so forth. You know, require licenses, patching, operations and maintenance and so forth. These kinds of configurations generally have a lot of interconnections and dependencies making, you know, patching, upgrades, rollbacks, integrations with other systems and that kind of stuff, very tricky. And of course, ultimately, this is very expensive. So, you know, this is a problem. It's not just limited to web applications, though. You know, traditional architectures are hindering many kinds of common workflows that we have in this industry for, you know, ingesting, processing, analyzing, translating, disseminating geographic data as well. So, you know, these kinds of workflows often require still a lot of desktop processes, manual handling of data, conversions in many different formats. This limits also the kinds of programming languages and modeling tools that can be accommodated. And these kinds of workflows are very difficult to scale as well. So, you know, both with web applications as well as just, you know, under the hood kinds of workflows, traditional architectures are holding, you know, geographers back, if you will. So the emergence of serverless and cloud native technologies and approaches in recent years are new ways to reimagine solutions to overcome these problems that are presented by traditional architectures. So serverless, I'm sure many of you know that it's a term thrown around a lot, but basically, for my purposes, I'm viewing it as, you know, any architecture, computing architecture here that would provide, you know, servers as software on demand, essentially. So, you know, the virtualization of servers has advanced steadily over the years, and particularly with the growth of the commercial cloud providers. This slide here helps to depict this evolution, and it shows, you know, the physical servers which need to be intensively maintained on the left, evolving into VMs and containers over time to serverless architectures on the far right, where all the hardware and operating system and increasingly server-side software components, like databases and web servers and so forth, are provided as native services of the cloud itself, requiring little to no management at all. So serverless functionalities are exploding across the cloud providers and offer opportunities, I think, to truly develop object-oriented architectures for applications as well as interconnected systems, something that I think can really benefit geospatial developers and users. And so, I've added one component on this slide here on the far right, showing that sort of system of systems approach that serverless technologies enable in ways that traditional architectures do not. So, you know, some of the benefits pretty well known, but certainly the ability to programmatically deploy infrastructure as code and automate things is enhanced by serverless approaches. Resources can be consumed only at run time, so this means you're not using stuff when you don't need to. There's the cost, there's a lot less maintenance, no patching, very little management and so forth. There's a wider variety of run times that can be deployed using many of these than with traditional or monolithic application architectures. 
And this last point is the one I'm really focused on here for geographers: the ability to have really fine-grained control over workflows and over distributing jobs. That's something that we have not had, and I think we're starting to have it, and that's very exciting. All right, so let's take a look at some case studies. I'm going to present a couple of examples from our work where cloud-native designs have improved or transformed applications and workflows for geospatial professionals and organizations. The first example here is a web application that we developed and host for a state health agency in the U.S. that needed a tool to help with facility management and planning during hurricanes. The application reads in live hurricane feeds and provides spatial analysis tools to determine where there are threatened facilities and populations. Here we used a fully serverless architecture along with open source libraries to serve both the application and the data without any traditional servers or operating systems at all, making this really low cost to host. What happens is that we push the processing and the rendering of the map tiles and so forth to the browser, and we can cache that using a content delivery network. So this is very lightweight and fast and extremely inexpensive to host compared to how this kind of application would have been built using traditional architectures. It's a great example of using a serverless design for web applications. The next example is a solution we built for a federal agency to dynamically translate and disseminate geospatial data services. Here the application reads data from various API and database endpoints and then automatically converts that data to different formats that can be integrated into interagency common operating pictures, ArcGIS Online and other kinds of applications. In this case, using AWS EventBridge and Lambdas and so forth, the application runs only when data is updated, and it handles near-real-time data updates without servers or third-party software licenses for some of the specialized software that does that kind of thing. It's a great example of how a serverless pattern can be used to simplify and streamline a very common geospatial workflow. Here we have another example, where we developed a serverless and open source solution for converting climate-related imagery from various endpoints: the data existed in NetCDF format, and this solution converts it to Cloud Optimized GeoTIFFs with metadata conforming to the SpatioTemporal Asset Catalog standard, or STAC. This entire workflow is again automated, can handle dynamic data, and is a great example of how a serverless design can streamline a workflow to improve interoperability as well, something that's very important in the geospatial industry. Okay, in this example, here we built a platform (actually, we're still building this, it's a work in progress), a platform to support federal disaster response analytical workflows. The customer here requires the flexibility to run many different kinds of models that predict risk and prioritize operations, assess damages after a storm or an incident, and monitor community recovery. Models can come from different agencies, national labs, universities and so forth, can be written in different languages, and can use different kinds of geographic data formats.
So the customer needs to be able to quickly integrate both data and models during an incident and run various simulations and then output consumable data services that can be integrated into different tools and the requirements are changing all the time based on incidents, the demand, how many simulations and so forth. So this is a very challenging to do using traditional architectures as you can imagine. So here we developed an open platform leveraging a serverless design pattern that enables, you know, data scientists to do everything in one place, they can build, they can test, they can schedule, deploy and monitor different runs of their models and workflows. This is a solution that's therefore really flexible. It enables them to integrate different languages and models built in different kinds of languages and it makes it really easy for, you know, the geospatial and data scientists to focus on developing better models and analytical insight rather than, you know, the IT and the system to do a run of their model. In this case, we utilized a prefect system for workflow and are building this in AWS but this is capable of expanding to handle massive workflows, workloads based on demand. So there's a few examples that give you ways, you know, cloud native serverless architectures are being used to streamline geospatial kinds of workflows and make development and hosting of web applications easier and more robust and lower cost. Some of our conclusions and takeaways here, you know, because serverless technologies enable object oriented microservices designs, we think this has the opportunity to improve scalability not only of systems but the teams who build them and rely on them. So serverless architectures have a ton of benefits here for geospatial industry and for building geospatial applications and we think that, you know, this hasn't been leveraged enough thus far and so we're hoping more organizations will realize these benefits, flexibility, being able to compose and orchestrate workflows more easily, lower costs and so forth. That essentially, you know, the outcome of this is that, you know, geographers can be geographers rather than IT experts. IT does require learning some new tools and re-architecting systems and workflows that may exist today or in legacy configurations to new technology but, you know, the benefits we think are fantastic and again that this will really make geographers more impactful. They'll be able to produce more maps, more data, more insights about the world which is what they're there for. Some of the future directions, you know, again, we think that this serverless enables the microservices approach and that that allows, you know, not just for systems but for teams so by breaking systems down into more modular components, it's going to be easier for distributed team members to simultaneously contribute to projects and it'll be easier for, you know, geospatial scientists to focus on the parts of a system or workflow that they're the experts on which is, you know, really something that hasn't happened a ton in the past and that's the opportunity. So this, you know, further enables the reuse of widgets and components that are built and the development of service-oriented architectures so that teams aren't reinventing the wheel each time. So you build, you know, one data processing widget in one place. 
You don't have to redo that but just reference that as you would in an object-oriented design, you know, these kinds of concepts are commonplace in software development but for geographers and using traditional architectures, that kind of practice is very challenging so it's really why we think it's so important to move to these kinds of designs. So this will enable practitioners to develop, test and deploy models and algorithms at higher frequencies and at bigger scales and, you know, that's critical for solving many of the global issues that we face today such as climate change. So that's our presentation. That's why we think it's so important for organizations to adopt serverless patterns and technologies in building geospatial applications. Thank you very much. Thank you, Rob, for the very insightful and interesting presentation and congratulations and best wishes for your work. So very quickly we'll move to the questions and we have a lot of them. So first of all, the first question is, separate from the technical side of the presentation, how does your team give back to the open-source projects used in these applications, so regularly offering professional development for employees to contribute to selected projects or are there any official sponsorship of any key projects? Yeah, great question. Well the first thing that comes to mind is that on some of these projects, because we're using open-source technologies and sometimes pushing them to their edges, we do contribute to the code base there. Our developers are actively doing that on a number of the tools. I didn't include a list of which ones those are, but I do know that that's happening. So that's one thing. And then yes, we certainly do, at least in our company, we kind of have a policy of making time for developers and staff to be able to refine their practice and improve their professional development. So often that's trainings or building prototypes or doing R&D, even writing a blog or writing a paper or going to a presentation, delivering a paper, that kind of thing is, I think, very important to furthering our knowledge as a population, as a profession and so forth. So that's a big part of our company's culture and it's partly why I'm sharing our work here with you today. Okay, yes, that's great. So the next question is, how does a serverless setup work with a connection or pool or socket DB connection? So this is with regards to pole-source database and a very follow-up question is from my side. Like when should we know to go with a typical serverless architecture or to follow a usual database connection architecture? Yeah, yeah, that's a tough one and there is no clear answer. I'm not an expert in this but I do know the dividing line moves around based on the project requirements. So in the example of the Louisiana application we showed, in that case, we didn't have to do a lot of server side processing. We had data coming in from software as a service. We could push data to the browser in ways that enabled the application to function on the front end without impacting the user. In those cases, it made sense to use a fully serverless design rather than there may well be other kinds of operations that require a longer server side process, in which case it doesn't make sense. I have to confess. I don't know whether there's a formula for doing that. There may be, maybe somebody can educate me about that. But in my experience, I see that being debated every time we build something. 
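One pattern that often comes up in this serverless-versus-database debate, and which partly answers the pooled-connection question above, is to open the PostgreSQL connection outside the Lambda handler so that warm invocations reuse it instead of reconnecting on every call, optionally with a managed pooler such as Amazon RDS Proxy in front of the database. The sketch below is illustrative only; the table name, event shape and environment variables are assumptions, not details from the talk.

```python
# Illustrative sketch of the warm-connection pattern: the connection object is
# created outside the handler so it survives across warm Lambda invocations.
# Host, credentials, table and event shape are placeholders, not from the talk.
import os

import psycopg2

_conn = None  # cached while the execution environment stays warm


def _get_connection():
    global _conn
    if _conn is None or _conn.closed:
        _conn = psycopg2.connect(
            host=os.environ["DB_HOST"],          # e.g. an RDS Proxy endpoint
            dbname=os.environ["DB_NAME"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            connect_timeout=5,
        )
        _conn.autocommit = True                  # read-only use in this sketch
    return _conn


def handler(event, context):
    """AWS Lambda entry point: count features inside a bounding box."""
    xmin, ymin, xmax, ymax = event["bbox"]        # hypothetical event shape
    with _get_connection().cursor() as cur:
        cur.execute(
            "SELECT count(*) FROM facilities "    # hypothetical PostGIS table
            "WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326)",
            (xmin, ymin, xmax, ymax),
        )
        (count,) = cur.fetchone()
    return {"threatened_facilities": count}
```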
So I think it's something you just have to get a debate going in your team and see which one's going to work. There's obviously a lot of factors too in terms of has the organization or customer already invested in technology that it makes sense to continue down that path or does it make sense to introduce some new technology and patterns too. So all that has to be taken into account when you're building something. Thank you for your answer. The next question is, how closely coupled are these serverless tools tied to specific cloud providers such as AWS, Google, SEO, et cetera? Yeah, that's a good question too. Again, I'm not an expert in these across the cloud providers, but my understanding is that they generally are offering quite a variety of serverless functions for different things. I think there are many in common databases or things like that, which if you don't have to manage that, why do it? Again, there are cases where it does make sense to intensively manage the database, but for many applications, that's something people would rather not have to do. Maybe just incorporate that piece, let the cloud manage it, and focus on building the front end and the spatial logic and so forth. But I think this is growing all the time. That's the exciting thing. I've worked mostly with the AWS tools, but I hear from others that Google and Azure have a lot of examples of these as well. There is a follow-up question very similar to this, but on a different perspective. It is regarding, are there any barriers to geospatial library incompatibilities with respect to Lambda or serverless architecture that we should be aware of by using them? That may be one I don't have the expertise to answer, but yes, I think in the example we showed of the imagery translation service, one of the tools we were looking at there, I think only runs on EC2s, for example. Then you have to choose, are we going to develop our own solution or something else? Can we put it in a Lambda? Is it so much better to use that tool and have the offsetting issue of having to manage the EC2? Yes, there are these concerns. I think each one is a concern. It brings up to me the point that hopefully developers of open source software projects are thinking about this for people like me who do want to try to run these in cloud environments. That's very important. Certainly for the future growth of open source, I think that's necessary. I encourage that. We'll take final two quick questions. First one is, how the final user can edit data for field selection by example? What kind of service? I think this is coming for the example which you mentioned. Let's see. Field data collection, is that the question? We are working on a component that can handle this. It's not integrated yet. I think there are a number of commercial solutions for that, primarily commercial solutions for handling data collection that our customer has invested in already. That's probably where it will start. But it would be very interesting to see if there are open source solutions for doing it. That's certainly an area for future development. I should have put that down as a future direction. I do think that's an area that's underdeveloped overall, the mobile collection solutions, open source solutions. One other project we're working on, we are developing a solution for offline compatibility. This means being able to use the application's functionality both in range and off range for field data collection. 
In that case, we're using some of the local data store capabilities of common browsers and operating systems. We're hoping that will help with offline and remote field kind of uses. That's awesome to know. Finally, there is a request to post your slide deck online if it's possible. Also, are there any good resources out there to getting started with serverless applications? Yeah, good idea. Certainly I will make the PowerPoint available. I think just becoming, actually I don't know of that. There probably are some sites that are talking about this. I didn't consult them in doing this. I really relied on just my project work, but certainly learning about the cloud and is mandatory I think in today's world. I think it's really less true for developers. You all are already there. This is more of a message for the geographers who are used to maybe older tools and methods for doing their work and trying to get them comfortable like, hey, you can do this stuff in the cloud too. In fact, it's easier and in fact, it's lower cost. It just means you have to learn about these tools. You can do it over here. I guess that's how I would answer that. Thank you so much, Rob. Thanks for the great presentation and answering all the questions. We wish you best of luck for the future and upcoming work. Thank you so much. I appreciate it. So with this, we wrap up our today's session of Peo to Madarin in the Wednesday afternoon session. We had five awesome presentations on Praveeta Geoportal. The second presentation was on MapMint, the service-oriented platform, followed by a presentation on PM Tiles, an open cloud optimized archive format for serverless map data, and the penultimate presentation on the cloud devolved open source. Finally, we looked into the presentation on building serverless geospatial applications for the enterprise. So as a session leader, I thank you all the participants, all the speakers, and the audience for joining in and for their questions. I wish you all a very good day ahead and a great post-4G ahead. So with this, I wrap up.
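One more aside before this talk's abstract, on the Prefect-based orchestration mentioned for the disaster-response platform: the talk shows no code, but a minimal flow in the style of the Prefect 2.x API might look like the sketch below. The task bodies, names and the output path are hypothetical placeholders; only the task-and-flow pattern itself is the point.

```python
# Hedged sketch of a Prefect-style model run; all task bodies and names are
# invented and stand in for whatever model a partner agency supplies.
from prefect import flow, task


@task(retries=2, retry_delay_seconds=60)
def fetch_inputs(event_id: str) -> dict:
    # e.g. pull storm-track and exposure data for the incident
    return {"event_id": event_id, "inputs": "..."}


@task
def run_risk_model(inputs: dict) -> dict:
    # wrap the supplied model, which could itself call out to another language
    return {"event_id": inputs["event_id"], "risk": "..."}


@task
def publish_results(results: dict) -> str:
    # write a consumable data service or file for downstream tools
    return f"s3://example-bucket/{results['event_id']}/risk.geojson"


@flow(name="disaster-risk-run")
def disaster_risk_run(event_id: str) -> str:
    inputs = fetch_inputs(event_id)
    results = run_risk_model(inputs)
    return publish_results(results)


if __name__ == "__main__":
    print(disaster_risk_run("hurricane-2021-09"))
```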
Historically, enterprise-class geospatial application architectures have generally relied on computationally intensive and ponderous server-side databases, web servers, and software platforms for data processing and retrieval. Traditional architectures for simple web-based GIS applications have required expensive multi-server configurations that demand ongoing maintenance. With the ascension of the public cloud, however, a plethora of native storage, compute, and content delivery services and design patterns is available to build robust and scalable applications at lower cost. This presentation will provide an overview of how to leverage common cloud services to develop serverless applications and a synopsis of several case studies where this approach facilitated delivery of more sustainable dynamic geospatial analytics and interoperability solutions. The presentation is aimed at educating geospatially-oriented technical and management audiences about modern cloud-native design patterns that facilitate the use and deployment of lower-cost, lightweight open source applications.
10.5446/57188 (DOI)
He's a developer and data analyst at mundialis and has specialized in radar remote sensing. He will be delivering the talk Cold War reconnaissance imagery reloaded: auto-rectifying the 1960s in high resolution. So I will play the recorded session for you. I will be talking about Cold War reconnaissance imagery reloaded, auto-rectifying the 1960s in high resolution. Specifically, this talk will be about the Corona satellite program run by the United States in the 1960s. This is the outline of my talk. First I would like to talk briefly about what mundialis does before going into detail about the Corona satellite program and its applications. Then I will show the imaging geometry and distortions that are inherent in this data and will showcase the image rectification workflow that we developed to cope with these distortions and to auto-rectify entire Corona scenes. A few brief words on mundialis. We are a remote sensing company based in Bonn, Germany, and we are focusing on the analysis of large amounts of Earth observation data in time series using cloud environments. Mostly we use free data such as Copernicus Sentinel data, and we are also committed to using free and open source software exclusively. With Markus Metz and Markus Neteler we also have two GRASS GIS core developers in our team, and in our projects we regularly contribute to the GRASS GIS development or to the development of GRASS GIS add-ons, as in this project. Now I am aware that you have heard enough about Corona in the past one and a half years, but let us give this name a different meaning, as it is also the name of a satellite program that was launched by the United States in the 1960s. The goal was to have a reconnaissance mission giving an idea of what is going on on the ground in high resolution, especially over the Soviet Union and its allies, since it was the Cold War. The entire program consisted of 144 satellites in eight Keyhole missions. Now this is of course a lot of satellites, but the mission duration of a single satellite was rather short, because the satellite operated with two panoramic cameras that used a physical film, and once the film was used up there was no use for the satellite mission any further. Also the retrieval of the physical film was rather spectacular, as you see in this image. Once the film was used up, the satellite would tilt towards Earth and eject a capsule with the film inside into the Earth's atmosphere, and then an aircraft would attempt to catch this capsule mid-flight. The capsule was also designed to survive in salt water for two days, such that it could also be retrieved by boat, but after two days it would dissolve and sink so that the data would not fall into the wrong hands. Now, today's value in this data lies in the spatial resolution, as you can see in this image, because the satellite flew rather low, at around a 160 kilometer orbit. The latest Corona missions yield effective spatial resolutions of 2.75 or 1.8 meters. In this image you can already see individual houses, cars, vehicles and streets, so it's a very valuable data source dating back 60 years. This gives you another idea of why this data is so valuable. This is the city of Tunis in Tunisia 60 years ago versus how it looks today. And as you can see, a lot of things have changed on the Earth's surface in the meantime, so with Corona we have a unique way of looking back in time that no other data source provides us. So luckily all this data was declassified in 1995 and is now available to download from the USGS Earth Explorer.
One scene costs $30, but scenes that have been ordered previously are usually available for free, simply because the only cost is associated with scanning the physical film. So any data that has been processed before can be downloaded for free. Now what can you do with this data? As I said before, there's a lot of valuable information in there, simply because it dates so far back in time, even further back than the oldest Landsat archives. So there's a lot of valuable information concerning, for example, long-term land cover mapping and monitoring. You could monitor vegetation or forest changes through time, or the urban sprawl over 60 years. In this image here you see a harbour village in Algeria, and compared to today you can also see how this city sprawled enormously in the past 60 years. This information, however, is hard to extract automatically, simply because in the Corona data we have no spectral information. And so all automatic approaches are limited to feature identification and extraction methods. However, in methods where we can focus on the qualitative interpretation of the data, Corona data benefits a lot. And one very classic example for this is archaeology, to give you an example. This is a Corona scene of southern Turkey from the 60s, and this scene was used to identify potential archaeological sites, simply because human historical structures such as dwellings or infrastructure altered the local topography, which could be visualized with high-resolution data like this one in order to identify potential sites that are interesting for archaeology. In the meantime, since the 60s, a lot of activity has been going on on the ground, a lot of construction and land cover change, so even using today's high-resolution data would not yield this information, because the land cover changed so much and buildings and cities were constructed in the meantime. Here's another zoom-in of an example: we see an individual farm, and around it you can clearly see that there seems to be some kind of human-made structure, which might have been a fortification. So again, a very valuable data source for archaeology. So why is Corona data not that widely used yet? This is because there are massive distortions in the imagery. On the left you see a diagram showing the imaging geometry of the satellites. There are two panoramic cameras, one tilted slightly to the front and one tilted slightly to the back. And since it is a panoramic camera, there's a lot of distortion, especially at the edges of one scene. So one scene is approximately between 200 and 250 kilometers wide and only 10 to 20 kilometers narrow. So if you're interested in maybe just a very small area of the Earth's surface that is located in the middle of such a scene, you might work well with classic linear auto-rectification methods that assume a central perspective. However, if you were to auto-rectify an entire scene, there is an urgent need to cope with these distortions due to the panoramic effects here. Further, there is no calibration data available of the kind you would use for today's systems, such as the position and the different angles of the air- or spacecraft at the time of the acquisition. This was the 1960s and such data could not be collected, so any transformation model needs to estimate these parameters. Here's an example of different auto-rectification methods applied to the same Corona scene. On top you see just a linear classic auto-rectification approach applied, and on the bottom you see an image where the panoramic distortion has been accounted for.
And in the top image you can clearly see that the offset, especially at the edges, is in the order of magnitude of tens of kilometers. So this is not an appropriate way to auto-rectify an entire scene. What is necessary is a transformation model that deals with this, and this is done basically by putting a cylindrical shape, representing the physical film surface, into the transformation model. Such a model was already introduced in the literature by Sohn et al. in 2004, but in this project we didn't find any implementation in free and open source software. Because of this, we implemented this model in the i.ortho.photo tool suite in GRASS GIS. The i.ortho.photo suite is the standard rectification workflow to ortho-rectify aerial or space-borne images, normally assuming just central perspective cameras, but now we extended it to also be able to cope with these panoramic distortions. It is part of GRASS GIS from version 7.9 onwards, so if you install GRASS GIS locally on your computer you already have all the software you need to auto-rectify the Corona scenes, and of course it's free and open source. Finally I would like to show you the entire workflow of rectifying an entire Corona scene. It begins with the scene recomposition, because if you order a scene from the USGS Earth Explorer, what you get is not one entire TIFF file but four individual TIFF files that are split but have some overlap. This is simply because the film cannot be scanned in one go but is scanned in four individual rounds, and on each scene pair there is a significant overlap in order to recompose the scene. So you could use automatic stitching methods or just go with a simulated auto-rectification approach by finding points that are the same in the respective image pairs. Next comes the part that is most time consuming, and this is the collection of ground control points. This means that in order to auto-rectify your scene, you need a set of points in your Corona imagery that you can assign a specific and precise location to. For this, obviously, one needs reference data. If there's infrastructure, for example, which is always the best choice, then you could use OpenStreetMap data as in this example.
You can use for example road crossings or bridges. In this case we used a crossing at the airport of Tunis of the runway and this crossing can also clearly be identified in the same or in the corresponding corona scene here. However this is not always possible and sometimes there is no infrastructure in your scene and then you would need to go for natural features such as for example river confluences as in this example or characteristic rock formations for example. But again you would need to really have a close look to find this specific point in both data sources here in the respective corona scene. We found that collecting 20 to 40 ground control points over an entire scene is enough to give a good auto rectification but it would be necessary to distribute these points as evenly as possible over the image in order to have a good result of the auto rectification in each part of the image. With this data now we can do the actual auto rectification and for this we need some additional inputs. So first we need the patch draw scene from step one. Then we need some basic scene metadata. When you order a scene from the USGS Earth Explorer you get some very basic coordinates of the scene which are not very precise but they give you a very rough idea of where you are on the Earth. And this is important for the parameter estimation of the transformation model in order to have some very basic initial guesses. Then you need a digital elevation model in order to cope with distortions due to topography and you need the set of ground control points. With all these inputs you can then estimate the parameters of the transformation model and by doing this each of the ground control points gets assigned a root mean square error that depicts how well this point fits to the overall model. With this you can identify points that are perhaps that have a high error and do not fit well to the model maybe because in the ground control point collection step some points were misplaced. So you can use this step to iteratively adapt your set of ground control points until you are satisfied with the overall root mean square error. And this what is achievable what we found is more or less 5 to 10 meters root mean square error in the models is realistic. Then you can run the actual auto rectification and here the final scene. This is then how it looks like if you do it for an entire strip of scenes this is the Nile Valley in Sudan which is also very interesting from an archaeological point of view. And using multiple scenes you can also use the overlap of individual scenes to identify ground control points that are visible in each scene pair in order to avoid double work. This brings me to the end of my talk. I thank you very much for your attention. I hope I made you a bit curious on the corona mission and the data and also made you curious about trying grass.js for its rectification. And now I'm looking forward to your questions. Okay. First apologies for the video cut just my bad. And also apologies for the issues listening to the audio because I didn't have maybe because I was using the earbuds but I hope you get a grasp of the talk. So okay we have one question. Is the output being used professionally? Was this a speculative project or did someone commission it? If you can talk about that. Yes, thank you very much for the question. Also thank you for hosting the presentation. 
Yes, so we had a customer that was interested in the corona scenes for archaeological purposes although I can't point out the exact one but it is being used professionally and also in the let's say in the archaeological world. This corona data set is known very well. So it's a far used data set. Yes, I have a couple of questions. Or well maybe the first one is the comment. Apart from archaeology have you identified other potential use cases like I can imagine for example coastal evolution or line coast changes for example? Yes, of course all kinds of land cover mapping basically is applicable for this and it's very interesting because this data goes back a long time. For example there was also a study not conducted by us but by others using this data that also looked at forest change in the Amazon basin. Now you can even look at data way back from the 60s which is a nice extension tool let's say the Landsat archive for example. So there's far more use cases than archaeology but as I also said in the talk it's a bit hard to automate this simply because we only have so we have the distortions and second we have only one spectral channel if you will which is not even a spectral channel but a physical film. You haven't mentioned exactly or shown any coverage map of the imagery. Is this imagery worldwide or because you've shown data from Africa mostly but actually you mentioned now South America and I understand also Russia and the Cold War areas of interest but there are also data from Europe, North America. Yes there's a lot of data from North America so in general it is worldwide and was available so the focus for historical reasons obviously was on Asia and the Soviet Union and also the Middle East which is also very interesting for archaeological purposes but there are data available for North America, Europe, literally anywhere in the world. At some places there might only be one specific scene at others there are hundreds it really differs a lot. Okay I have a couple questions from the audience, did you try AI, super resolution method? Super resolution to identify specific features so we know because we didn't analyze the data itself yet but this we only prepared it for the rectification but this would be a very interesting approach to identify features and objects in the data itself but again this is then the application world that we didn't touch in this project yet. Another question is have you looked at AMC's stereo pipelines method for working with KH and how it compares to your methods? No we have looked at, I'm not sure if that's the one, there is one from I think the University of Oklahoma I'm not sure if that's the one that this question refers to, there is a stereo pipeline and we also tried to use it but we weren't able to contact or we didn't get any feedback from the hosts of this platform because there is a service that allows you to auto rectify Corona scenes online also but we weren't able to get in contact with the host of the system unfortunately. There was a question that gets removed from the chat or no? Well I have one, are the results of your studies there rectified images available somewhere or it's just owned by the customer? Exactly and perhaps I'm not sure about this but the customer may publish them as well for example in a WMS or somewhere but in general what I would like to point out again is that also the data you can get from the USDS Earth Explorer so the raw data is available and there is also free data already there. 
Okay there are folks already sharing the coverage in the chat for anyone interested in. Okay yeah thank you Matt, is there any community effort to georectify this data so to outsource I guess the time consuming human dependent part of the process? I think there was, there is this Corona Cast project that is also the one with the online possibility to autorectify. But I don't know if this project is still ongoing so from our side we didn't put any effort in the community building here but in theory of course it would be very nice because this is a very rich data source and everybody is just waiting to have it autorectified to get started and the autorectification is really what hinders the analysis ready data. Any initiative that would like to use this workflow is very welcome of course and we'd be happy to help them. I can imagine a kind of a web service to help creating the control, the ground control points. Okay any, how good is the special accuracy? Yeah this really depends so in the best scenes we would get an offset of around 5 meters for the entire scene. There will be regions where there is absolutely no offset and you have a perfect match and there will be regions where you have 20, 30 meters offset. However there are some scenes that can't be really autorectified at all or where we still have hundreds of meters of offset in there simply because there are still so many factors like the absolute position and movement of the spacecraft that was simply not documented in that time. So there might be occasionally a scene where these factors are really hard to come by with also with our autorectification methods. Yeah I can imagine it has to be challenging to work with data from the 70s or the 60s. Any other questions from the audience? Okay then thank you very much Guido. Yeah very nice, a very interesting project. Now we have five minutes for Martín so see you soon. Thank you.
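A short aside before the talk abstract, on the ground-control-point refinement step described in the presentation: the per-point root mean square error screening can be illustrated with a small, generic sketch. This is pure NumPy with a plain affine fit standing in for the real panoramic model, and it is not the GRASS GIS implementation; all coordinates below are invented.

```python
# Generic illustration of GCP screening: fit a simple transformation, compute
# per-point residuals and the overall RMSE, and flag points whose error is far
# above the rest so they can be re-checked or removed.
import numpy as np

# image coordinates (pixels) and target map coordinates (metres), one row per GCP
img = np.array([[100, 200], [4500, 180], [2300, 900], [4470, 950], [90, 1020]], float)
ref = np.array([[601200, 4070100], [605600, 4070050], [603400, 4069300],
                [605580, 4069260], [601190, 4069200]], float)

# least-squares affine fit (a stand-in for the real panoramic camera model)
A = np.hstack([img, np.ones((len(img), 1))])        # rows of [x, y, 1]
coeffs, *_ = np.linalg.lstsq(A, ref, rcond=None)    # 3x2 coefficient matrix
pred = A @ coeffs

residuals = np.linalg.norm(pred - ref, axis=1)      # per-GCP error in metres
rmse = float(np.sqrt(np.mean(residuals ** 2)))
print(f"overall RMSE: {rmse:.1f} m")

# flag candidate outliers, e.g. anything worse than twice the overall RMSE
for i, r in enumerate(residuals):
    flag = "  <-- re-check this point" if r > 2 * rmse else ""
    print(f"GCP {i}: residual {r:.1f} m{flag}")
```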
CORONA is the code name for the first optical reconnaissance satellite mission of the United States (1960-1972). The goal of the mission was to produce high-resolution analog photos of most of the Earth’s surface, especially of political hot spots and military locations. Due to the regular recordings, large areas could be continuously monitored and evaluated for the Department of Defense. Until 1995, more than 800,000 photos remained secret and were then made publicly available by the US Geological Survey on the order of President Bill Clinton. The high-resolution CORONA photos (2 m to 60 cm pixel resolution) are available as scans for a fee from the USGS and represent a unique source of information for science, archaeology and other disciplines. Since the camera systems of the CORONA satellites have a special panoramic distortion, common linear methods cannot be used for the orthorectification of the scans. mundialis has developed an innovative free and open source technology to rectify these unreferenced scans of CORONA photos to current map references and published it in GRASS GIS 7.9. This photogrammetric solution models the CORONA camera mathematically and thus enables a precise referencing of the CORONA data. In many parts of the world, the CORONA scenes have preserved images of a landscape that predates the most intrusive infrastructural and land-use projects of modern times. Traditional architecture, agricultural patterns and settlement systems can be observed in great clarity on CORONA imagery. This makes CORONA a precious resource in fields such as archaeological and historical geography.
10.5446/57189 (DOI)
Okay, we are back. Hello, Astrid, again. Okay. We can start with your presentation. I just want to introduce you again. Astrid is an active member of OSGeo and a Charter member since 2010. She will talk about creating great applications for your needs with MapBender. So, you go. Okay, so hello everybody. It's great to be here at FOSS4G 2021 Buenos Aires. It's a pleasure for me to talk about MapBender and show you how you can create great applications. You can already see a picture with MapBender applications, where you can see that you can use this software on different devices. And I will give you an introduction and also do a live demo to show you how you can administrate MapBender. So, my name is Astrid Emde and I'm broadcasting here from Cologne. I have worked at WhereGroup in Bonn since 2002. And I work with OSGeo software every day at my day job. And I work with PostgreSQL, PostGIS, MapServer and all this great software. And I'm in the MapBender Project Steering Committee. And I have my focus on WebGIS and web mapping and do consultancy and trainings. WhereGroup is a company based in Germany and we have more than 40 employees. We are developers, consultants and geospatial experts. We are located in Bonn, Berlin, Freiburg and Hamburg. And we have successful open source solutions running for more than 15 years. And we are the company behind MapBender and also Metador, MOPS. And we are active in PostGIS and OSGeo. And if you are interested in working at WhereGroup, you're welcome, because we are hiring. But now we will have a look at MapBender and I will show you how easy it is to configure. So, it's a WebGIS client suite with an administration interface. And you can create your portal without writing a single line of code. So, it might be interesting for you if you are not a developer. You can create any number of applications with only one installation. You can create and maintain an OWS repository with your services. And you can distribute configured services among applications. You also have users and groups that you can administrate, and you can grant users and groups access to applications and services. You have support for several languages. And if your language is not present already, you could add it by editing some language files and adding your language to MapBender. We have a website where you find all the information. It's linked on the slide, which I will publish after the talk. And here on this next slide, you can see the MapBender project. Most of the people are from WhereGroup. So, this is the team that works on the software. It's an old software project and it already incubated in 2006 as one of the first OSGeo projects. We have an MIT license. We have code on GitHub. Our architecture: it's a PHP framework, we use Symfony. We have HTML, Bootstrap, OpenLayers as the map client and Java configuration. And what you require, if you want to run MapBender: you need a server where it is installed, you need a web server like Apache HTTP Server or nginx, you need PHP for MapBender, and you need a database for administration. This could be SQLite or PostgreSQL, for example. Then MapBender offers a big toolkit with functionality. So MapBender offers many features that you can combine. You have elements for visualization. You can create and edit data. You have search and print functionality and much more. And we have a feature list on our website where you can find the features and have a look.
We have a functions section where you can discover the functionality of MapBender, which we divide into different sections. And you can find out what MapBender offers. And you can get inspired by our gallery as well. We have a gallery where we show cases. And there you can see that MapBenders are also running in Argentina. So greetings to the Gobierno de Tucumán, which runs this MapBender application. And if we go back to the gallery, you will see that there are many more applications that you can discover. So let's have a look at some of these applications to give you a first idea about MapBender. We have a COVID dashboard from an area in Germany where you can see the active COVID cases. And you can get information about the specific regions, how the situation is there. You have a side pane, as we call it, where you can place HTML information or a legend. We have the functionality that you can integrate HTML as well, to provide diagrams like it is done here. And you can navigate, and you can keep it simple, like it is done maybe in this application. And a new feature that was implemented is this application switcher. You could provide one or more other applications that you can switch to. So from this view, I could switch to a different application. And from the COVID map, I could switch to this map where you find pharmacies or places where you can get tested. So you can see that applications can look different. And you can also add a search interface to your application. This works with Solr or Nominatim or Photon, or also with the new OGC API Features services. You could add legends to your applications, where the results come from the services that you integrate in the application. You have a background switcher where you can switch the information that you see. And we will see a different solution for how you can do it in a minute. And this background switcher makes it really easy to switch from one topic to a different one. Okay, so in the gallery we saw this example and we have many more. And another one that I would like to show you is RIO. It's not the Rio in South America, but a RIO here, close to Cologne. It's a portal from the Oberbergischer Kreis. And here you find a lot of applications. So I mentioned that you can provide a lot of applications for different needs. And in these applications you find different collections of services. And here you can see that instead of this background layer switcher, you have a big tree with a lot of information. Every folder represents a service, and here you could add more information to your map and then find out more about the region. And you can see more functionality here and get information, or you could measure lines or areas. You could switch the application as well, as we saw already. And in this application you find the search functionality not at the top, but here at the side, where you have functionality to search. You can also find a search for a specific address, or you could also create, with an element which offers you a SQL search, a search to find parcels or addresses or historic parcels. And this search element allows you to configure the search that you are interested in, which runs on a PostgreSQL table, and you can configure it as you like. So this should give you a short idea of how a MapBender application could look. And now, let's see. Yeah, we will have a closer look and create our own application.
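As a brief aside on the search interface mentioned above: MapBender's search elements sit on top of external backends such as Solr, Nominatim, Photon or OGC API Features endpoints. The sketch below shows, purely as an illustration, how such a backend can be queried directly, here the public Nominatim API with an arbitrary query string; in production you would respect the Nominatim usage policy or run your own instance.

```python
# Hedged sketch: query the public Nominatim geocoder, the kind of backend a
# MapBender search element can be configured against. Query string is arbitrary.
import requests

resp = requests.get(
    "https://nominatim.openstreetmap.org/search",
    params={"q": "Bonn, Germany", "format": "json", "limit": 3},
    headers={"User-Agent": "mapbender-search-demo/0.1"},  # required by the usage policy
    timeout=10,
)
resp.raise_for_status()

for hit in resp.json():
    print(hit["display_name"], "->", hit["lat"], hit["lon"])
```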
So map-bender, when you get started, map-bender offers you template applications and you can copy them and modify them for your needs. If you would like to have a look how these template applications look like, you can have a look at our demo. And I prepared an installation, a map-bender installation where we can try everything. So first, if you want to administrate map-bender, you have to log in. You can see it here at the top. And if you are not logged in, you might have no, you have no access to the administration back end and you have access to some applications, but maybe not to special applications. Then when you log in, you have access to more functionality. And here you can see you have applications, you have sources, you have security, and all these regions you can administrate. And as you can see here, you find applications that are already shipped with map-bender. These are the demo template applications. And then you can create your own applications. And your own applications, you can see you have more functionality, you can edit the applications, you could use the application, copy them, download them, and administrate the access to the application. So if you get started, you can start from scratch and copy an application. This is really easy. Then you change the name and then you can get started with your new application. The application looks the same as the one that you have copied. And then you can decide which elements you would like to keep or whether you would like to change the ordering and which services you would like to add to your application. And this is done with the backend. So here you can see the layout where you can decide which of the elements you would like to provide. So for example, I could change the ordering. I put the legend to the right. And now when I update the application, you see that the legend button is here at the right. And it is easy to manage. So via drag and drop, you can modify the position of the elements. You could delete elements if you don't need them. You could add more elements if you need more functionality. You could add links or HTML contact. And a new feature is that you could decide on which device you would like to show a link or an element. So maybe for mobile devices, it does not make sense to show all these functionality or for example, not the print functionality. And then you can decide whether you would like to provide it the one or the other way. Okay. So this is for the elements and every element you can change the position via drag and drop. And I show you, for example, for the map, you can decide which projection you would like to support, which start extent you would like to choose, which extent you want to cover with this application, and which other projections you would like to support. And every element has a different configuration. And yeah, it's kept simple, but still you have to learn about every element and what it offers. So here you can see that this application covers the area of bond. You see the extent that was defined in the map element. You see the projections and you could switch to a different projection. You can navigate. You have the scales that we saw in the administration, but you can navigate independently and choose every scale you would like. Whenever you change the map, navigate in the map, a map request is sent to the WMS service. And with OpenLayer 6, which is integrated now, we have new map features in the navigation. You can rotate the map, which is quite nice. 
Okay, so we saw that you can administrate the tools that you put into your application and also the services. So you could add more services. And this is done this part. So here you can see all the services that are already in the OWS repository of my Bender. And you could add a source easily. So you find here this form where you have to insert the address of the service. Let's see, I prepared an address and then you can upload the service and register it in MapBender. And then afterwards you can use it in all your applications. And you load it once and after that MapBender knows all the information about the service and knows how it is set up. And then you can ship it to the applications and afterwards you can see in which application this service is already integrated. So for my new application, Force4G, so there haven't been much services inside yet. So I can go here and add more services. So I think this one and you could decide how you would like to present this service, which format you would like to use, whether you like opacity, or you could decide that you would like to disable some of the layers that you don't want to provide. And you could do all sorts of things. And here you can see that I added more information to my project. And one new feature that we have is that we can provide applications in different ways. You can create shared instances which are bound to this application or you can create shared instances. Or we have private instances and shared instances. So if you create a shared instance and you will administrate many applications, it's really useful because you do one configuration and this is used in all the applications where you provide this service to. And just to show you, we have this security region where you can decide which user should get access to the application. We can create users and groups. And you with this users and groups, you can grant access to applications. And that's that's it. We worked on a new backend design and a new frontend design is going to come soon. We are working on this at the moment. We have new functionality, we can create this shared instance that I mentioned. We saw the application switcher. We have feature info highlights. So on feature info, the areas that you choose there are highlighted. We have this improved navigation and we have share and view manager. So when you have an application, you can easily provide views that you would like to save. This is here and you can make them public or just for you. And like this, you can easily jump from one region to another. And then we have the simple search element which supports a lot of services. And we have a digitize tool where you can digitize point lines, polygons, and you can create complex forms to edit attributes. We have a print map and a print where you can design your own print template and you can rotate the map. And the new feature is that we have a print queue. So the print output will be stored on the server and you have a history. So you can reprint it and you can also save it in a JSON configuration and rerun it on the command line. Then you have a feature print where you can print from the feature info or from the digitizer. You can make a print layout for a feature that you are interested in and get all the attributes as well to the print output. Then we have a dimensions handler which is running together with a WMS time. So you could choose this dimension handler to see different states of your data, do this functionality. And you can design your application. 
You could add CSS to your application. You can do it with this CSS editor, as you see in the administration backend, or you could save a file as well and design your own corporate design for your applications. And also you could write your own functionality. So MapBender is modular. It is extensible, and that makes it really useful and easy to provide your own functionality. If you want to try MapBender, you can go to OSGeoLive and work with MapBender on OSGeoLive. It's installed there. And I hope I could give you a short introduction to MapBender and hope you are interested in the project, and maybe we will see you at the next MapBender user meeting. Okay, that's it, and enjoy FOSS4G. Thank you Astrid, that was a great presentation, very clear. I really enjoyed it and I really want to try MapBender now, so that's great. We will see some questions. You have many questions. We can start with whether you control the symbology of layers, and do you support OGC WFS also? That's a good question. Maybe I was not clear. At the moment we only support WMS, and MapBender is not creating the service. It only brings them all together in an application. So to create the services you need MapServer or GeoServer or QGIS Server or different software. And MapBender is a client that visualizes the services. So everything that you see in the map is provided by the WMS and by the services that are integrated in this MapBender application. So the symbology is not done by MapBender but by the services. And the second question was whether we provide WFS. So this is not in the published stack already, but we had internal projects running with WFS. And maybe in the future this will be a project that we will work on. But I would expect that we will focus more on the new OGC API Features family to support this. I mentioned that this is already working with the one-field search that I showed you. And I think it would be very powerful to have feature support in the client as well, so that you really can work with the features. And we work with features already. I might show you when we look at the digitizer application. But in this case we talk to the database itself and not to a service. But yeah, maybe that's something that's coming in the future. Okay, great. Thank you. There is another question. According to the MapBender docs it seems as if MapBender requires PostgreSQL version 10. Is there a roadmap for when MapBender will run on later PostgreSQL versions? Okay, yeah. And I think the documentation there is a bit outdated. MapBender supports newer versions of PostgreSQL as well. So it should be fine to run it with PostgreSQL 12 or 13. So if you take the latest MapBender version you should be fine. Okay, great. Because we are now moving to a newer Symfony version, which integrates a Doctrine version that works with newer PostgreSQL versions. And so it should work now. Great. Another question is: does the geodata need to be in a particular format, or can you define what data to visualize in the admin section? Yeah, the data has to be in a special format. So at the moment, if you want to work with data you need services, and we support WMS and WMTS as services that you can load. And also this WMS with time support. But that's all at the moment. And maybe in other parts, like in the digitizer functionality that I showed you a minute ago, there we talk to a PostgreSQL database directly, to the tables, and also in the search functionality, where you could configure your parcel search or address search.
This is also talking to the PostgreSQL table directly. MapBender is quite good at working together with PostgreSQL. Okay, great. There is another question. Is there a way to create choropleth maps, joining a geosource like MBT with tabular data provided by NAP? No, not at the moment. Okay, thank you. We saw there is a way to connect to PostGIS, but are we also able to connect to views or queries? Maybe I did not get the last one. To what? Views or queries? Yes, sure. I didn't understand it at first. Yes, that's no problem. You could create views for the search or for your digitizer as well and then communicate with the views. For address search or parcel search, this is often done because you want to get the street key connected to the street name or design it in a special way. So it is quite common that you work with views. Okay, great. Thank you. There is also a question regarding whether it is possible to edit data with MapBender. Okay, maybe it was asked at the beginning of the talk, because I demonstrated it here in the digitizer. So you can add data, you can design the form, how you would like to save the data, which attributes you would like to fill. And here you can add date pickers or text fields, you can define mandatory fields or select boxes. And in the backend you can add triggers, for example, or you can add information in this form. And it's easy, it's really nice. And you also have different functionality. For example, you could create objects and move them, or you can move only specific points of your object and save it again. And if you modify different geometries, you can save all the modifications afterwards. And here you can see that for polygons you have polygons, rectangles, donuts, circles, ellipses that you can create. And the power that I see is that you have all this edit and digitize functionality, but you can also create forms as you like, with tabs and all these things. It's really flexible. But in the backend, I didn't show you how it works, it's a bit of YAML code that you have to write, but after some practice you will be familiar with how to do it. Okay, great, thank you. Okay, I think we are on time, so we will thank you again, Astrid. It was a very nice presentation and MapBender looks great. So thank you all for attending this session. I hope you continue to enjoy FOSS4G and we will see each other probably in other sessions. So thank you for that. Sorry Astrid, you can't say anything. Bye bye, see you.
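One aside before the talk abstract: the source registration step described in this talk starts from a WMS GetCapabilities URL, and it can be handy to inspect such a service before registering it in MapBender. The sketch below uses OWSLib, which is an independent Python library and not part of MapBender itself; the service URL is a placeholder.

```python
# Hedged sketch: inspect a WMS via its capabilities document before adding it
# as a source in MapBender. The URL is a placeholder, not one from the talk.
from owslib.wms import WebMapService

wms = WebMapService("https://example.org/ows?service=WMS", version="1.3.0")

print("Service title:", wms.identification.title)
print("GetMap formats:", wms.getOperationByName("GetMap").formatOptions)

for name, layer in wms.contents.items():
    print(f"- {name}: {layer.title}")
    if layer.boundingBoxWGS84:
        print("  extent (WGS84):", layer.boundingBoxWGS84)
    if layer.styles:
        print("  styles:", ", ".join(layer.styles))
```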
Mapbender is Web GIS Client that helps you to create applications for the web. This presentation will show what is possible and you will see how easy it is to work with Mapbender. Mapbender improved a lot. With the new version we have a refactored design and many new or improved features. You can integrated your WMS Services and confirgure them individually. You can manage access rights for applications.
10.5446/57190 (DOI)
Okay. Here you are. I just want to see if when I put it in, oh yeah, so I can't really, I can't see presentation mode. Yeah, the point is you have a delay of about 20 seconds between the StreamYard and the Vennueless. So now you could see it on Vennueless as well. Vennueless, what is that? That's the other system where the audience listens to your talk. The StreamYard is just you talk and StreamYard and we stream it live. But you're on stage, I can see you. So it's on the stream, everything is perfect. So we have four minutes to go and before I go, I will introduce you shortly to hopefully a lot of people being there. And then you better switch on your camera, not now, but then for, yeah. I can see you, perfect. It rained today on the money. Oh, well, I'm sure it'll be offset by lots of nice days. First rain I think since five months, but it was huge rain. Wow. Yeah, so is it, it's what, middle of the night? There? It's 22, 27. Right. You're sitting where? I'm in California. Ah, okay. So not Greece time. No, much different time zone. But you have a Greek email address, right? Yeah, no, I just moved back to the US from Greece. So yeah, so I lived there for two and a half years, but I'm still doing work with my Greeks. So, yeah. Great people. Yeah, definitely. So for me, it's, it's already beer time. Yeah, but it's like dinner time for them there, basically. Yeah, I just gonna ask if people can maybe show up in the chat. Give me a second. I have a Greek keyboard. That's great. So do you speak Greek? Lijo. Lijo. Yeah, I learned it. Yeah. The problem is I learned it for, for a year. I've been learning it for about two years now, but I just started to coast and it's really hard. I understand a lot, but speaking is really hard. Yeah. But I get really used to the language and being here now for four days, I guess, about that. And yeah. The point is we don't see if how many people's out there. Oh, Eric asked, yes, I'm smoking. Sorry. It's late here. It's, it's 11. So, yeah. Gonzalo says it's also being broadcast. I switched, just switched off my camera. Nobody sees what I'm doing here. I have a couple of friends here already and on Friday the whole company will come down. So we have a company retreat. Okay. Somebody told me to start. Yeah, that's right. It's time. It's 2029. I just wait for my clock. Okay. So welcome everybody. Thanks, Paula, for reminding me to start and to stop conservation. This is the last talk on this session tonight. It's really a pity because that's really been a night, a nice session up now. And at last speaker, I'm introducing you, Jennifer Bailey. And yeah, Jennifer will talk about cultural heritage, connecting factor between geo and engagement properties. And Jennifer is a research fellow at the National Oppository of Essence in Greece, but you're not here anymore. So you're in California back, I heard recently. But you do a lot of work in climate science and policy. And currently you're at Global Health, Ph.D. student at the University of California. So I guess you have kind of the time delay to me, but not to the audience. So yeah, I would say it's your stage now and I will disappear. And yeah, we are happy to hear you talk now. Great. Thank you so much. Yes, so as Till said, I'm kind of between the National Observatory of Athens and University of California, San Diego now. But at the Observatory of Athens, a lot of my work is through the Greek Group on Earth Observations Office. 
So I'll talk a little bit about that today and specifically looking at cultural heritage but also urban heritage. So Earth observation data is really becoming increasingly instrumental for environmental monitoring in general, but it also holds a lot of opportunities for the domain of cultural heritage as monuments and sites are endangered and threatened by both anthropogenic and natural threats. So earthquakes, fires, urbanizations, etc. And this space is really starting to be explored. You can see kind of EO's opportunities in this domain because there are a lot of projects coming up and applications aimed at providing products tailored to the needs of cultural heritage. So examples of useful EO based products include land use change maps, ground motion detection, risk assessment maps, archaeological sites monitoring and identification, so identifying varied sites, for example, monitoring of the destruction and looting of sites. And then also looking at different climate change indicators like monitoring of air pollution, monitoring of coastlines, so erosion. And EO really allows for kind of this systematic data and information gathering approach from large areas and also kind of can fill the gaps in areas which don't have monitoring systems or the ability to kind of address this and look at cultural heritage using EO data. It also addresses frequency issues and allows for timeliness of data acquisition, which is really important when you want to look at cultural heritage over time and maybe more granularly. So instead of looking at kind of this invisible sort of perspective zooming in and being able to see different aspects of the urban fabric, which might include cultural heritage. And also as a part of this, EO can really kind of inform policymaking, so start to feed aspects of cultural heritage into larger maybe environmental policymaking frames. And I think one of the most interesting aspects too is that EO can really kind of enhance public awareness. Use an opportunity to kind of pull on sort of the heartstrings of the public when you're looking and talking about cultural heritage. So helping kind of bring the need for preservation and conservation and effective management to the public's eye can kind of also influence, you know, the ability of us for us to look at cultural heritage more seriously. So since EO holds opportunities in this domain and cultural heritage is threatened sort of by these different climate change impacts, we together the Greek Geo Office with UNESCO's World Heritage Center and the Group on Earth Observation Secretariat recently launched this community activity, which we call Urban Heritage Climate Observatory or UCO for short. So it was launched in April of this year with a public facing event, which is available online and highlight documents are also available online. And also a private meeting of kind of this consortium, which is the community activity. And really the goal is to kind of integrate urban heritage and climate change impacts on urban heritage using Earth observation into Geo's work program. So I'm not sure how many of you are familiar with the group on Earth observations. 
It really is this unique sort of global network connecting all these different types of institutions, government, sort of research data providers, businesses, you know, private sector scientists, etc., all these experts to kind of address Geo's vision and goals, which centers around sort of engagement priorities, looking at urbanization, climate change, disaster risk reduction, sustainable development. So Geo really kind of coordinates their community to respond to use Earth observation to kind of supplement data, help countries respond to, you know, policy and reporting mechanisms and the Geo work program is really the primary instrument to facilitate the collaboration among members, participating organizations, associates and partners to kind of, you know, move forward their vision and goals. And so this really ranges from communities of practice, early stage project projects and pilots and also includes well-established services. But there's also an entry point for new activities, so the Geo community activities, which UCO falls under, which kind of, which may go on to become initiatives or even flagships, but it offers an opportunity to kind of begin to collaborate and contribute with minimal requirements and minimal structure as well. So kind of beginning the conversation and bringing together a community to kind of address, you know, different aspects of, you know, Geo's goals through this contributing to the work program. So the Urban Heritage Climate Observatory is a community activity within Geo's 2020 to 2022 work program. And so in sort of conceptualizing UCO, working together with different partners, we kind of realized that this can be sort of a vehicle bringing together the different engagement priorities of Geo as it kind of operates on the common ground of urban resilience, climate action, sustainable development and disaster risk reduction. So it really kind of cuts across all four Geo's engagement priorities. So there's three established priorities based off of sustainable development agenda, the Paris Agreement and the Sindai framework for disaster risk reduction. And then in November, a fourth engagement priority around the new urban agenda focusing on urban resilience, resilient cities and human settlements will be formally adopted in November at Geo's plenary. But yeah, so kind of the whole effort of UCO is to use Earth observation data to address climate change impacts on urban heritage, looking specifically at world heritage cities. And by kind of integrating these aspects into Geo's work program, we hope that we can kind of bring together these four different sort of engagement priorities as UCO brings together these different domains and tries to connect the dots. So some connections need to be made in terms of the urgency of individual impacts of climate change with the importance and especially the vulnerability of urban heritage. And also the untapped but increasingly acknowledged potential of Earth observation in relation to both of these concerns. So UCO aims to really be a forum to bring together experts and stakeholders in these domains and kind of determine what the needs are, what the aims are, avoid overlapping in the different domains and then kind of exploit these synergies to produce tangible outcomes, all with the aim of protecting urban heritage from climate change impacts via Earth observation. 
And so just really quickly to kind of describe the structure, the established partnership between UNESCO World Heritage Center, the Greek Geo office and Geo have kind of formulated a steering committee which is eight entities basically just to kind of serve as a core group and design strategic priorities for UCO, overlook the implementation of these decided activities and it's co-chaired by UNESCO's World Heritage Center and the Greek Geo office. And mainly we'll operate through working groups. So the structure is very loose but that's kind of how we plan to move forward this, you know, aim of producing some tangible outcomes. And yeah, we expect to meet once a year. We launched in April so it's still, you know, in the very beginning stages but really trying to bring together the community, the Geo community as well as externals to get people interested and involved in this area. So the response from the Geo community was actually really quite amazing. So we had, you know, very positive responses. We have 75 organizations from 24 participating countries and it's really just a wide array of expertise. We have climate change experts, you know, cultural heritage experts, EO experts really trying to like build the consortium to help us sort of deliver on this community activity. Jennifer, sorry to come in but do you change your slides? Because obviously we just see the first slide in the moment. Oh no, yes I do. Can you see them? Oh no, I've been changing them. Sorry about that. No, maybe it's not your fault but I don't know. Yeah, but now I see your new slide on the screen. It takes a while. Sorry to come in. I didn't want to disturb you. No worries. Yeah, so I'll just, I mean I'm sure these slides will maybe be available after but there's some links you can look at, Uko. Are you seeing what I'm changing now? It's now working. Now we see the objective slide. Right, perfect. Thank you. Sorry. No, no worries. Yeah, so we're on the objective slide. So just briefly, these are like laid out. We have an engagement plan which lays things out but one of our main objectives is kind of exploiting EO and open earth observation data where possible to really enrich and modernize processes for the preservation monitoring and management of urban heritage and really want to use a bottom up approach to kind of inform global decisions in this area but also kind of develop a global service that can then assist local practitioners. So kind of moving from bottom up and top down as well to kind of tackle climate change impacts on urban heritage and like I mentioned before really kind of leaning on this communication advocacy piece around climate action to try and move forward sort of protection practices on cultural heritage. Are you able to see this new slide now? I hope so. But I've laid out kind of the planned activities here. I still see also on the stream yard, I still see the objective slide. Okay, maybe I just do it this way. Does that work? Yeah, probably it's better because it takes a while because we have a time delay between the stream yard and the other one but I'm sure it works now. Sorry to come in again. No worries. Okay, so as long as you can see this decently. But yeah, just briefly to go over kind of the planned activities just to explain kind of what we hope to do in the next few years. Really building off this partnership between Geo and UNESCO. We've developed an implementation plan. 
We've launched the community activity and we're currently looking at these items in red so we wanted to find these working groups and actions. We want to collect needs. So what are the urban heritage needs around the world and within our consortium in relation to climate change? What EO data exists? So really leaning on our EO data experts in the consortium there. And then kind of what are the global good practices? And as part of this, we'll launch a survey to gather this information but also to help us in identifying pilot sites that could be good places to kind of test out this idea of building a sort of global service using EO for climate change impacts on urban heritage. And then moving into the future, we really want to kind of build off of the existing indicators that exist but build a specific climate change risk indicator set for urban heritage, initiate and implement these pilot cases and sites and then help it feed this global platform. So here I've kind of only shown two of these aspects that I mentioned. So we really hope to bridge geographical scales. So moving from the local up to this global platform and then using the global platform to help inform other local experiences. So identifying pilot sites, prioritizing different ones, making sure that we have adequate geographical representation because a lot of World Heritage Cities are actually in Europe. So working to kind of make sure that we have a geographical spread and also a spread looking at lots of different climate change impacts. So air pollution, flooding, desertification, all of the different sort of climate change impacts that threaten the wide variety of World Heritage Cities and piloting, refining and then feeding this global platform. And also in terms of, and those are kind of like what you see as like the tangible, you know, what we hope to be the tangible outputs of this community activity and also kind of building on UNESCO's already existing indicator set, the Culture 2030 indicator set, using what we find out on climate risks and using what we're able to find out on terms of impacts on cultural heritage to kind of help inform this set of indicators. So also including the Sustainable Development Goal indicators as well. So yeah, so as I mentioned, we have this strategic implementation plan and as such we kind of briefly talk about sort of a data policy as it's very new and we're kind of sort of delineating activities as we speak. But we've really tried, we want to, you know, make free and open access data priority using it where possible, sort of engaging with different aspects of the geo community to see, you know, can we engage with private sector, can what sort of open data is out there, what can we map to the needs of the cultural heritage community. But all keeping in mind that there really is sort of a lot of limitations and sensitivity around data for cultural heritage sites and monuments. So UNESCO World Heritage Center is really obviously, you know, aware of this and will ensure that the outputs kind of reach the necessary communities and that we're able to, you know, learn from them in terms of what their needs are, but we hope to sort of disseminate through open platforms like Geo's Knowledge Hub and all, you know, sort of keeping in mind and working with local communities as we implement our pilots and build this sort of global platform, keeping in mind what sensitivities might exist. 
So because of issues with theft and trafficking of cultural objects and property, dangers related to armed conflict and war, terrorism, etc., there are a lot of threats that, and this obviously doesn't include any of the climate change threats, but these sorts of reasons, etc., kind of dictate what sort of data can be open. So there really are some things to consider here. And as we move forward with specific site and pilot implementation, we'll learn more on sort of like what degree this data can be open. But some, that being said, you know, some efforts exist that obviously provide open data for cultural heritage. So Copernicus did this like wonderful mapping exercise where they looked at Copernicus services in relation to cultural heritage needs and really mapped, you know, okay, how can services and data, existing data from this program really help either indirectly or directly address needs of the cultural heritage community. And then also this Open Heritage 3D project has tried to remove the barriers to accessing and working with cultural heritage data. And so they've launched for a couple of cities open data that's accessible. And then further on this Copernicus point, ECMWF is a member of the steering committee and consortium of UCO. And so really highlighting that the Copernicus sort of data tools, products, etc., can help and measuring specific climate indicators relevant to cultural heritage and in our case, World Heritage Cities. So there is open data out there. There is, you know, services that can help us in terms of sort of identifying and measuring the risks in World Heritage Cities. And so as we move forward, we really want to see where the first one of the first steps is kind of this mapping of needs, see where open data can kind of address the needs that exist out there and help us move forward or thematic priorities, help in terms of standardization. So you know, it really is a global issue. How can things be standardized from site to site or city to city and kind of keeping that frequency aspect, so keeping everything up to date in terms of mapping and monitoring of different sites. And then also addressing kind of this affordability aspect, those barriers in terms of technical capacity and offering sort of a global view, which can support global policymaking. And then also contributing to real-time sort of emergency management and alerts. So using open EO data like Copernicus services to help in times of need or, you know, maybe there's the option to have real-time information informing like a wildfire encroaching on a World Heritage site like in Olympia last year. And then there are opportunities for high-resolution data for specific sites and we hope to kind of explore this more as our consortium starts to meet more frequently with the working groups and kind of outline these pilot cases and start to gather information on that. So that is it for me. I don't know if there are any questions, but it's kind of hard to see. Feel free to email me at any point as well. Hello, Jennifer. Sorry again for the technical problems. I don't know where they came up. Yeah, I think there's really interesting topic you're working on and we talked a little bit in advance about you've been in Greece and I think that especially here down here in Greece cultural heritage is really an important issue here in Greece. I had the experience a year before and there are a lot of players involved even if you just want to build a house or something like that. 
So I think it's a really good idea to put that on maps and to get the people involved who are involved in this topic. My question is how is the acceptance of the administrations you're working together with? That would be quite interesting for me, probably for listeners as well. Like acceptance in terms of willingness to work together on this project? Yeah, exactly. Yeah, so UNESCO World Heritage Center has basically been there since the beginning. So they really manage the list of World Heritage Cities and they're really excited about this because the brunt of all this monitoring and management of data and things has been on them with sort of they've acknowledged the need to kind of open up to Earth observation data but there's some technical issues there. So it's really open. We've worked together well. I mean, we'll see when things sort of start to develop and become more tangible. When we are working in local communities for the pilot cases, I don't know what the administrations might be like at that level. But no, I mean, this is very exciting and it's very interdisciplinary so you have multiple domains interested in it and wanting to be involved in seeing how they can kind of contribute. And that's really great to hear. Probably it's, I don't know, it could be another issue in other countries but I know that Greeks sometimes tend to be a little bit complicated. We have the same in Germany as well and just putting on new efforts on an old known problem. This is really really great. Thanks for your talk. Looking at the question side and I would really to encourage the listeners to put another question. We have about four or five minutes left. Put your question on the board or put it on the chat. That would be really great. We will wait for another one or two minutes. It's quite new idea to deal with these problems so it seems that nobody else has problems, questions on that. But anyway, yeah, really thank you, Jennifer, for the talk. It was especially for me, it was really really interesting because we talked about that before. We built a house here down in Greece last year and I know we have to talk to a lot of people and a lot of different positions so that was really good overview and I think that really could help not only in Greece but yeah. Thank you very much for your talk. Have a good day in California and again sorry for the technical issues. I don't know whether it really went wrong but I won't exclude it. It's really on my side. But get your kudos on the chat and have a nice day and yeah. I hope you enjoyed the round here and yeah. If you're interested in more talks on phospho G and stuff like that, I think there are a lot of opportunities for everybody to join together although we are all sitting on different places. For me now it's time to have the evening beer and yeah. Probably see you tomorrow in another session and I go and close this one now. Thank you very much. Bye bye. And have a good night or have a good day or wherever you are. Enjoy your day. Thank you.
The Urban Heritage Climate Observatory (UHCO) is a new Community Activity within GEO, working to reveal the fast-paced growth of EO technology and information to help address climate change risks and impacts on World Heritage Cities. UHCO operates upon the common ground of climate, heritage and urban related Sustainable Development Goals (SDGs), also cutting across the other GEO priority engagement areas. This is apparent through advancing climate adaptation, enhancing preparedness to disasters, and having climate-aware World Heritage Cities serving as strong advocates for carbon neutral and resilient cities. There is a broad range of free and open access EO data to be contributed to this activity, including in-situ and other types of datasets. However, issues surrounding confidentiality and ownership of certain cultural heritage information exist, and there are great challenges to be discussed with respect to data sensitivity in the frame of Open EO.
10.5446/57191 (DOI)
Thank you. Thank you. Hello everybody. I'm really happy to be here. I'm part of the organizing committee, so it is really nice to be here as a presenter as well. Thanks, Ken, for being session leader, and thanks to all for being here. I'm here to say a little bit about a project I had the opportunity to be part of. It was a data journalism project, and I've been using FOSS4G tools to deal with the objectives of this project. And it's interesting to be in this session because we're going to have a few different points of view. We saw from Thorstein and Ian and Andrea the point of view of the developer. With IGNO, we saw the importance of building a community to get more people interacting and using free and open source software for GIS. And now I will show a use case about the pandemic situation and COVID-19 as well, but more on this project. So before I start, just a little bit about me. I'm Felipe, Felipe Sodré Barros. I am a Brazilian geographer. Nowadays I live in Argentina, and I've been working with GIS and a lot of spatial analysis, which nowadays is called spatial data science or geographical data science. And I'm here to present this project in which we used free and open source software to face negationism and to face this pandemic situation of COVID-19. I'm pretty sure that you know this map already, right? This is the John Snow cholera map, related to the cholera outbreak in the Soho neighborhood in London. And this is like a mandatory map or presentation in any GIS introduction course. But most of them, I'm afraid to say, at least in Brazil, stay only on this map, showing that John Snow could relate the cholera deaths, through their spatial distribution, with the water pump, and then help to overcome the situation. But the fact is it goes much beyond that. In that situation, that case, in that specific part of history, there was a paradigm called the miasma paradigm, which is the belief that all disease was spread through the air. And this is really related to the historical situation, the time they were living in, the tools they had, the industrial development. And John Snow not only built this map, but together with other collaborators kept working on it to organize the argument to show to politicians or to managers that this is not a disease spread through the air; this is a new case, this is something new, that there is a disease that can be spread through the water. So it's much bigger than just the map itself. I usually suggest reading this book. It's the same book, but in Spanish, Portuguese and in English. And in my opinion it's a good account of the situation and of the real importance of John Snow. And by this time you must be wondering: what negationism, what are you talking about? And it's a shame for myself to say that in 2018, if I'm not wrong, the majority of the Brazilian population decided to have as president Jair Bolsonaro. And I bring this here to show that, according to the news, different kinds of news, there is a situation caused by the lack of action of Jair Bolsonaro or, worse, the way he is acting with this COVID-19 situation is making things worse. And this has also been seen as an attack on human rights, because he is still negating that there is a real problem. He went to the UN forum a few weeks ago. He didn't take the vaccine. 
And this is a really dangerous situation. But this is not only a human health situation or a public health situation, but this is an environmental problem as well. Because they are using the fact that everybody are concerned with COVID-19 situation to flexibilize the environmental laws and to, to, I have no words to say, but it's like they are taking advantage of this calamity to flexibilize environmental law. And not only in the environmental subject, but also in the news, he's still posing a threat to press of freedom and also in the democracy. I've read this book, How Democracy Dies. It's an interesting book that tried to relate the Trump elections with, and the way he managed the political situation with this, how important it is in democracy to have subjects like the freedom of press. The freedom of press not being attacked. So in other words, free of, free of press is something really, really important. It's a key thing on democracy beyond a lot of other subjects. So I was invited in the beginning of this year to work together with InfoAmazonia. It is a small and independent press who works basically with Amazon subjects, environmental, social, indigenous problems. And they came up with this discussion, like, let's start with a work, with a project, trying to relate the situation, the COVID-19 situation with the forest fires we have in Amazon. I will talk better about the project, but the name of the project is Inhaling Smoke, Beyond the Climate Change. It was supported by John Knight and Big Local News from Standard University, which made possible to develop this data journalism project. So about the objectives, we had to identify the best data sets about particulate matter smaller than 2.5 micrometers. In case you are not used with, the combustion smoke usually has this particulate matter smaller than 2.5, also the particulate matter smaller than 10 micrometers. And this one specifically is a really dangerous situation because it attacks the respiratory systems without COVID-19 situation. It is already a really well-known particulate that attacks the respiratory system. And so the idea was to identify the best data set that we could identify the concentration, especially, and validate this data set with the temporal and spatial values with air pollution sensors. Then thanks to researchers from Acre University, they are already working with those kinds of analyses. They have installed air pollution sensors in ground. So we had, like we used those pollution sensors as ground truth to validate with the other data sets we could use. Then validate and estimate or model the correlation, not only the correlation, but of the particulate matter with forest fire occurrence and to identify, and then this is the core object of this work, to identify if there is a relation between the exposure to high PM particulate matter values with hospital admission and length of stay by respiratory syndromes caused by COVID-19. So just as a small disclaimer, we are not relating the particulate matter with the spread of the COVID-19. No, we are relating to if there is any increase on the hospital admission related to the exposure on the particulate matter. So a little bit about the particulate matter I'm saying. This is a hair, and so you can have a dimension of the size. This is a fine bit sand. And we have particulate matter of smaller than 10 microns in diameter, and we have the particulate matter smaller than 2.5 microns in diameter. 
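To make the validation objective concrete, here is a minimal sketch of how daily model values could be compared against one ground sensor series. It is not the project's actual pipeline; the CSV file names and the time/pm25 column names are invented for illustration, and both series are assumed to be in micrograms per cubic metre.

```python
# Hypothetical sketch: compare daily PM2.5 from a CAMS-style forecast with
# readings from one ground sensor. File and column names are placeholders.
import pandas as pd

sensor = pd.read_csv("sensor_rio_branco.csv", parse_dates=["time"])   # ground truth
model = pd.read_csv("cams_pm25_at_sensor.csv", parse_dates=["time"])  # model sampled at the sensor

# Aggregate both series to daily means so they are directly comparable
daily_sensor = sensor.set_index("time")["pm25"].resample("D").mean()
daily_model = model.set_index("time")["pm25"].resample("D").mean()

# Keep only the days present in both series
both = pd.concat({"sensor": daily_sensor, "model": daily_model}, axis=1).dropna()

# Simple agreement metrics: Pearson correlation and mean bias
print("Pearson r:", both["sensor"].corr(both["model"]))
print("Mean bias (model - sensor):", (both["model"] - both["sensor"]).mean())
```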
And this one is the one we were working with, because it's well known as a problem for the respiratory system. So about the spatial data sets, we came up with the Copernicus Atmosphere Monitoring Service. They have a data set specifically forecasting particulate matter, and this forecast is a huge model with a lot of inputs from meteorological stations. And we came up with this one as the best one for us because of the spatial resolution and the time resolution as well. And with this data set, they run daily models for every three hours, predicting up to five days ahead. But we decided to use it for 2020. Sorry, I forgot to say that this analysis was run only for the year 2020. And we got the models run on the day, for the day, for every three hours of the same day, to get the most restricted forecast. So in total we could access 2,920 images, which I processed using R and Python. Actually, most of the processing was done in R using the stars package, an amazing package that allows me to process everything really, really fast. And then for the public health data about hospital admissions, we used DATASUS, which is the public health system in Brazil. Despite all the problems we had with the president trying to hide that information, we could use it, or in the cases where we needed, we accessed the state-level data on health information. So about the validation, we are seeing here the values from CAMS in red and yellow, and the sensors from the Acre University. They don't have a lot of sensors, but the ones they have we used to validate. And we could see that during the fire season in the Amazon — it is a well-known season; it used to be a kind of natural fire season, but for a long time now it has not been natural, it is also a human activity, and not just any kind: those fires are associated with deforestation, with mining and other activities. So just to show you that we could see, temporally and spatially, that the trend in the sensors, in the ground truth, is really reflected in the predictions of the model we are using, the data set we are using. And then we are showing here that we have several municipalities, the line is in white, and then we have a limit suggested by the World Health Organization — I didn't remember the name exactly, but it is the World Health Organization — they suggest that the daily concentration of exposure shouldn't be higher than 25 micrograms per cubic meter. Okay, so this is the line here, the line showing the limit, and of course a few municipalities, before the fire season, were already presenting daily exposure higher than suggested by the World Health Organization. And then during the fire season, not all municipalities, but a lot of them got into a really worse situation. So attention: I'm talking about daily exposure, so whole-day exposure to values higher than 25 micrograms per cubic meter. So the key finding we could achieve is that for each day of exposure above this limit suggested by the World Health Organization, the probability of a person being hospitalized increases by 2%. So it's like developing worse scenarios of COVID, of respiratory problems actually. Supposing that a person already with COVID will probably develop a problem in the respiratory system, then for each day exposed to this high level, the probability of this person being hospitalized with a worse respiratory situation increases by 2%. 
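A small illustrative sketch of the threshold analysis described here: counting how many days each municipality spent above the WHO 2005 daily guideline of 25 micrograms per cubic metre. This is not the project's R/stars code; the input table and its column names are assumptions.

```python
# Illustrative only: given daily mean PM2.5 per municipality, count the days
# above the WHO 2005 guideline for 24-hour exposure. Column names are assumed.
import pandas as pd

WHO_DAILY_LIMIT = 25.0  # micrograms per cubic metre, 24-hour mean

daily = pd.read_csv("daily_pm25_by_municipality.csv", parse_dates=["date"])

exceedance_days = (
    daily.assign(above=daily["pm25"] > WHO_DAILY_LIMIT)
         .groupby("municipality")["above"]
         .sum()
         .sort_values(ascending=False)
)
print(exceedance_days.head(10))  # municipalities with the most days over the limit
```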
Also, the smoke from the fires was related to an increase of 18% in severe cases of COVID and 24% in admissions for respiratory syndromes. I mean, we could split the persons with COVID-19 from people that had a respiratory problem that wasn't diagnosed together with COVID-19. So it's like a quarter — 24% — more admissions for five states in the Amazon. And here I'm showing, I won't go through this whole infographic, but the states are Mato Grosso, Rondônia — which was the worst one, the worst place to be — Acre, Amazonas and Pará. Those are the five states where we could associate the smoke with an increase of 18% in severe cases of COVID and 19% in cases of respiratory syndromes. Yeah, but this is talking at a large scale; if we go to the municipality level, we can see, for instance, that Pauini in Amazonas state had all the 30 days of August with values higher than 25 micrograms per cubic meter, all the days above that level. And this is represented as an increase of 18% in the hospital admissions for COVID and 115% for respiratory syndromes. Others had pretty much the same situation as well. And then I come to the part about the reports. This is a work of data journalism, so it's not only about this model or the scientific case, but also about understanding what's going on with the people. So I bring here the case of Tania Silva. She is in the report we have published. She was pregnant. She got COVID in the first season and she got really bad. She was pregnant and had to have her baby early so both could stay alive. She is quite fine nowadays, but she is still living with the sequelae of the case she had. And perhaps you are now asking, okay, Felipe, but why are you talking about Tania and not about another person? And, yeah, this is the point. The point here is to use all this scientific approach to understand the worst places, the places that were most affected, to go there, to talk to people and to understand the situation of these people that are most exposed because of different situations — pretty much what John Snow did on the cholera outbreak. Tania lives in a quilombo. I should mention here that in Spanish, quilombo is used for a messy situation; in Portuguese, it's a way to name a community of people descended from, or related with, the slavery process we had in Brazil. Okay, so she is living in a small community far from the hospital. And then the idea is really to show that there are people suffering a lot, left apart from the health system more than should be the case. Also, Raimundo is an extractivist. He lives and works with the forest; all the things he gets, he extracts from the forest. He fought together with Chico Mendes, who is a well-known person in Brazil that fought the deforestation process, showing people that they can live with the forest, getting their income from the forest. He fought, and now he is in this situation: he contracted COVID, got into a really bad situation; he is alive, but he lives in an extractivist community, which is the kind identified as the most attacked by deforestation and by the people trying to put the land to other uses. Also, the last one, Beptok Kirin — I don't know how to say it — he is a leader, an indigenous leader, what we call a cacique in Portuguese. He passed away, unfortunately, and he was in an indigenous land, which is unfortunately the kind of land most attacked by people trying to mine on their land. 
And to using fire for deforestation, so it's not good news, I'm bringing here, but it's like trying to show people how, what is the idea on the data journalism. This is the team I worked with, Juliana Mori is the leader of the project, she works for InfoAmazonia, worked together with, on this scientific part and then all the reporters, and we will leave a few related links I will share, or you can reach me on VanuLays, I can share my presentation. The idea is you have every report I showed here, you can read and also the reports we have done. So pretty much, is that, I'm glad to, I think I'm on time, and just to say that, thanks, special thanks to all developers, to all the people organizing community, we are using, we are trying our best to use the software you are developing to, to get a better lives, not only for ourselves, but to the others, and then to face the situation of negationism and this bad situation of pandemic. Thanks, and I'm available for any question. Thank you, thank you Philippe. We, I'm just at the end of the session, but since there is a break after which we can stay a couple of minutes for having the questions, I'm going to read through the questions if you, if you feel like right now. Okay, so what action did this work call for, and any positive responses from the responsible organizations so far? So far, unfortunately, and we wasn't expecting any, any activity from the government because they are negating, they are, they are going against the situation. So the idea is like share those histories to show that a lot of people suffering. I'm not in the info, but I'm sure we associated with others. Big press to spread this situation. So I'm not sure about how, how is the activity if there is something changing, but in Brazil, I don't think so, but the idea is to articulate and I know that there is, there are a lot of investigations going on internationally includes includes to, to, to see the situation we are facing. Okay, thank you. And the next question is from the audience. Based on the results of this work, what do you expect in a future scenario of intensifying the effects of climate change? So what are you expecting in the future? That's a tough question. I, unfortunately, with all the results we got, I, I'm not able to expect something better than we are facing. And we run this analysis. As you can see, I didn't put much effort on showing here about the model we use it, but would be interesting to go further on trying to understand how could be, how would be the situation on climate change, but I don't think we'll be better than we are seeing nowadays. Okay, okay, thank you. Thank you very much. There are no questions, but lots of comments that congratulating and thanking you for the, for your great, great presentation now. So I'm going to thank you as well for this great presentation on this topic. And also thank you for your efforts within the organizing committee. So we are hoping that we will have a great event throughout the week. Yes, thank you everybody. I'm available to talk about and have a good conference. Okay, thank you. Thank you very much. And we are going to have a break at the conference right now. So I think it is in one hour or two hours. Just let me check the schedule. Sorry about that. Yeah, the, the, the sessions will continue in one hour. There will be a talk and then the sessions will continue. Thanks a lot. And thanks for joining. Thanks for watching this session. I hope to meet you after talk with you as well during the.
How can a small and independent media outlet help in the fight against negationism and the pandemic? In this talk I intend to share an interesting use case from a small and independent media outlet on a data journalism project using FOSS4G to infer whether or not forest fire occurrence is aggravating respiratory syndromes related to COVID-19 in the Brazilian Amazon biome, using the Copernicus Atmosphere Monitoring Service, air pollution sensors and public health data. This is a talk about facing a negationist government and the pandemic, with thoughts about this process and technical explanations of how we approached this project.
10.5446/57192 (DOI)
He is the top contributor of GeoServer and also the top contributor of GeoTools. He also won the Sol Katz award in 2017 in Boston. I think most people in OSGeo already know Andrea very well. So I give the floor to you, Andrea, to tell us about GeoServer and OGC APIs. Thank you. Can you hear me? Yes, we can hear you perfectly. See my screen? Yes, we can see yours. Awesome. So yeah, my name is Andrea. Tonight I'm going to talk to you about the OGC APIs implementation in GeoServer. So first of all, oops, what's going on here? There. So first of all, just a word about my company. GeoSolutions has offices in Italy and the United States. We are a, oh my, I hope this is not going to happen often. I have some USB thing going on and on. Okay. I unplugged a few bits. Can you still hear me? Yes, we can hear you. The screen was kind of switching between the... Right. But now it's stable. Yeah, it seems to be stable. I should pull the stuff from the USB, hopefully it's going to stay quiet now. Okay, so back to GeoSolutions. Back to GeoSolutions. Yeah, it's a service company. We have 30 plus collaborators, 25 of which are engineers, so very much tech oriented. We support GeoServer, MapStore, GeoNode and GeoNetwork, and provide a number of services including support, deployment, custom development and professional training. And it seems that I unplugged one bit too much. Damn it. Okay. Okay. Right. So, we are a strong supporter of open source and, as you may imagine from this presentation, we are actively participating in OGC working groups, testbeds and the like. We also support the standards critical to GEOINT. Now, a bit of history about the OGC APIs implementation in GeoServer. At the beginning, there was WFS3. It was not called OGC API Features yet. It was 2017 and it was developed in a private repository for a while. And then a first big bang event, the WFS3 hackathon, happened, where we and others implemented WFS3 against our servers, verified whether it was easy or not, and provided feedback. And then a number of events after that, like Testbed-14, the Vector Tiles Pilot, the API hackathon, Testbed-15 and, going into 2020, various online OGC API sprints. Here is a bit of a story about them. So in March 2018, we joined the WFS3 hackathon and we did the first implementation of WFS3 in GeoServer in like two, three days. It was donated to the community as a community module called wfs3. Then we joined Testbed-14. In particular, we joined the compliance testing thread, where a first CITE test for WFS3 was developed, and GeoServer was one of the three implementations that successfully passed the CITE test. Then we also joined the Vector Tiles Pilot at the end of 2018. That was interesting because we started joining the concept of tiling with the concept of OGC API Features in terms of delivering vector tiles, and also publishing styles to clients to render maps client side through a new service called OGC API Styles. In 2019, WFS3 was renamed into OGC API Features and, at the API hackathon in London, other services started exploring implementing OGC APIs as well, and a notion of API Common, that is the shared bit between all the APIs, was formed and made into its own draft standard. Fast forward to 2020 and 2021, OGC has been organizing OGC API virtual code sprints. Each code sprint typically has one focus, or maybe two or three focuses at most, where developers implement the last version of one or more specifications. 
We typically choose one when we participate, and we use these events to keep the GeoServer API implementation up to date with the evolving definitions of the various services, because most of them are still in draft. Now, what are the common elements of OGC APIs? They are all based on OpenAPI. OpenAPI is an open initiative that provides a way to specify the definition of a RESTful service based on resources, representations, HTTP methods. In addition to that, each OGC API has a core specification, which tends to be very small and can be implemented in a matter of days, and then a bunch of extensions, which you may or may not implement as you choose, which add extra functionality. The common traits of OGC APIs are a landing page, which is your entry point into the API and basically contains links, and a conformance class declaration, where you can find what actually is implemented in the server in terms of extensions and APIs. For services that expose data, you will probably have a collections resource listing the collections and giving access to each collection. The API definition is typically linked from the landing page through a link with a rel of service-desc; it doesn't have a fixed position in the resource tree, although GeoServer typically puts it at slash api. Since we are talking RESTful services, everything is linked. In pretty much every kind of response you get from an OGC API service, you will get lots of links. For example, here we have slash collections, which has a number of backlinks to itself and to the alternate representations of itself. That is, maybe I'm asking for the JSON representation and then I'm going to get links pointing to the HTML representation or the YAML representation. And then to its neighboring resources, in this particular case, links to the child single collections. And at the bottom you see the structure of a link. So it has an href, a relationship, a type, which is the MIME type of what you're going to get if you follow it, and a title for humans to read. Each resource has a representation. Typically OGC API Common recommends using HTML for humans and crawlers and JSON for machine to machine communication. How do you choose a representation? Well, the Accept header is always there. And it's great for browsers because they always send an Accept header. And each server has a choice of a custom query parameter that may be used to force a particular representation; in the case of GeoServer it's f: f equals the MIME type you want. Each API is based on a tiny core, like the bare minimum for a working service, and a potentially large set of modular extensions for everything else. And you go to the conformance declaration to find out what is implemented in one particular implementation of an OGC API. Everything, I mean, not everything, but most of the APIs are still in a rough stage and changing as we speak. But there are a couple of specs which reached the stable version, 1.0. One is OGC API Features core, and the other one is OGC API Features coordinate reference systems by reference. Now, let's have a look at OGC API Features. In addition to what you can find in OGC API core, you will get items under each collection. Items are the features. And items slash item ID to refer to a single feature. The only supported CRSs are CRS84, in longitude-latitude order, or CRS84h if you also need an elevation. So that's the only CRS you get out of the box without implementing extra extensions. The schema is not required. So features can be anything. 
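To make the landing page and conformance mechanics concrete, here is a minimal sketch of a client walking those resources over plain HTTP. The base URL is an assumption (GeoServer typically exposes OGC API Features under a path like /geoserver/ogc/features, but check your own deployment); the links and conformsTo fields follow the structure described above.

```python
# Minimal sketch: discover an OGC API service by following links from the
# landing page and reading the conformance declaration. Base URL is a placeholder.
import requests

base = "https://example.com/geoserver/ogc/features"

landing = requests.get(base, params={"f": "application/json"}).json()
for link in landing.get("links", []):
    print(link.get("rel"), "->", link.get("href"))

# The conformance declaration lists which parts of the spec are implemented
conformance = requests.get(f"{base}/conformance", params={"f": "application/json"}).json()
for klass in conformance.get("conformsTo", []):
    print(klass)
```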
They can be simple, complex, heterogeneous. You can literally take an Elasticsearch database or a MongoDB with a load of heterogeneous JSON documents, and it's going to be a valid source for OGC API Features, which is something we couldn't have said for WFS. If you have a schema, you can link it using the describedby relationship. We have, of course, an OpenAPI definition, and there's a matter of style here. You can follow two approaches to your definition of the API: by providing uniform collection descriptions, so you have only one endpoint at collections slash collection ID, where the collection ID is a parameter, or by having one explicit resource for each and every collection. Uniform collections is simpler and scales better to thousands of collections, but has limitations in that you cannot say anything specific or unique to each collection. Every collection has the same set of parameters, the same description when it comes to the API, so it means that the parameters that you can use in queries against those collections are always the same. GeoServer currently implements this approach because, typically, GeoServer deployments tend to have lots of collections. There are statistics from GeoSeer that you can look at: the average GeoServer on the Internet has 1,000 layers, give or take. If you go for the distinct collection descriptions, like this example from ldproxy, then you can have different characteristics for each collection. So it means that for one collection you might have an extra query parameter that you don't have in others, for example, and it's very suitable for a small number of collections and more flexible, but also verbose. When it comes to accessing the single items, the items resource lists the content of a collection, and it can be a GeoJSON document or HTML or GML or anything, because, well, the representations that you implement could be pretty much anything. In terms of filtering, you have got a bounding box and datetime, and eventually extra parameters declared in the API document, which, as I said, GeoServer does not implement. So this is one example: we have a bounding box, a datetime with a specific time, and building state equals good is an example of the hypothetical query parameter unique to that collection. Paging, yeah, OGC API Features core defines a limit query parameter that you can use to page through, and then you have to follow links to get to the next and previous pages. GeoServer implements paging by using an offset or startIndex parameter, but it's just a specific choice. If you want to be compliant, then you go and look at the links and you follow them blindly. So each server can do paging using a different way of constructing the links. And that's all. That's all you have in OGC API Features core. So what about filtering, property selection, reprojection, transactions, and so on? Well, those are extensions. So with the coordinate reference systems by reference extension, you can support CRSs other than CRS84. You can discover in which native CRS the data is, you can reproject the output, and you can also query with a bounding box which is in a different CRS. This is finalized. GeoServer currently implements an older draft, and we are basically missing the storage CRS information, so it's not up to date, but pretty close. Filtering, filtering is currently in draft but, as Clemens was saying, getting close to completion. Filtering adds a notion of queryables, which is the set of properties that you can use to build a filter. 
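Here is a hedged sketch of the items access and paging pattern just described: request features with a bounding box and a limit, then follow the next links blindly, as the spec intends. The base URL and collection name are placeholders.

```python
# Sketch of OGC API Features paging: page through a collection by following
# "next" links rather than guessing offsets. URL and collection are placeholders.
import requests

base = "https://example.com/geoserver/ogc/features"
collection = "topp:states"  # assumption: any published collection id

url = f"{base}/collections/{collection}/items"
params = {"f": "application/json", "limit": 50, "bbox": "-100,30,-90,40"}

while url:
    page = requests.get(url, params=params).json()
    for feature in page.get("features", []):
        print(feature.get("id"))
    # After the first request the next link already encodes the query,
    # so drop the explicit params and just follow the link blindly.
    params = None
    url = next((link["href"] for link in page.get("links", [])
                if link.get("rel") == "next"), None)
```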
Filter, filter-lang and filter-crs, with the notion that you can implement one or more filtering languages in your server. The first draft of the specification used CQL, the one supported also by GeoServer, but public feedback asked for changes, and the current draft is implementing CQL2, which is similar but not the same. So GeoServer right now implements CQL; we will have to create a new parser for CQL2 and implement support for it. So here is an example of a query asking for a cloud cover between 10% and 20%, and a couple of other properties, as CQL2 text, and another example implementing the same filter but in JSON. So we have got these two options: text, which is human readable and compact, and CQL2 JSON, which is machine processable and, well, generally fits better inside a larger JSON document. There is a notion of transactions in draft, as Clemens said, with POST, PUT, PATCH and DELETE against a single feature. It's not supported by GeoServer as of now, but it's probably going to be implemented in a future OGC API code sprint. Now let's have a look at the GeoServer HTML representations for OGC API Features, because so far we have just seen paths but nothing visual. So from the service capabilities that you normally find in your home page, you follow the link from Features 1.0 and you get to the landing page. The landing page of OGC API Features points to the usual aspects, that is the API definition, the collections, and the conformance. This is an HTML representation of the API, which is dynamically generated from the JSON. And it's also interactive, so you can also click on these paths and try the operations. Slash collections is implemented like this. It lists all the collections, providing title and description, contents, and so on. Links to queryables, links to data. When I follow the queryables, I get the list of properties that can be used to build filters. In the case of GeoServer, that's all the properties as of now; we are probably going to make that configurable in the future. When you follow the links to items, you get a little table providing you all the attributes besides the geometry, and the paging at the bottom. So that was an idea of how the HTML representation can look. Just know that all of these representations are driven by FreeMarker templates, so you can go in, change the CSS, change the logos, change the contents of the pages to suit your particular needs. Now moving to another API, Coverages. OGC API Coverages is probably the simplest WCS ever, and each collection, at least in the GeoServer implementation, is a single coverage. It's the last API we implemented, in the last OGC API code sprint, and it's currently incomplete. Anyways, what makes a coverage in OGC API Coverages? The domain set, that is the description of the spatial and temporal domain; the range type, that is the data structure, which bands and which types; the range set, the actual pixel values; and the metadata, that is anything else. And if you put them all together, you get the entire coverage. The thing is, in OGC API Coverages, you have resources to access each and every one of the single components, or you can access the entire coverage. So if you go for coverage extraction, you can perform an extraction by bounding box and datetime. You can use a bounding box, or you can use domain subsetting. There are basically two different ways of expressing the bounding box. 
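A hedged sketch of a coverage extraction request along these lines: ask for a spatial subset of one coverage in a specific output format. The base URL, collection name and parameter spellings follow the draft OGC API Coverages specification and are placeholders, so they may differ between servers and spec versions.

```python
# Sketch: extract a bounding-box subset of a coverage as PNG. The endpoint and
# parameter names follow the OGC API Coverages draft and are placeholders here.
import requests

base = "https://example.com/geoserver/ogc/coverages"
collection = "nurc:mosaic"  # hypothetical raster collection

params = {
    "bbox": "7.0,40.0,11.0,44.0",  # minLon,minLat,maxLon,maxLat in CRS84
    "f": "image/png",
}
resp = requests.get(f"{base}/collections/{collection}/coverage", params=params)
resp.raise_for_status()

with open("subset.png", "wb") as out:
    out.write(resp.content)
print("wrote", len(resp.content), "bytes")
```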
Here in this case, we are accessing one coverage, which is a satellite image, asking for the output in PNG and providing a particular bounding box. So it's a pretty easy API to use for that. As I said, there are a number of missing bits. The range set is missing, so we are not describing the bands, and we are not supporting band selection. Scaling is missing, and the coverage styles are missing. We hope to implement those as OGC API Coverages starts being used in contracts. Maps. We have also a small implementation of the Maps API. Maps adds on top of collections a notion of a style, which is listed as collection metadata, and an optional info resource that you can use to do feature info. So basically under collections you have got styles, and for each style you can access map or map slash info. The map resource fetches a map, so it's a pretty close equivalent to GetMap. But unlike GetMap, all the parameters are optional. So the link that you see there, just saying, okay, give me topp:states with the style population, is working. I don't have to specify a bounding box, I don't have to specify width and height. Everything is filled in by the server if nothing is provided by the client, but the client can provide those if they want to get a more specific answer. The info resource basically adds on top of the map resource an i and a j, the position of a pixel that you want to query, and you get back the feature info at that position. Tiles. Tiles is interesting. Tiles, like many others, is a building block. A building block means that it's not defined as a standalone service. You can implement it as a standalone service, and I believe GeoServer does. But you can attach this notion of tiles to basically any resource that can be split into tiles. So GeoServer implements tiled data from one collection and tiled maps from one collection. Tiled data means vector tiles or raw coverage tiles, while tiled map means a rendered map that is split into tiles. As I said, it's a building block, so you can take anything, a collection, a set of collections, a set of maps, or the output of a WPS process. And if that one resource can be sliced into tiles, then you can attach the tiles building block to it and serve tiles out of it. Each tiles resource provides URL templates, and this is interesting because, well, you know that all the resources in an OGC API are linked to each other, but when it comes to tiles we have a practical problem: there can be billions of tile resources. So we cannot practically link to them. What we have instead is a URL template that you need to fill with a Z, a Y, and an X in order to address a particular tile. It's also interesting that we have a metadata resource for tiles, which describes the tileset, and it can be implemented as a TileJSON. TileJSON is an open specification by Mapbox, and by implementing a TileJSON we allow clients that are used to the Mapbox world to interact with GeoServer directly. So we have made the test with Maputnik, which is an open source style editor, against tiles served by GeoServer, providing the URL to the TileJSON. It works as long as GeoServer is under HTTPS, otherwise it doesn't. Okay, enough about tiles, let's talk about styles. Styles is interesting because it's an API without collections, because it's an API without data. It's an API that talks about styles. So GeoServer has always had this internal style catalog where we have all the styles, and they can be linked to the data. 
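To illustrate the URL template mechanism for tiles described above, here is a minimal sketch that fills in the tile matrix, row and column values. The template string, collection and requested format are placeholders; a real client should read the template and the tile matrix set from the tileset metadata (or the TileJSON) rather than hard-coding them.

```python
# Sketch: tiles are addressed through a URL template instead of per-tile links.
# The template below is a placeholder modelled on the OGC API Tiles draft.
import requests

TEMPLATE = ("https://example.com/geoserver/ogc/tiles/collections/topp:states"
            "/tiles/WebMercatorQuad/{tileMatrix}/{tileRow}/{tileCol}")

def fetch_tile(z, row, col):
    url = TEMPLATE.format(tileMatrix=z, tileRow=row, tileCol=col)
    resp = requests.get(url, headers={"Accept": "application/vnd.mapbox-vector-tile"})
    resp.raise_for_status()
    return resp.content  # raw tile bytes, vector or image depending on the server

tile = fetch_tile(3, 2, 1)
print(len(tile), "bytes")
```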
Well, the Styles API exposes this internal catalog, allowing clients to fetch the styles and eventually edit them and update them. So this is pretty interesting in terms of delivering raw data, because you can fetch raw data either by using OGC API Features, Coverages, or Tiles, and then also fetch the style that is suitable for that data and render everything client-side. In the case of GeoServer, we have a model where we support multiple style languages. In the case of a style language which is not SLD, we also offer on-the-fly translation to SLD. So each style resource has a bunch of metadata, and then it links to the stylesheets. The stylesheets can be of more than one type. So in this case, we are linking to a CSS, but also providing an on-the-fly converted version in SLD. We also built a Styles API client during Testbed-15 as part of MapStore, and part of that effort now lives inside the MapStore style editor, and it's also used inside GeoNode. We had the style editor that was using the Styles API to locate the suitable data for display, allowing the editing of the style and then saving back the results to GeoServer. Finally, there are more APIs. I haven't included them in this presentation, but GeoServer also supports the STAC API as a community module, and I talked about it briefly this morning in my Earth Observation presentation. We implement a tentative DGGS API for discrete global grids, and there are also a number of APIs which GeoServer does not support yet, but that we would like to add either through sponsoring from customers or by participating in OGC API sprints, such as Records, Processes, Routes, and Environmental Data Retrieval. I think that more will pop up, because I have a feeling that the direction is to go towards smaller APIs, more specific, and more of them. That's all. Thank you very much, Andrea. I think we have one question for you. It says, any suggestions on handling subcollections? Use a thematic server name and then leave the subcollection in the collections URL? This concept is already present in WMS. In the WMS capabilities, you can have a tree of layers, and some layers can contain other layers. In OGC APIs, the collections are flat, but there has been quite a bit of discussion during testbeds about nesting collections, and I think that's going to be an extension to OGC API Common that every other API can implement in order to have nested collections and serve data through a hierarchical organization. But GeoServer does not implement anything like that at the moment. A question that I have: in the GeoServer roadmap, does it contemplate the support of all these legacy standards like WFS and WMS, not only maintaining them, but upgrading them along with the newer standards of OGC APIs, or do you plan at a certain point to switch from one to the other? One thing that I have to say is that it's not like we plan much, because the GeoServer PSC doesn't have a large bag of money to drive the development. The development is really driven by support and development contracts that companies have with their customers. So it's actually up to them to drive GeoServer wherever they want to drive it, but I can give you a gut-feeling impression of mine. Right now, we are seeing only very few early adopters of the OGC APIs, and they are typically research institutes doing prototypes. The other customers are typically returning customers. 
They already have a deployment with the classic OGC services, and they are generally satisfied with them. What I see happening is that eventually we will switch the OGC API modules from community status to official extension, and we will start seeing deployments that have both the old and the new services. I think that's going to stay that way for a long time, because many organizations have trouble switching all their clients that they have and their internal developments and whatnot, wholesale to a new API. I expect that they will migrate slowly, and having both the old and the new at the same time deployed in the same server, serving the same data is going to enable a progressive upgrade of the infrastructure. That means that the OGC services, the classic one, will stay there for quite a while. It's great that G-server is supporting that transition. It's a lot of standards to support. Actually, from the code-based point of view, all the OGC API implementations are built on top of the classic OGC services. Eventually we will have to take the engine, detach it from the classic OGC services, and then allow people to deploy OGC API features without deploying WFS, which at the moment is not possible. But the direction that I see us going is that one. Okay, fantastic. Andrej, we have another question. Will the user role management be present for APIs? Yes, it's already present. The user role management is already, I mean, it's built-in and a part of G-server. When you define access rules for layers, they apply regardless of how you access those layers. It doesn't matter if you use the OGC services or the OGC APIs, the same rules apply because they are applied at the catalog level, so deep down into G-server. The protocols are the highest level. They all share the same security subsystem. And also for service access, the OGC APIs already plug into the service security, so you can already say something like, okay, styles, transactional operations, I'm going to allow them only to a particular type of user, just like we do for WFS transactions. So it can already be done. Perfect. Another question, will the authentication methods will also be present? Yes, exactly the same consideration. Authentication methods are a layer on top of every HTTP request that the G-server receives, so they should already be working with OGC APIs, although I haven't tried. Okay, so if there are no more questions, I think we reached the time of presentation. Thank you very much, Andrea, for this fantastic presentation, and also thank you to all the other speakers who contributed to this very refreshing session today. And also all of the participants that stayed here, especially those in Europe, which have a late night schedule. I hope you enjoyed the session. I certainly did. I see a lot of clubs. So thank you, everyone, and good afternoon and good evening. Thank you. Bye-bye. Good night. Bye-bye. Bye-bye.
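Tying together the styles discussion and the security questions above, here is a hedged sketch of a tiny Styles API client: it lists the styles, reads each style's metadata and downloads a stylesheet, authenticating with the same GeoServer credentials that gate the classic services. The paths, JSON field names and user are assumptions for illustration, loosely following the draft OGC API Styles layout rather than any guaranteed GeoServer URL.

```python
# Sketch of a tiny Styles API client following the pattern described above:
# list the styles, pick a stylesheet for each, and download it. Paths, media
# types and credentials are assumptions; the same GeoServer users/roles apply
# to these endpoints just like they do to the classic OGC services.
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://example.com/geoserver/ogc/styles/v1"   # hypothetical
auth = HTTPBasicAuth("editor", "secret")               # hypothetical user

styles = requests.get(f"{BASE}/styles", params={"f": "application/json"}, auth=auth, timeout=30)
styles.raise_for_status()

for style in styles.json().get("styles", []):
    meta = requests.get(
        f"{BASE}/styles/{style['id']}/metadata",
        params={"f": "application/json"}, auth=auth, timeout=30,
    ).json()
    # Each style links to one or more stylesheets (e.g. CSS plus an on-the-fly
    # SLD translation); prefer SLD here, otherwise take the first one available.
    sheets = meta.get("stylesheets", [])
    link = next(
        (s["link"] for s in sheets if "sld" in s.get("link", {}).get("type", "").lower()),
        sheets[0]["link"] if sheets else None,
    )
    if link:
        body = requests.get(link["href"], auth=auth, timeout=30)
        print(style["id"], link.get("type"), len(body.text), "chars")
```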
Join this presentation for an introduction to OGC API Features, Styles, Maps and Tiles (and more!), the state of their development, their extensions, as well as how well the GeoServer implementation is tracking them. The OGC APIs are a fresh take on geospatial APIs, based on web API concepts and modern formats, including: a small core with basic functionality and extra functionality provided by extensions; OpenAPI/RESTful based; GeoJSON first, while still allowing servers to serve data in other formats; no mandate to publish schemas for data; improved support for data tiles (e.g., vector tiles). The presentation will cover several APIs, as well as demonstrate the progress achieved by GeoServer in supporting them.
10.5446/57193 (DOI)
Hello, everybody. The next presentation is about geo-server and you have Lissando Parma to talk about this. He is a senior DevOps engineer at GeoSolutions. He designs and implements GeoSpecial Systems based on geo-server and other great open source projects. Hello, Alessandro. Hi Diego, can you hear me? Yes. Right. Thanks for the nice introduction. My name is Alessandro Parma. I work at GeoSolutions as a DevOps engineer as Diego said. Today we're going to talk about geo-server specifically, about how to deploy and operate geo-servers from a DevOps perspective. It's going to be cloud-oriented. So we're going to talk a bit about cloud deployments and how to migrate your GeoServer cluster to the cloud. Two words about GeoSolutions, the company that I work for. We're based in Italy and the US and we have worldwide clients. We comprise of more than 30 collaborators and 25 engineers. Some of the products we work with are listed here, GeoServer, MapStore, GeoNode, GeoNetwork. We offer support services, enterprise support services, deployment solutions, so we can help out with your deployments, customized solutions, so of course you can reach out for help with development and professional trainings as well on all of the products listed above. We support affiliations, so we support strongly open source, as you can tell by the list of products we work with. We collaborate and participate in many working groups, including OSGeo, OGC and USGIF. Okay, let's jump into the presentation. Here's the agenda, so what we're going to talk about in detail. Cloud computing, we're going to give a brief intro, just the terminology so that you can understand what we're talking about in case you don't know yet. Might may be relevant to you, so without the pros and cons of moving to the cloud, why you should be considering using the cloud if you're not yet using clouds and the migration process. So if you're interested in it, we're going to talk about migrating to the cloud in general as well as specifically for your GeoServer, so you may be thinking about migrating your GeoServer cluster from a non-premise to the cloud, we're going to talk about that. We're going to check what are the common pitfalls as well as give you some tips gained from our experience. We're going to talk as well about containers, orchestrators and specifically Kubernetes, which is quite relevant I think. Nowadays it's gaining more and more popularity and you could benefit from deploying your GeoServer in a Kubernetes orchestrator, why? And then two words about monitoring and logging, it's a bit different in the cloud compared to traditional deployment, so we're going to talk about that as well as how to gain insights from your GeoServer cluster. So cloud computing, what is cloud computing? Basically means computing services over the internet, so servers, storage, databases, networks, software, whatever, every kind of service or resource that is offered to you by a provider over the internet. That's supposed to hosting your own thing with your own hardware that you bought locally. And there are several pros and cons that we can talk about in terms of cloud computing. Some of the pros, the big pros of cloud computing are often mentioned elasticity, so the ability to adapt to the workload changes by provisioning and deprovisioning resources on demand. So at the time the load increases, you'd be provisioning more resources, when the load goes down, you'd be decommissioning resources. 
For example, this would be AWS EC2, if you're familiar with AWS or similar, an auto scaling group, so you could shrink and enlarge your EC2 instances pool based on load, for instance. Another typical example would be elastic storage, so you can get some storage from a cloud provider and it will adapt based on the amount of this space you need. Another pro of cloud computing is scalability. So some of these services provided by cloud providers can change and allocate more resources depending on the need of your application. So let's say the Vito Machine, you can change the type of Vito Machine based on the amount of core or RAM that you need, or scaling a database based on the load that your application, your own application is demanding from the database. Another pro of cloud computing, reduce time to market, so especially relevant for business and management, if you think about IIS, so infrastructure as a service, for instance, you can ask for Vito Machines, databases, whatever to the cloud provider and they're immediately available to you. You don't have to put down the money, get your own infrastructure, get your own engineers and so on, so significantly shorter time to market. Security and privacy maybe could be considered a con of using cloud services. You're using cloud services, you're basically moving your local application and resources to the cloud where they may be running along other services by other providers, so you need to be aware of that. Your applications may be running on a server that is used by other people and there are some security and privacy concerns following that. The pro would be lower costs, so you can significantly reduce the costs of hosting your applications when you move to the cloud, especially if you leverage the elasticity of cloud services. So if you pay attention to scale and book resources depending on the load of the system, then it can lead to a significant reduce of cost. If you just book a ton of resources from the cloud provider without adapting it to the load over time, it can increase the cost of hosting your services because they're not cheap usually. There are a few different deployment models in the cloud. You have public cloud, which is the most widely used cloud deployment model. Its cost effective is what I was saying before, so the cloud provider is making the infrastructure available to you and other people, other companies to run their software. They're all running on the same infrastructure. Then we have private cloud. Private cloud is basically a dedicated cloud infrastructure for your organization or for yourself. It can be quite costly and it's usually reserved to people working for government or schools or agency or other environments where you need absolutely secure environments and you're dealing sensitive data. So in the private cloud you're isolated from other services, no other people is using the infrastructure, so it can be considered safer in terms of security and privacy. Then we have hybrid cloud. Hybrid cloud is a mix of the two. Mix of the two is a combination of public with a private environment. That's typically implemented with restricting at the networking level the access and the communication between the services. So your application is running next to other applications, but they're not allowed to talk to your applications. So this is kind of combines the benefits of the two. Right, let's talk about moving geo server to the cloud. So we talked about the cloud, a brief introduction. 
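To make the elasticity idea above concrete, here is a minimal, hedged sketch of an EC2 auto scaling group with a CPU target-tracking policy, using boto3. The group name, launch template and subnets are hypothetical placeholders, and a real deployment would also need a load balancer, health checks and IAM wiring.

```python
# A minimal sketch of the "elastic pool of instances" idea from above, using boto3.
# All names (launch template, subnets, group name) are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="geoserver-asg",
    LaunchTemplate={"LaunchTemplateName": "geoserver-node", "Version": "$Latest"},
    MinSize=2,              # never fewer than two GeoServer nodes
    MaxSize=8,              # cap the bill
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",
)

# Grow and shrink the pool automatically, tracking average CPU around 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="geoserver-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```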
Now we're going to talk about how you could migrate your local cluster to the cloud. There are a few methodologies. First one is rehost or also known as lift and shift. It's an IAS approach, so infrastructure as a service approach. You're booking the resources from the provider and you're redeploying the application stack yourself without basically changing anything. So you're booking your virtual machines and then deploying yourself the applications just like you would do on your local on-premise environment. In this scenario you're not really leveraging all the services offered by the provider, so managed service like databases or other kind of storage service for instance. You're just taking your local environment and uploading it to the cloud. It's a relatively easy thing to do, so it's quite common pattern as well. The other approach would be refactor or lift, tinker and shift. In this scenario you tweak a bit your architecture and adapt it to the cloud, to the environment where it's deployed on. This would be a pass approach, so platform as a service approach. You're still booking the resources from the provider, but then you're also using some of the services offered by the provider. An example would be a managed database, so you're not using the standard self-provision database, you're using one of the providers. Finally, revise or build and replace, that's a more expensive approach in terms of resources and time. You would basically rewrite your application to leverage all the services available. This requires quite a lot of fore-planning and knowledge to be implemented. As we were saying, rehost means taking your local resources and moving them to the cloud. How would you do that in a practical term? You need to choose the right kind of virtual machines. For a geoserver that would be a compute optimized virtual machine. So, geoserver likes fast CPU cores. As a rule of thumb, you can get four core virtual machines with four gigabytes of RAM, for instance, and redeploy to the cluster. You would migrate the instances, configure your application, upload your data and you're done. That would be the rehost approach. Refactor, use some of the services of the provider. As we said, manage database has quite a lot of advantages in my opinion. You can use backup and restore features. You can use snapshots, auto upgrades and so on. So, there are quite a lot of nice things that you don't need to worry about. You can leverage some of the storage options provided to you. You need to be careful on the kind of storage that you use for each component of your application. For geoserver that means you need to choose the right storage for data dear, cache, data files and so on. By the way, also, COGS are supported by geoserver. In that case, you would be leveraging some object storage offered by the provider. You can think about storage cache styles in the object storage too. Here's a small diagram of geoserver cluster deployed in EKS. So, Kubernetes as a service offered by AWS is a typical layout of how you could do it and the kind of storage that you can use for each one of the components we were talking before. FileShare, which is basically an NFS, you could use it for spatial data. So to share spatial data between the instances and you could think about using it for cache styles for instance. Has a couple of advantages. Of course, it's a shared file system. So all of the instances distributed across the nodes can use it and it scales pretty well. Block storage would be your regular local storage. 
It's not shared. So you shouldn't be using it for data, for instance, if you want it to be available to all the instances. Benefits low latency. So it's good fit for temporary storing files like audit files and log files. Maybe cache styles. In that case, you would have a non-shared cache between your instances, your geoserver cluster. So there are some implications about that. So unless you really need very, very low latency, I wouldn't use it for cache styles. Finally, block storage, block storage services like AWS S3 brings a ton of scalability. They scale pretty much indefinitely. It's cheap. So it's good to store lots of data. And it's shared again. So you could think about using it for cache styles, for instance, or cogs. So a quick recap. Small checklist basically about things to consider for a cloud migration, go for computer oriented instances of geoserver, choose a migration strategy, either lift and shift or refactor, consider using services offered by the provider like manage databases, and pick the right storage for the purpose and the needs of your project. Here's another topic that is linked to the previous one to Kubernetes. You can also use Helm to deploy your geoserver cluster. There are some details here. It's basically a package manager for Kubernetes. So it easily packages all of your software into a set of files that you can deploy in your Kubernetes cluster, allowing you to kind of template them and adapting to your environment. So it's a pretty nice tool. I advise you check it out. There are some resources available on the internet, like Docker images. You can find the links in here. And the Helm chart is coming too as well. So we're working on it. Keep an eye on our blog and you will find an update over there. There's also free webinar. If you're interested in running geoserver on Kubernetes specifically, that we, I hosted about a month ago, it's available for you to take a look at. Two words about logging and monitoring real quick. So it can be tricky in a cloud environment to keep an eye on everything. The environment is pretty much dynamic and distributed. So you have instances starting and stopping. You have distributed instances of your application across nodes. So it can be hard to identify and debug problems in this environment. We have some tips for you. So you should consider aggregating and centralizing all the logs to a single location. That is easy to navigate and filter. So you don't have to go around all your nodes and to check out the logs of the application. Keep in mind that nodes can spawn and also go away. So a node can die if you keep your logs on a specific node and you don't ship it over to a central location, you run the risk of losing it. And set up shippers to collect and send out the logs to the central service. But metrics, of course, very important. There are performance indicators like response time throughput, uptime error rate and so on. And auditing. Auditing is a nice feature offered by an extension. GeoServer extension called monitor. It basically tracks requests made to GeoServer and export the information about these requests into audit logs. Adlocks that you can collect and ingest and create pretty dashboards from. Here you can see an audit event example, the kind of information you have in it, performance layer errors and so on. And here's an example of a small dashboard with performance-related information that you can look at. Things like response time, slow layers, cache hits or miss and so on. A few more examples. 
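Before moving on to more dashboard examples, a quick aside on the object storage option mentioned above. This is not the layout the GWC S3 blob store plugin actually uses internally — it is just a sketch of why an S3-style store works well as a shared tile cache: tiles keyed by layer/z/x/y, writable and readable from every node. Bucket and layer names are assumptions.

```python
# Illustration of the "shared object storage for tiles" idea discussed above.
# Not the internal layout of the GWC S3 blob store plugin -- just a sketch of
# storing and fetching tiles keyed by layer/z/x/y in a bucket.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-geoserver-tile-cache"   # hypothetical bucket

def put_tile(layer: str, z: int, x: int, y: int, png_bytes: bytes) -> None:
    s3.put_object(
        Bucket=BUCKET,
        Key=f"tiles/{layer}/{z}/{x}/{y}.png",
        Body=png_bytes,
        ContentType="image/png",
    )

def get_tile(layer: str, z: int, x: int, y: int) -> bytes | None:
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=f"tiles/{layer}/{z}/{x}/{y}.png")
        return obj["Body"].read()
    except s3.exceptions.NoSuchKey:
        return None   # cache miss: render the tile, then put_tile() the result
```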
Response time, IP of the requesters and so on. And finally, auditing. So remember to set up audits for your services. So you're checking the services being up and down, you're checking for errors, error rates, auto memory errors and so on. Remember to use different channels depending on the severity of the problem. So if it's a problem that needs immediate attention, then you should consider paging someone. If it's a less severe problem, maybe you can just send out an email and avoid waking someone in the middle of the night. That would be nice. And then you can think about automating the fix to the problem. So you can put in place some scripts and tools to try to fix the problem for you without waking up anyone. Examples of these would be watchdogs and health checks. So you're kind of probing the service and checking the response just to make sure it's healthy and eventually restart it if needed. Here you can find some useful links about what I've been talking about. The juice solutions website, the webinars, cloud optimized geotifs and health. And that's all I had Diego. So I think if there's any question for me. A moment, yes, I have some questions. The first one is, where can the dashboards with data from audit logs be viewed? So it's a web application. The one specifically the one that I was showing you is part of a stack of applications by Elastic, the X-Tac. It's a web-based application called Kibana. So you can access it from the Internet easily. And you'll find all the dashboards and visualizations. The ones that I showed you are just a subset, of course. There's a lot of things that you can do and information that you can view depending on what you're interested in. So it could be business related information, some analytics for managers or people that want to know how the service is being used or could be metrics, logs for operations and so on. Okay, another one is how to use log data to tune the application. Do you have any use cases to simplify? Yes, that's a nice question. If we're talking about audit files, for instance, the kind of information you have in them can be very relevant if you're looking for improvements in terms of performance. So you can extract information for audit files about caching, for instance. So you can realize that you're not leveraging the cache as much as you think and try to change things. So take a look at your client application if it's not configured properly to use the cache, for instance. Or you can find some outliers. So you can be very fine grained with these tools and find slow requests and from them try to reverse engineer and try to understand why they're being that slow and then fix the layer. Maybe it's a configuration issue, maybe it's a styling issue. So yeah, it can definitely be used, I would use Kibana for that. Okay, another one. What storage is recommended for J-Web cache? Can you make use of S3 as a blob storage for the cache? Yeah, yeah, yeah, that's another good question. There's no definitive answer, so it really depends on your use case. S3 can be used by means of a plugin available for your server. So GWCS3 plugin allows you to store cache tiles in S3 and that would give you a very good scalability. So you would be able to support a ton of requests per second thanks to the scalability of the S3 service itself. There are other options of course, a file share like an NFS or similar, you can use to share the cache tiles between all your instances. 
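As a small aside on the audit logs just discussed: the monitor extension's output format is template-driven and therefore configurable, so the sketch below simply assumes the events have been exported as CSV with columns such as resources, totalTime and cacheResult. The goal is the same kind of insight as the dashboards — average response time and cache hit rate per layer — computed with plain Python.

```python
# Sketch of mining audit logs for slow layers, as discussed above. The audit file
# format is configurable (template based), so this assumes a CSV export with at
# least these columns: resources, totalTime (ms), cacheResult.
import csv
from collections import defaultdict
from statistics import mean

times = defaultdict(list)
hits = defaultdict(int)

with open("geoserver_audit.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        layer = row["resources"]
        times[layer].append(float(row["totalTime"]))
        if row.get("cacheResult", "").upper() == "HIT":
            hits[layer] += 1

# Top ten slowest layers by average response time, with their cache hit rate.
for layer, samples in sorted(times.items(), key=lambda kv: -mean(kv[1]))[:10]:
    print(f"{layer:40s} avg={mean(samples):8.1f} ms  "
          f"n={len(samples):6d}  cache-hit={hits[layer] / len(samples):5.1%}")
```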
Maybe it would have a lower latency in that case, so in terms of pure performance it would be a bit faster. But it can be a bottleneck if you have a very, very high load on your system, so if you're serving many, many requests. There's a question: is it possible to use the memory cache of an API gateway in front of GeoServer? Not that I'm aware of. I may be missing something, but I'm not aware of any way to use it directly from GeoServer. Another one is: GeoServer plus Kubernetes is a great competitor to the classical ArcGIS Server. Have you also helped clients to move from ArcGIS Server to GeoServer and take the revise and rebuild approach? Yes, yes, that's something we have done a few times with different clients already. And we keep posting regularly in our blog useful information on how to ease the transition. So if you head over to the blog, you will find some relevant blog posts about the topic. And yeah, that's something we can do. We can help out with revising and migrating your cluster to the cloud. Okay, one more. When should I consider distributed GeoServer? Is there a threshold of usage where it is optimal? Should I consider just sharding the data source and keeping a single GeoServer node? Yeah, another very good question. So if you're running a single instance of GeoServer, no matter how good your data store is and how scalable it is, at some point you will hit a bottleneck, either at the machine level, the operating system level, or in the GeoServer code itself. So that's why it's useful for production systems, and systems that have high load, to be able to scale out. And you would need to set up more GeoServer instances to overcome these kinds of limitations, especially distributed across multiple nodes; that way they would not compete for resources on the same node, for instance CPU. They would not try to steal all the CPU from each other. Okay. Thanks. Thanks, Alessandro. There are no more questions. Would you like to talk about anything more? Thank you. I don't know. We have the contact info in the slides that I shared before. If you have any questions, any more questions, you can reach out to us using that contact information or head over to our website. We keep publishing interesting things in our blogs. Okay. You have one more? One question. Yeah. Is it advisable to use a single GWC in front of a scalable cluster of GeoServer nodes? Okay. So, good question. It wouldn't be highly available. If you set up a cluster of GeoServer nodes so that they are highly available in case one of them dies, and then you set up a single GWC instance in front of GeoServer, you're creating a single point of failure again. So, if that node goes down, then your service goes down. And that's not good. We typically recommend using the GeoWebCache integrated into GeoServer, without setting up a dedicated node in front of it. Okay. It's the last one. I would like to thank all the presenters and the audience. And it's the end of the day for us. And see you all tomorrow. Okay. Bye-bye. Thank you all. Thank you. Bye-bye. Bye-bye.
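On the "is there a threshold" question above, a crude way to feel out where a single node saturates is to fire increasing numbers of concurrent GetMap requests and watch throughput flatten and latency climb. The endpoint and layer below are placeholders, and for anything serious a proper load-testing tool is a better choice; this is only a sketch.

```python
# Crude probe of the single-node threshold discussed above: fire N concurrent
# GetMap requests and watch req/s and latency as concurrency grows. URL and
# layer are placeholders; use a real load-testing tool for serious benchmarks.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://example.com/geoserver/ows"   # hypothetical endpoint
PARAMS = {
    "service": "WMS", "version": "1.3.0", "request": "GetMap",
    "layers": "topp:states", "styles": "", "crs": "EPSG:4326",
    "bbox": "24,-125,50,-66", "width": 512, "height": 256, "format": "image/png",
}

def one_request(_):
    t0 = time.perf_counter()
    r = requests.get(URL, params=PARAMS, timeout=60)
    return time.perf_counter() - t0, r.status_code

for workers in (1, 2, 4, 8, 16, 32):
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(one_request, range(workers * 10)))
    elapsed = time.perf_counter() - t0
    ok = sum(1 for _, code in results if code == 200)
    avg_latency = sum(t for t, _ in results) / len(results)
    print(f"{workers:3d} workers: {ok / elapsed:6.1f} req/s, avg latency {avg_latency:6.3f} s")
```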
In this presentation we will share with you the lessons we have learned at GeoSolutions when deploying and operating GeoServer, as well as some common patterns for the migration of on-premise GeoServer clusters to the cloud. We'll share tips on how to: apply best practices to migrate your existing GeoServer cluster to the cloud; gain insights into your GeoServer cluster using centralized logging and the Monitor plugin; avoid common bottlenecks and best set up a distributed, scalable GeoServer cluster; and work with containers and container orchestrators like Kubernetes. Cloud computing is revolutionizing the way companies develop, deploy and operate software, and geospatial software is no exception. With the benefits of cloud-based deployments ranging from cost savings to simplified management, flexibility, lower downtime and the scalability of dynamic environments, it is easy to understand why more and more companies are migrating their on-premise systems to the cloud, but cloud-based setups have their own set of hurdles and challenges. The migration of the services itself can be challenging. Monitoring, debugging and scaling applications are very different from what you are used to.
10.5446/57194 (DOI)
We are going with the next talk. We are going with the next talk. It is about development and prospects of RIE. We are using this web app using Sysium. We are going to welcome to Haydmitty. Baba, who is going to talk about that. Haydmitty, if you are there and you can hear us, everything is yours. Can you hear me now? Yes, I am here. Thank you. It is all yours. Can I start my presentation now? Yes. First of all, let me share today's presentation slide. Here. This is today's presentation slide. If you could not catch well, please open that slide. My name is Hidemith Baba and I work at UKIA Inc. Today, I am going to talk about open source software that allows you to make this project with coding. Today, this OSS has been released. Yay. This is my problem. Sorry. First of all, to get you the basic idea of RIE. Let me show you demo video. This is what we can do by using RIE. You can map data on the earth with GUI and also you can play some points. Also, this project can be published. The published project can be seen by everybody online. This is my plan for today's talk. I will start off introducing myself and our company. Then, I will talk about RIE. Which is racing today and followed by an overview of the project. Including how we can develop it and its key features and technical points. Finally, I will talk about future plans for RIE. Once again, I am Hidemith Baba and I am a product owner and one of the engineers who is RIE. I enjoy communicating with people online and I hope something I say will leave a lasting impression on you. Also, I love value. Next, let's talk about us. We are UKIRE Inc. It has been more than four years since we all founded. Our business is web content creation and database development. We have 12 full-time employees. Two years ago, breaking national record in Japan, we raised 3 million yen in crowdfunding. Our company was founded by members of what is now the University of Tokyo's graduate school, Watanabe Hidenori Labo. It is famous for Hiroshima archive. Watanabe's labo specializes in data visualization using a digital earth. Through this connection, we have received many orders for digital archive protection. We have been using CISM as our main tool for this. Here are a few examples of projects we worked on. The first is a project we were commissioned to do by Miyami Apple City in Japan. It is a map that displays local information so that local residents and tourists can learn about the characteristics of the region. The second project is a joint research project between Topan Printing and the University of Tokyo. The last project is a business-based project, which was created with the intention of preserving the beautiful scenery of each region in Japan. Through many projects like this, from clients ranging from local governments, businesses, to NPOs, we have gained a success that has allowed us to build products like rears. We are also actively involved in developing open source software. This is an open source library called RISM that we developed. It allows developers to use CISM as a React component and has gained over 400 stars on GitHub with a global user base. We have had workshops at ForcefulZ in North America, as well as received entries from companies in the US, Israel, and other countries to collaborate with us through all these activities. We are also developing several other packages in the process of developing rears. The first one is QuickJS M-script in sync. It enables object exchange between the browser and the QuickJS. 
The next one is ReactAlign, which is a component alignment system with drag and drop in React. And Cesium D&D enables drag and drop of entities in Cesium. And in this way, through all this development, we are committed to sharing our technologies with people all over the world and promoting development with our contributors. Thank you for letting me get through our background. Now, let's move on to the main topic, Re:Earth. First question: what is Re:Earth? Re:Earth is a free, open, and highly extensible web-based platform. First of all, let me give you an overview of the service. It is a tool that allows anyone to map their data without coding. Re:Earth aims to be the most innovative web GIS platform in the world. If you take away anything from today, this is the core of Re:Earth: an awesome web GIS. Okay, now let's take a look at some background to the project, the challenges this software addresses, and why we decided to start this project. Through collaboration with the Watanabe Lab of the University of Tokyo, famous for its Hiroshima Archive, we have been able to successfully build the foundations of Re:Earth, provided with their know-how in data visualization. And using our previous experience, we have created something we think is very useful for the OSS community. So why is it useful? I'll explain what society needs and how the related technologies are evolving. In recent years, the spread of infectious disease and the development of technologies have led to the need for structural changes in society through the integration of cyberspace and physical space. There are many possible use cases, such as streamlining administrative procedures, simulation and analysis of disaster damage, and streamlining infrastructure maintenance. At the same time, the basic technologies to realize them are steadily being put to practical use; examples include IoT, 5G, cloud and AI. On the other hand, governments and other public organizations are also working on the development and distribution of 3D model data in order to realize these goals. For example, in Japan, 3D data for each city is being released under Project PLATEAU. In New York, for example, they are releasing 3D data on Manhattan. However, common issues can be mentioned. One common challenge is that while the data and related technologies are at a practical stage, the software to utilize them must be developed by each individual organization. The first issue is parallel development of the same programs, where the development results are fragmented and inefficient. Secondly, only engineers can handle the data, so it is not available to all. Lastly, the system is forced to be dependent on a particular vendor. Re:Earth tries to solve these problems in the following ways: sharing of development results, since it's OSS; projects that can be completed by UI-based operations, so it does not require coding; and making it an open product by releasing it as OSS. So far, I have explained the development background of Re:Earth and the problems it solves. Next, I would like to go a little deeper into some of the unique features of Re:Earth. Here are five important features of Re:Earth. No coding is required. 3D models can be imported and used. You can bring your own data to Re:Earth and easily import it onto the Earth. It allows you to visualize data with more kinds of expression, and plugins allow you to develop your own features and share them with others. The first feature is this: you don't need to code anymore. You can do everything through the GUI.
So as you can see, okay, so I zoom into Japan. And you can add data by dragging and dropping. Yeah, and a new marker has been added on the Earth. And with InfoBox, which tells the more information about this marker, I can create text block and I can type something. And also I can add an image block here. So add an image block and select the image I use. Okay, in this case, I chose Rears logo. Okay, as you can see, all of production creation can be done with GUI and also it's shareable. Okay, next feature is 3D models. It is also possible to display 3D models on the Earth. Currently, GRTF format data is supported. It also supports 3D tiles format data, which can be used for simulations and analysis by displaying 3D cities. Okay, so in this example, I add a 3D model and choose which model I use. And it also supports animation like this. Okay, so the third one is data integration. You can import CSV file and geojson file, KML file and CSV file. If you import CSV file, if it has a longitude and longitude column, Rears automatically detects that value and plots data on the Earth. And of course, the polygons also support it. Okay, and fourth feature is expressable ways of expression. Okay, so you can change tile map easily like night, Earth or other tile map. Also, you can change the color theme of widget. And widget can be aligned on widget align system like that. So this widget can be aligned freely on widget align system. Okay, and the fifth feature is plugin feature. Any developer is able to extend functionality by developing their own plugins. In fact, most of the current features have been developed as official plugins. We can develop a widget that is displayed on the screen and content block on info box as a plugin. And the feature, we plan to develop a plugin feature that will allow you to incorporate leaves on the Earth and your own eliximistic operations. So this widget and this content block is around external plugins in this example. The plugin feature allows us to support a widely volatile use case. In addition, for users, plugins developed by external developers can be used, thereby reducing development results. Here's a possible use case for Rears. Local governments use Rears to visualize other means, native activities and this many a disaster prevention information. Museums can also open online art galleries and museums by mapping the image and videos. And as numerous other users also expected to use Rears, including construction companies, logistics and the publishing industry that publishes newspaper and magazines. Next, let's take a look at some of the technical point of Rears. This is the overview of Rears and this is the diagram of our case. We use MongoDB as database and we use Golang, backend programming language and Frontend uses Rear and TypeScript and Web assembly and GraphQL is a query language, connects backend and frontend and we use all four authentication servers. Let's take a deep look. Frontend architecture adapts, partially adapts, our atomic design and that allows us to reuse components. For higher productivity, we use Storybook and GraphQL generator. Thanks to the GraphQL Storybook, we can easily develop components and also get a full picture of components. And GraphQL generator generates React, Hooks code and Type definition automatically from GraphQL schema. So we don't need to write lots of source code. And this is backend architecture. The architecture of backend is based on clean architecture, domain-driven design and the standard goal project to use. 
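Going back to the data-import feature demonstrated earlier: Re:Earth detects latitude and longitude columns in a CSV automatically, but if your data is shaped differently, a tiny script can reshape it into GeoJSON, which is also supported. The column and file names below are assumptions for illustration.

```python
# The demo above imports a CSV whose latitude/longitude columns are detected
# automatically. If your data is in another shape, a small script like this
# (column names are assumptions) can turn it into GeoJSON for import instead.
import csv
import json

features = []
with open("sensors.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                "coordinates": [float(row["longitude"]), float(row["latitude"])],
            },
            # everything else becomes attributes shown in the info box
            "properties": {k: v for k, v in row.items() if k not in ("longitude", "latitude")},
        })

with open("sensors.geojson", "w", encoding="utf-8") as out:
    json.dump({"type": "FeatureCollection", "features": features}, out,
              ensure_ascii=False, indent=2)
```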
These design philosophies allow the business logic to remain independent of the other layers. Also, the domain model is encapsulated; in this way, parts that are not related to business logic, such as the database, framework and authentication services, are designed to be safely and easily replaceable. The final one is the plugin system. The currently available plugin feature allows you to extend the UI-based functionality of Re:Earth. These UI-based plugins run safely in an iframe. However, the only way for React to interact with the code executed in the iframe is to use the postMessage method. Therefore, we are using WebAssembly to enable safe and fast execution of the APIs provided by Re:Earth. We are also planning to develop compute plugins in the future. This will make it possible to incorporate processing that Re:Earth does not provide as a built-in feature, such as data conversion and support for specific data formats. That concludes the technical point of view. Lastly, I'd like to talk about the future prospects of Re:Earth. The first one is the plugin editor. With the plugin feature, as I mentioned earlier, external developers can build their own features in Re:Earth. However, since it is difficult to test plugins currently, we will develop a feature that allows you to easily test plugins still under development. Furthermore, we are also considering outputting projects created with Re:Earth to media other than the web. For example, we hope to develop AR/VR support. We are also thinking about how to display it on other types of screens, like billboards and multi-screen installations. We are also considering making the map engine replaceable. Currently, we rely on Cesium, but in the future, you will be able to select the map engine to use, such as Mapbox GL or Leaflet, so that you will not depend on a specific piece of software. Other features include support for real-time data. This allows us to receive real-time data provided by external APIs and to display real-time data in published Re:Earth projects. In addition, by developing public APIs, data can be collected from any API into Re:Earth and used as a data platform. Finally, there is real-time collaborative editing. This will allow multiple people to work without data conflicts, just like Figma, Google Docs, and Miro. If we can do that, I assume it will be the first app that allows us to collaborate in real time on GIS. With these features and the help of an ambitious developer community, we aim to become the most advanced web GIS platform in the world. Lastly, we also run a cloud service hosting the OSS Re:Earth. We are currently working on the pricing plan, but until it is decided, you can use it for free. If you would like us to issue an account, please apply using this form, this URL, and also try this QR code. An account will be issued at a later date, and you will receive an authentication email. Also, here is the website and the public repository of Re:Earth. In the repository, you can find the up-to-date roadmap of the future development, the task management Kanban, and so on. Please have a look, and I hope you will be interested in Re:Earth and become a part of the community. Please send us a pull request or an issue, or develop a plugin. Additionally, don't forget to give us your star on the public repository. This is the end of the presentation. Thank you very much. Gracias. Sorry, Mara, are you muted? Sorry, here we go again. Thank you very much for your talk. Very interesting, too.
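Purely as an illustration of the point above that GraphQL connects the backend and the frontend, this is roughly what a query looks like on the wire. The endpoint path, authorization header and field names are hypothetical — the real schema lives in the repository — so treat this as a sketch, not as the actual Re:Earth API.

```python
# Illustrative only: a GraphQL query posted over HTTP. The endpoint path, the
# bearer token and the field names are hypothetical -- consult the real schema
# in the repository for the actual queries.
import requests

QUERY = """
query ProjectsForTeam($teamId: ID!) {
  projects(teamId: $teamId) {        # hypothetical field names
    nodes { id name publishmentStatus }
  }
}
"""

resp = requests.post(
    "https://example.com/api/graphql",                    # hypothetical endpoint
    json={"query": QUERY, "variables": {"teamId": "team-123"}},
    headers={"Authorization": "Bearer <access-token>"},    # e.g. issued by the auth server
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```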
I was saying to you that we have one question, so I'm going to read it to you. Here it is. You said that technologies for developing digital cities exist, but software that interets their capabilities need to be developed. But what about the concepts needed to guide aims and developments? What are those conceptual frameworks that have to cover with such a kind of software development? I think that was really nice. But what I wanted to say is if we can develop one product together, we can reduce it a little bit more efficient and we can save each development result. And we can develop the best app in the world. So that is our idea. I didn't mean it. You can hear me now. Yes. Okay. We have one more question. Yes. Thanks for your answer. Let me see. The next one is how will the project be maintained and supported financially going forward? Sorry. Mara, can you say the question again? Yes. Let me read it for you. Just one second. Okay. Here we go again. How will the project be maintained and supported financially going forward? Okay. Since the product is OSS, we can't make any money from our software. But we also run our cloud service. And now we are planning a new subscription program. So we open our account for free. In the future, we will start a new subscription program as cloud service. So as GitLab or other like, yeah, as GitLab does, we will have a subscription plan for cloud service. And our core maintainer team will get like ARM fund to run our project. But software is public. So we hope our product can be used by everybody. And also, when like some company, we run cloud service, but some company would like to deploy their years by themselves. So we will support them to deploy their years on their server. Or sometime we get ordered to develop their plugin. If they are special, like professional engineers, they can develop a plugin easily. But some are not. So to develop additional their plugin, you will get ordered. That is the way how we run our team. Okay. Thank you very much. We are run out of time. So we thank you for your presentation and for your talks. Again, you can follow the questions that people might have. We have a nice break. So everyone, maybe it's going to ask you the other questions. Thank you. Thank you very much for your presentation. Thank you very much, Gracha. Thank you very much to everyone. So we are going to have a break. And then we're going to continue with the post. Thank you to everyone. Thank you. Thank you again. Thank you.
This year we released Re:Earth, our no-code web GIS tool that uses Cesium under the hood, to the OSS community. Re:Earth's aim is not to rewrite the wheel, but rather to harness the power of the 3D globe and allow absolutely anyone to visualize and share their geospatial data. Users are able to import preexisting data and build projects off of that, or start from scratch and then easily publish the project or export the data in a variety of supported formats. All without the need of an engineering team. The Re:Earth team is currently recruiting OSS committers and plug-in developers to help expand Re:Earth's potential and build a digital earth community of users and developers. The Re:Earth project grew from the idea of, "What would be possible if anyone, anywhere could access the digital Earth's potential?". To make this a reality, we knew Re:Earth needed to be no-code, but more than that we needed to make sure hardware or OS requirements wouldn't get in the way either, so that is why it is a fully web-based application. We also knew projects as well as data would need to be shareable so we have both project publishing and data exporting. Publishing a project is easy and gives users the chance to opt-in or out of SEO, change their URL and setup publishing to their own domain. Exporting data is easy and supports many of the most common file formats seen in GIS. Our hope has always been to open Re:Earth up to the OSS community and build a global community around it and what it stands for. The first step to making this happen was Resium, a popular OSS package that allows developers to use Cesium with React. With Resium we have been able to write Re:Earth's codebase with React and Typescript on the front end. As the main backend language we chose Go. By using these modern languages we have kept Re:Earth highly maintainable and scalable and hope that other developers will find contributing to it easy. Beyond the code, we have already begun our global community with the core Re:Earth team coming from around the globe. We especially want to help lift talented people up from areas of the world with less opportunities locally, and that is why we have been focusing on finding talent in Syria. At the very least, we hope this project can bring some hope to the people around the world facing difficult times and let them know that there are opportunities out there.
10.5446/57195 (DOI)
I get the knack for our interface so that we can just speak that we're going to be meeting in a moment. And that is Joseph Sitchar. And Joseph is a geographer and master in environment analysis and landscape management. He's specialized in GIS and web map development. And he's currently at the GIS Center for the University of Yirona. He's been a visiting professor in several seminars and short courses in GIS and is professor of remote sensing and web map development and various subjects at the UniGIS Yirona master's course. And he's going to be talking to us today about eduSAT and remote sensing as a learning material. So I thought I'm going to hand over to you, Joseph. OK, thank you. I will share my screen. OK, I think you should see my screen. And also I will put the full screen. OK, so I start. You can hear me. I suppose, yeah. So start. OK, my name is Joseph from SICTE, which is the Geographical Remote Sensing Service from the University of Yirona. And I will present eduSAT, which is a learning platform with open educational materials about remote sensing. This platform and the contents of eduSAT have been developed during the Navas Mons, my own team. And thanks to support of the geography department and the environment Institute of the University of Yirona. And also thanks to a coordinated program. And what motivated eduSAT? So why we developed the project? Well, basically, each year at SICTE, we receive multiple courses, demands from secondary school teachers and also from different studies at our university to teach about geographic information and air observation, especially in the context of climate change. So in that sense, each year we organize and develop multiple training materials, workshops, and presentations adapted to these course demands. So with this experience and also with the idea to promote the use of remote sensing images and to release the materials, this decided us to put order to all these work done and open it to the public. So eduSAT bonds with this idea with the objective to share our experience and materials about remote sensing trainings at secondary schools, but also to facilitate the self-learning to teachers and make them autonomous in order to prepare their own lessons without our direct help. And also because of the scientific evidence on climate change and the degradation of natural systems, it's a common demand in our case to focus the workshops on that topic, on the climate change. And it's also obvious that the scientific community, but especially between young people, has a strongly matched the commitment respecting the natural environment. So through various platforms, entities, slogans, so students in that sense from over the world and belonging to different disciplines so are coming together to defend their right to have a planet with environmental health. And in that sense, we think that that's very important to provide young people with empirical and quantitative learning tools to strengthen their ecology message. So in that context, remote sensing is a very powerful technological and transdisciplinary resource that provides young people with scientific arguments to censure the current relationship between human societies and nature. And observing the earth from the space and using the different available sensors, this offers an objective perspective about how humans influence over the climate change, but also to clearly visualize its sever consequence. 
So for students and especially for secondary value students, the analysis of these images is very motivating. Even more when they can discover, visualize, and analyze recent phenomena in which they may have been directly involved. But all these workshops and training materials can be prepared thanks to the ability of open satellite imagery from around the world. Only thanks to the ability of open satellite imagery, like Copernicus imagery or the Landsat satellite, it's possible. With all these images, it could be very, very difficult to make that kind of workshops. And provide the students with this knowledge, because it's very easy to access to that open images and to prepare the workshops. So it's a thing that we have to take in consideration. And OK, this theoretical knowledge, so to work with remote sensing images and to perform some basic analysis, students need to learn some concepts. It's basic about this discipline. So they need to know aspects like the different types of satellites, concepts about resolution, understand the basics about electromagnetic radiation, how this is captured by sensors on board satellites. Also, what is a band, how it can be combined to create an RGB composition, how to calculate an index. OK, all these are specific basic concepts that young students should know before they start, any kind of activity related to remote sensing. So the problem sometimes is that these theoretical concepts are complex and sometimes are not easy accessible for young people, which are not specialized in this discipline. So in that sense, one of the main objectives of EdoSAT is to present remote sensing to a known specialized audience and offer a user-friendly tool for the analysis of the answer for changes and also a tool for the dissemination of results. So most of the efforts dedicated to EdoSAT have gone in that direction. And as you can see in the website, there are specific sections dedicated to these proposals. I can show you what EdoSAT is. This is the website platform. And you'll find different tools. We have that website in multiple languages, I will put in English. And one of the main tabs, or the main contents of the website, is precisely this tab, the remote sensing, where this is a specific section dedicated to expose, in a simple and clear way, the principles of remote sensing, and facilitating these running contents to students, trying not to bore them with unnecessary aspects. It's probably usually that if young students can learn and internalize these basic concepts, they will be able to carry out the practical work with no problems and get the most of the activity. And all this content is also useful for secondary teachers, not only for students. So teachers who have never been familiarized with remote sensing discipline, these materials can help them to prepare their own lessons and help to teach these principles of remote sensing. And as you can see, these are very basic contents. This is explained in a plain language. And there are full of images that we think that's very useful for students in order to learn these basic concepts. With not, obviously, in a simple way, but that we will be very useful for them and for the practical work. OK? And derived from the objective to present remote sensing to a non-specialized audience, the objective of EDUSAT are focused, first of all, to design a teaching resource for educators in order to incorporate these competence in the curricula. 
Also to create an educational resource for young students and, finally, to develop a transdisciplinary resource for young researchers coming from various disciplines, in order to analyze data and disseminate the results. This is the URL of Edusat. This is the page that I shared before. OK? And anyone who accesses Edusat will discover, basically, the teaching materials, designed to be in two blocks. First of all, the ones that I exposed: the theoretical explanation about remote sensing, but in a very simple manner, with images, graphics, and all that. And also a set of practical exercises to carry out with remote sensing images, with the objective, basically, to identify natural or anthropic processes, such as forest fires, floods, droughts, deforestation, glacier recession, air pollution, volcanic eruptions, and many others. So these are the two basic contents that you'll find on the website, on the platform. And Edusat also includes an example of a teaching activity that teachers or educators can take into consideration to prepare their lessons. The teaching activity is oriented to identify the effects of global environmental change using remote sensing images and is adaptable to many contexts and student groups. OK? So first of all, for example, the activity that's available on the website is very adaptable, from one hour to 15 hours. We prepared it that way because we receive demands of very different typologies, and we are used to creating workshops of many different durations. So in that case, we tried to package all this experience and prepare a very flexible activity. So, first of all, you can have an activity from one to 15 hours. OK? Obviously, the workshops can be organized in a single session or in multiple sessions. Workshops from one to four hours are prepared to be taught in a single session, while workshops of more than four hours are better done in multiple sessions. In the case of a multiple-session workshop, we have defined on the platform an example of what could be taught in each session. The first one could be dedicated to a detailed explanation of satellites, sensors, and their potential. The second session could be dedicated to showing multiple study cases about natural and anthropic events, with the objective of showing the possibilities of remote sensing. Then, in the third session, students could form groups to analyze a natural or anthropic process of their choice. And the last session could be dedicated to presenting the results to the rest of their classmates. OK, this is our proposal, obviously, and anyone can adapt it to their own situation. But after some time teaching different groups of these characteristics, we found out that it's a very, very practical way to explain or to teach remote sensing at secondary school. OK? And one of the main works done during the preparation of the platform has been the documentation of several study cases to graphically and visually show the consequences of different natural and anthropic events. For each case study, we have documented the causes and consequences of the studied phenomena and how Sentinel sensors can detect them: the satellite types, indices, band combinations, et cetera. OK? And these are some of the examples that we have described. In total, nowadays, there are nine case studies, which are accessible through the map on the main page.
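To give one concrete example of the indices documented in the study cases, here is a minimal NDVI sketch with rasterio and numpy. It assumes you have already downloaded the Sentinel-2 red (B04) and near-infrared (B08) bands as GeoTIFFs on the same grid; the file names are placeholders.

```python
# One of the indices the study cases document is the NDVI. Minimal sketch with
# rasterio/numpy, assuming the Sentinel-2 red (B04) and NIR (B08) bands have
# already been downloaded as GeoTIFFs sharing the same grid. File names are
# placeholders.
import numpy as np
import rasterio

with rasterio.open("T31TDG_B04_10m.tif") as red_src, rasterio.open("T31TDG_B08_10m.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")
    profile = red_src.profile

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero
ndvi = np.where(nir + red == 0, 0, (nir - red) / (nir + red))

profile.update(dtype="float32", count=1)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi.astype("float32"), 1)

print("NDVI range:", float(ndvi.min()), "to", float(ndvi.max()))
```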
If you go to the main page of Edusat, you'll find on the first page a map with the different use cases. And you can also filter by topic. For example, if you search by fires, you'll find one example, which is a fire that took place in Catalonia a few years ago. And if you click on the study case, you can access the page of this specific study case. There you can find, first of all, and this is common for any study case, the dates when it took place, the satellite that we propose to analyze that phenomenon, and the category where it belongs. Also, you can find a description of the event, the dates again, and different kinds of important information which need to be taken into consideration to analyze the phenomenon. And then you can find a map which places the event in the world. Then you find information about how you, or how students, can analyze that phenomenon with remote sensing images. And in that case, for example, you can see how to visualize the pre and the post of the fire using Sentinel-2 images, using a natural color composite. You can see the difference, obviously, but one thing that students should identify in that case is that, using a natural color composite, it's not very easy to identify the area burned by the fire. But then you can explain, during that activity, that using a false color composite you can identify the burned area in a much better way. You can also compare both images, and you can also identify where it started; in that case, the fire started in a farm in a small village here in Catalonia. And then the second image from Sentinel is from a few days later, and you can see all the burned area. So this is a case that students can analyze to identify the effects, in that case, of a fire. Obviously, there are many other cases, for example volcanic eruptions. We have identified a case, the Kilauea volcano in Hawaii. And the same: you have the page for this study case with the dates, the satellite that we could use to analyze that phenomenon, the category where it belongs, a small description, and the location of the event. And also, you can find some images. In that case, you can see different band combinations, for example true natural color, et cetera, to identify the volcano, the lava in that case. And also, using a radar image, how the area changed its morphology. This example, for example, could now be applied to the volcano here in La Palma, in Spain. So it could be an example of how, from the secondary school, remote sensing could be taught, analyzing a phenomenon that is current, present, and with a lot of activity. So maybe it's a new case study to introduce to the Edusat web page. So we have many different use cases that any teacher and all the students can consult to analyze the global change phenomenon using remote sensing images. OK, and all these materials, the whole website, are open. The presentations are there too. So anyone can access this data, this content, and get the materials. OK, so it's available for free, for anyone, in three different languages, as I commented: Spanish, Catalan, and English for now. And just to finish, some conclusions. First, far from being a discipline reserved for specialists and reduced to a certain group of professionals, remote sensing is also approachable at a high-school education level. This could be a first conclusion.
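A quantitative complement to the visual pre/post comparison of the fire case is the difference of the Normalized Burn Ratio (dNBR). The band choice (NIR = B08, SWIR = B12), the file names, the idea that both dates are resampled to a common 20 m grid, and the 0.27 severity threshold are all assumptions for illustration.

```python
# Sketch of a burned-area estimate via dNBR, complementing the visual comparison
# above. Assumptions: NIR = Sentinel-2 B08, SWIR = B12, pre/post scenes already
# resampled to the same 20 m grid, and a rough 0.27 "moderate severity" threshold.
import numpy as np
import rasterio

def nbr(nir_path: str, swir_path: str) -> np.ndarray:
    with rasterio.open(nir_path) as n, rasterio.open(swir_path) as s:
        nir = n.read(1).astype("float32")
        swir = s.read(1).astype("float32")
    return np.where(nir + swir == 0, 0, (nir - swir) / (nir + swir))

nbr_pre = nbr("pre_B08.tif", "pre_B12.tif")
nbr_post = nbr("post_B08.tif", "post_B12.tif")
dnbr = nbr_pre - nbr_post            # higher values = more severe burn

burned = dnbr > 0.27                 # assumed severity threshold
pixel_area_ha = (20 * 20) / 10_000   # assuming 20 m pixels
print(f"Estimated burned area: {burned.sum() * pixel_area_ha:.0f} ha")
```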
Then, thanks to programs like Copernicus, and to EduSat and the tools offered, like, for example, EO Browser and others, it is possible to practice with these images and run this kind of workshop. So for teachers it is easy to access the images and prepare their own lessons. Also, the didactic and educational platform presents an objective and analytical vision of the territorial processes resulting from global environmental change. And thanks to remote sensing, the ecological discourse emerging from young people is given an empirical and experimental content: they have empirical data to analyze these phenomena and to draw their own conclusions. So that's all, thank you for your attention. EduSat, as I said, is open, and we are open to suggestions. Obviously it is for teachers to prepare their lessons, but anyone can suggest anything to us. We will also try to update the study cases with new phenomena. For example, this one of the volcano at La Palma could be a candidate to be on the website, and many others, because we think it is important to keep the attention of the students: they want current topics, and this is a very good way to keep their attention. And this is my contact. So thank you very much. Thank you very much, Josep, this is really interesting. We have got a question for you from the audience, about what the future plans are for the platform and whether you have plans to translate it into even more languages. About the languages, we don't have any demand yet, but it could be a possibility. And what we will also try is to keep it updated with new cases; we will try to, for example, remove the old ones and add new cases, because we think that is very important to keep the attention of the students, who need to be in contact with current events, and that keeps their attention. It is open to anyone, so we will try to get ideas, improve them and apply them to the website. Fantastic, thank you. And I have got a question of my own for you, if I may. I am curious if you have any anecdotes about using this yourself in a classroom setting. Yes, we have applied it in classrooms. Have you had some nice student reactions? Can you share with us maybe a student reaction? Yes, in fact EduSat was born from our experience and from many years teaching remote sensing to the kinds of groups we receive demands from. We are at the university, but we receive demands from secondary schools, and the experience is really, really nice. Maybe in the official curriculum the students are not used to working with these images, and this is a very practical way to analyze different phenomena, in most cases related to global change. They really discover a new world, and their reaction is very positive. Fantastic. Thank you, Josep, you are also getting a thumbs up from the audience. Thank you for sharing this great work.
The intensification in recent decades of scientific evidence on climate change and on the degradation of natural systems has led to increasing public awareness about the environment. In recent times, this commitment to respecting the natural environment has emerged strongly among young people. Through various platforms, entities, and slogans, students from all over the world, and belonging to different disciplines, are coming together to defend their right to have a planet that enjoys good environmental health. In this talk we’ll present the platform Edusat, which aims to provide young people with empirical and quantitative learning tools to strengthen their ecology message. By means of remote sensing and through the data generated by the Copernicus program, an educational resource that analyzes the consequences of global environmental change is presented. In this context, remote sensing is a technological and transdisciplinary resource that provides young people with scientific arguments to censure the current relationship between human societies and nature.
10.5446/57197 (DOI)
I am happy to invite Carlos Palma. He's working with Guadaltel, I hope I pronounced it correctly, a company which collaborates with the National Center of Geographic Information of Spain. He's a developer and analyst, and today he's going to share about some enriched mobile apps for trails. So Carlos, you have the floor. Okay, thank you very much, Fabriana. Let me share the screen for a second. Okay. Alright. Good. So hi everyone, my name is Carlos, I'm a software developer and analyst for the company Guadaltel. We have a lot of background experience: we have been working for 30 years, mainly in the areas of e-government and geographical information systems, which is my main area of work. We have multidisciplinary teams developing products and solutions for our clients, normally from the public administration, ranging from European to local institutions, and we also collaborate with other countries, mainly in South America. Indeed, we have an office in Santiago de Chile, so we have an international presence. And well, the product presented here is precisely the result of a collaboration between Guadaltel and the Spanish National Center of Geographic Information to reuse and redistribute spatial information in an easy manner, something everyone can access and get value from. With this intent, the mobile app Mapas de España Básicos was born. The main purpose of this mobile app is to provide a mapping base for field tracking, and it is intended for non-expert users, as I said. The application offers different cartographic representations of the whole Spanish territory and displays them in a very streamlined fashion, adding the most common tools that a user will need during a field trip, a road trip, or getting around in a city: measurements, scales, points of interest, GPS location, all the things that you would expect from a navigation app. So before going more into the functional details, let's talk a little bit about the technological aspects of the backend of this app. We based our development on different visualization services for raster information: MBTiles services for quicker download times and for storing information on the device itself, and also WMS and WMTS services for the various background layers of the visualization. We also integrate weather forecast information coming from AEMET, the Spanish meteorological agency, which offers an API that can be consumed and integrated with other services. All these sources are available for the user to select, so the user can configure at any time how the information is going to be displayed. The integration is based on standard services, which makes it very flexible and extensible to accommodate a greater array of data sources and, if needed in the future, to add more sources, more layers, different services. So this is about the backend; then, on the application side, the frontend. The architecture here is a composite of different components. The base component for building the application is the Ionic toolkit, a toolkit for creating native apps for the main mobile operating systems from web technologies, so the JavaScript code base runs natively on each platform. On top of that, the framework used to develop in this case was AngularJS, because that way we can ensure the modularity of the code and the extension of components if the need arises.
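Because the backend relies on standard OGC view services, any client library can consume the same layers the app uses. As a rough illustration only (the app itself is built with Ionic and the API described next, not with Python), here is a minimal sketch using OWSLib; the service URL, layer identifier and tile parameters are assumptions for illustration and should be checked against the IGN/CNIG service catalogue.

```python
# Minimal sketch: consuming one of the standard WMTS services the app relies on.
# The endpoint and layer id below are assumptions; verify them in the IGN catalogue.
from owslib.wmts import WebMapTileService

WMTS_URL = "https://www.ign.es/wmts/ign-base"  # hypothetical IGN base-map endpoint

wmts = WebMapTileService(WMTS_URL)

# List the layers advertised by the service.
for name, layer in wmts.contents.items():
    print(name, "-", layer.title)

# Fetch a single tile (layer, matrix set, row and column are placeholders).
tile = wmts.gettile(
    layer="IGNBaseTodo",
    tilematrixset="GoogleMapsCompatible",
    tilematrix="6",
    row=24,
    column=31,
    format="image/png",
)
with open("tile.png", "wb") as f:
    f.write(tile.read())
```

The app wraps equivalent requests behind its interface and combines them with MBTiles caches for offline use.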
And then for the map visualization we are using the API CNIG, the open source tool developed by the National Center of Geographic Information to generate map visualizations consuming very different sources. You can consume WMC or KML files, you can consume view services, you can consume download services in different formats, GML, GeoJSON, and then mash up all the layers, selecting which ones you want to use from a service if you are only interested in some of them, and make an interactive map with the addition of controls and customized plugins that run on top of the visualization. So you can extend the main functionalities, for measurements, for drawing a polygon and measuring its area, things like that. Well, that is about the technological aspects. Also to point out that the development was done with a user-first mindset, making accessibility a main objective: multi-language capabilities, high contrast for better visualization of the application, voice cues in the application. So that's a little bit about the architecture. Now that we have covered that, let's go a little deeper into the features and functionalities of the application. The user can select, for the online background layers, the IGN base cartographic layer, which is the one provided by the National Geographic Institute, a street map layer with road names and addresses, and also satellite imagery coming from PNOA. All these layers are provided by official sources, mainly from the IGN. You can also use the OSM layer to map areas inside and outside of Spain: inside Spain you get a style more like a street map, and outside Spain you get the usual OpenStreetMap style. But the main feature of this application is the capability to download areas of interest at the regional level to have offline access to these maps. You can also get maps of the Spanish national parks for your field trips. So if you are going into a region where you are not sure you will get network connectivity, or you don't want to activate your 5G on your mobile phone, you can download the region you are interested in beforehand and then just load the layers from there. Then, talking more about the tracking functionality, the user has the capability to define waypoints and create a route to follow. These waypoints are stored on the device itself, so they can be retrieved in flight mode; you don't need to upload anything. And you can have a library of all the tracks that you have created. The application is also integrated with the collaborative track service Wikiloc, I don't know if anyone here is familiar with this service, anyone who goes hiking probably is, so you can load tracks shared by other users into the application. You also have a set of predefined tracks built into the application: the Camino de Santiago, the tracks of the national parks, the green routes (old railway lines that have been out of use for a long time and have been converted into trails), and also the Camino del Cid for a historical perspective of the routes. Then, once you have your track, you can activate the GPS on your device and start following a loaded track, or even record a new track as you go: if you are just hiking in the field, you can simply start a new recording and go on your route as you want.
And once you have finished, you can stop the recording, store it on your device, or even export it and share it. While you are in navigation mode you have the usual features of this kind of application: waypoint alerts for a loaded track, so any time you are near a point of interest you get an alert, and visual and voice alerts for distances and directions, where to go, where to turn, which way you need to follow. For the waypoints, you can generate custom points of interest to add to your tracks, setting your own legend, your own symbology, your own icons. You can create points of interest for animal sightings, resting areas, restaurants, gas stations, or places to eat while you are hiking. These waypoints are integrated into the navigation mode as you get near them, and they are also accessible completely offline on your device; again, you don't need to upload anything. Other useful features of the application are the weather forecast, to be safe on your way, and the topographic profile of the area for planning your field trip, so you have more insight into how the route is going to be, plus statistics specific to your route: the distance that you have covered or still need to cover, the time of your route. You can even track your own performance on the track. So this is the wrap-up. Again, please, if you are considering going hiking or making a road trip in Spain, or visiting a city, and you want a source of information for that, give it a try. To summarize, it is a simple and useful way to have official information, which is important because all the information comes from institutional sources, and to have it at hand at no cost, because the application and all the information in it are completely free. For anyone interested, you can already download the application from the Play Store and the App Store. That's all, thank you very much for your attention. I will answer any questions that you may have. Thank you. Thank you very much, Carlos, for the presentation. Let me first check if there are some questions. I think there are no questions at this point, but I would have one. I couldn't help noticing that it is in Spanish; is there any plan to have an English translation maybe? Yeah, sure. Maybe the screenshots are a little misleading, but we have multi-language capability in the application. Oh yeah, okay, and that's perfect, because it seems like something that I would definitely like to give a try when I go hiking, because you also mentioned that it is possible to use it outside Spain as well if you just have the OpenStreetMap layer available. So okay, that's really helpful, I will give it a try. Yes, it is very easy to use. The functionality to have everything offline is great, because sometimes you don't have any coverage at all, so it is very useful. Especially in the mountains. We have a question: can we export tracks and waypoints, and what formats are supported for imported tracks? You can import tracks in KML format; for example, the ones that you download from Wikiloc are in KML format. You can also export them, so you can have your library there. Thank you. Let me check. So I see no more questions. We have a little bit more time at our disposal, so if there is nothing you would like to add, we can have a very short break until, my time, 9 p.m., when the next presentation starts.
So nothing else, just that this application and this kind of collaboration with official organizations are maybe a way to have more democratic access to information, so that it is not only a monopoly of Google and those kinds of companies, but an alternative that supports your own institutions and gives free access to information. And we have a new question: could you please provide the GitHub URL? Right now I don't have it at hand, but we have the GitHub URL for the API CNIG; you can go there and you can find everything. I don't have the link for the app at hand right now. Okay. So if there are no more questions, we'll just wait for a few more minutes until it's time for our next presentation. So I'll just check one more time. Okay, no more questions. So Carlos, thank you very much, keep in touch and of course enjoy FOSS4G. Thank you very much, Codrina. Thank you everyone for attending.
The use of mobile devices for outdoor activities in nature has increased significantly over time. These mobile applications are of great use for this type of activities and, in addition, they give great added value to detailed spatial information in natural environments, far from urban centres, in comparison to commercial applications, which are not oriented towards providing this type of services. Thus, two use cases of ready-to-use applications are presented for general use (Basic Maps of Spain) and for a particular use (Camino de Santiago). These applications are designed to be very easy to use, without having to make any configuration to connect to the official map services of Spain from CNIG (National Centre for Geographic Information) and its download centre to obtain maps and routes. With these applications you can follow tailor-made natural routes throughout Spain, stages of the Camino de Santiago or use your own tracks, plan excursions using maps, navigation and guided tours..., all offline, without the need for an Internet connection after downloading data. All the maps and routes used are free and allow you: * GPS location, even without mobile coverage * Offline map mode, saved in advance * GPS tracks on the maps of the National Geographic Institute * Save and view tracks in gpx, kml and kmz format * Positioning display with coordinates, course, speed, altitude * Calculation of distances It should be noted that the development has followed a multi-platform approach, where the implementation has been carried out with HTML and the specific mobile applications for Android or iPhone have been generated from these developments. These use cases show the community an attractive way to implement mobile applications using OGC standards and Open Source libraries, from which to adapt and enrich the contents to be consumed.
10.5446/57198 (DOI)
Okay, I think it's a good time to be presenting the next talk. So hello everyone, welcome to FOSS4G 2021. This is Wednesday and this is the Aconcagua room. Next, Julia Dark will be presenting Forecasting the Future of Weather Data with GOES-R and TileDB. I'm sorry if I didn't get the GOES-R right. It's okay. So, is that yours? Will you share it to add to the stream? Okay. Alright, so thank you everyone for coming to my talk today. There are a couple of things that I hope you get out of this talk. If nothing else, I want you to be aware of the GOES-R satellite series and its publicly available data; you can get it off of Google Cloud, off of AWS, and it's a lot of fun to play around with. I also hope you get a basic understanding of what TileDB Embedded is and a basic understanding of how to use NetCDF-like data in TileDB; I'll get into a little bit more later about what I mean by NetCDF-like data. I want to introduce you to the TileDB-CF-Py library and its functionality, and I'm going to give you a concrete example of storing NetCDF-like data in TileDB using the GOES-R data. Alright, so first off, what is TileDB? TileDB Embedded is an open source universal storage engine written in C++. It stores data in arrays, with support for both dense and sparse arrays, and it implements very fast array slicing across dimensions. So just to get into this a little bit: with a dense array you have dimensions that you define your array over. This could be one to many dimensions, which you contain inside a domain. You use that domain to define an array, and the array can contain multiple attributes, so you could have integer values or character values or floating point values that you store in attributes inside this array, and it also supports metadata. On the sparse side of things it's very similar, the difference being that instead of materializing the entire array, you store just cells, and it will also store the coordinate values for those cells. So for example, in this case here, you store the values of the attributes at that cell but also its coordinates, the two and the four. And again you have the array metadata, multiple possible attribute types, and in the sparse case you also have the potential for multiplicity. This can be turned on or off, because sometimes you want a cell to be able to store multiple different values, and sometimes you want each cell to have a unique value. So, at a glance, why TileDB? There are a couple of things here I'd like to highlight. It's built in C++, but we have APIs for a lot of different languages, all open source. Some that might be of particular interest to this community: we have a Python API and an R API. You have R-trees for sparse arrays, which give you very fast sparse lookup, and a lot of the time in geospatial databases you can have good sparse support or good dense support; it's rare that you can store both of those different kinds of data together in a single place, and that's one of the really powerful things about TileDB. And then we have immutable writes that are lock-free and parallel and that allow something called time traveling. I'm going to dig into this a little bit, because I feel like it gives you a good understanding of what's going on underneath the hood. So when you write to TileDB, what it's going to do is create a fragment.
So at each write, you're writing at one timestamp, it creates a fragment and stores that timestamp, and then when you write again at another timestamp, it will store this data to a new fragment at a new timestamp. Then, when you go to read the data, it will look at which fragments have the most recent data and just bring you back that most recent slice. But you can actually query specific ranges of timestamps. So in this case, say I just wanted to read from that first fragment, I can do that. If I read across all the fragments that I have written here, it returns the most recent values, or I can look at just that last timestamp alone, and it will return only the values from that timestamp, with fill values for the unwritten cells. It's pretty similar in the sparse case; the main difference is when you do allow duplicates. If you don't allow duplicates, it's the same sort of idea: you have the data from that first timestamp written in a single fragment that you can query; if you read everything, the values from the second timestamp will fill that empty cell and overwrite the value of four with 40; or you can get just that last fragment. In the case where you do allow duplicates, the only difference is that instead of overwriting the four with 40, both of those are valid values, so you get both of them back. Alright. So let's dig in a little bit about NetCDF and TileDB, and what it means to use a NetCDF data model in TileDB. This right here is the NetCDF data model in a kind of UML-ish format; this is from the Unidata website. The way NetCDF works is that it's a file format, and in each file you can have multiple groups. In a group you define dimensions, which are a name and a length or size. You can also have metadata, which they call attributes, and you have variables, which are multi-dimensional arrays. And so you can already see that this is going to fit well with TileDB, because both of these are fundamentally looking at arrays. Here on the TileDB side we store things a little bit differently, but you still have these multi-dimensional arrays in your attributes, you have your arrays that are defined on dimensions, and you have the simple key-value metadata. So when you're mapping the NetCDF data model to TileDB, you map your groups to a group; you map the attributes, the metadata on the NetCDF side, to metadata in TileDB (it's a little unfortunate that we use the word attribute for different things); the dimensions go to dimensions, which is pretty straightforward; and the variables go to attributes. I should note, too, that if you want to, you can move away from the NetCDF data model. If you're mapping NetCDF into TileDB and you had sparse data that you were compressing somehow in the NetCDF data model, you can, for example, map your variables straight to a dimension and represent that sparse data more directly, and I'll get into that a little bit more later. Another thing I want to mention here is that NetCDF is more of a file format; they have a data model that goes along with that format, but the file is inherently part of the data model, whereas TileDB is a storage engine, so it handles the file management for you, and the file isn't an inherent part of how you think about the data, which is a lot easier when you're handling large data on the cloud, where file management is actually very painful.
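To make the fragment and time-travel behaviour described above concrete, here is a minimal sketch using the open-source TileDB-Py API; the array name, domain and timestamp values are made up for illustration, and exact defaults (for example the fill value) may vary by version.

```python
# Minimal sketch of timestamped writes and "time travel" reads in TileDB-Py.
import numpy as np
import tiledb

uri = "example_dense_array"  # hypothetical local array URI

# A small 1-D dense array with a single float attribute.
dom = tiledb.Domain(tiledb.Dim(name="x", domain=(0, 3), tile=4, dtype=np.uint64))
schema = tiledb.ArraySchema(
    domain=dom,
    sparse=False,
    attrs=[tiledb.Attr(name="a", dtype=np.float64)],
)
tiledb.Array.create(uri, schema)

# Each write creates an immutable fragment tagged with a timestamp.
with tiledb.open(uri, mode="w", timestamp=1) as A:
    A[0:2] = np.array([1.0, 2.0])
with tiledb.open(uri, mode="w", timestamp=2) as A:
    A[2:4] = np.array([3.0, 4.0])

# Reading without a timestamp returns the most recent view of every cell.
with tiledb.open(uri, mode="r") as A:
    print(A[:]["a"])

# "Time travel": read only what was written up to timestamp 1;
# cells written later come back as fill values.
with tiledb.open(uri, mode="r", timestamp=1) as A:
    print(A[:]["a"])
```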
You don't necessarily want to be doing that manually. Alright, so there are some special things in NetCDF that I want to touch on before we move on to the next bit. One is coordinates. There's a convention in NetCDF files that if you give a dimension and a variable the same name, you use that to signify that, for variables defined on this dimension, you can map the values directly to it. For example, maybe you have a time variable where the values are times in seconds from some fixed time, and you have your indices for those values. When you map that to TileDB, you can keep it directly in the NetCDF-like way, where you map that NetCDF dimension directly to a TileDB dimension and you map that data to a TileDB attribute. This is the most straightforward thing to do, but you actually have other options. Another thing that you could do is take that NetCDF dimension and still map it to a TileDB dimension, but also add another sparse array to your TileDB group that maps the data to a dimension and the index to an attribute. This allows you to do that quick sparse lookup with the R-trees, which lets you query the other data that you have stored that is defined on that variable. The third option is to move completely away from the NetCDF data model, drop the index altogether, and go straight from the data to the dimension. Another thing you'll see in NetCDF is unlimited dimensions. This is something that we can also easily store in TileDB. Your first option is to just use a large domain for the dimension. When you're defining dimensions in TileDB, before you write anything to an array, you define the type of the dimension, the name of the dimension, and the domain that it's valid on, and you can make that domain as large as your data type allows. If you're storing with unsigned 64-bit integers, you can make it the entire possible range of unsigned 64-bit integers. If you do that in a dense array, make sure that you use a compression filter: you'll have a whole bunch of identical fill values, and they compress very nicely, so that's not an issue, but if you don't add some sort of compression you're going to have problems there. The other option is that sometimes you just want to take the current size of the data in the NetCDF file, and you can do that too. Maybe the data was being processed and they didn't know how large things would be when they first wrote it; now you know, and you can map that size directly to the domain. Alright, so next I want to move on to a library that we have to help with climate and forecast data, written in Python. It adds some functionality to help either with getting data from something like NetCDF into TileDB, or with accessing data that kept the NetCDF data model when it moved to TileDB. It has a couple of different things in it. One is additional support for accessing TileDB arrays from groups; NetCDF depends a lot on groups, and this can be very convenient just to make it a little bit more similar to a workflow you're used to. We also have an engine for ingesting NetCDF data into TileDB, using netCDF4 as the reader, and we have a TileDB backend engine for xarray. This engine only supports arrays for now, but we're working on group support. We do accept feature requests and PRs; this is on GitHub, you can go ahead and check it out directly, all open source, and I'm happy to answer your questions about it at any time.
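As a small illustration of the second coordinate-handling option mentioned above (an extra sparse "axis label" array that maps coordinate values to the integer index of the dense array), here is a hedged TileDB-Py sketch; the array name, domain, tile extent and example values are assumptions.

```python
# Sketch of an axis-label array: a sparse array keyed by the coordinate value
# (time as seconds since epoch) storing the matching index into the dense array.
import numpy as np
import tiledb

label_uri = "time_label_array"  # hypothetical URI

dom = tiledb.Domain(
    tiledb.Dim(name="time", domain=(0.0, 4e9), tile=86400.0, dtype=np.float64)
)
schema = tiledb.ArraySchema(
    domain=dom,
    sparse=True,
    attrs=[tiledb.Attr(name="index", dtype=np.uint64)],
)
tiledb.Array.create(label_uri, schema)

# Write the mapping: coordinate value -> index of the dense data array.
times = np.array([1.6e9, 1.6e9 + 600, 1.6e9 + 1200])  # e.g. scan timestamps
with tiledb.open(label_uri, mode="w") as A:
    A[times] = np.arange(len(times), dtype=np.uint64)

# Range query on the coordinate value; fast thanks to the sparse R-tree index.
with tiledb.open(label_uri, mode="r") as A:
    result = A.multi_index[1.6e9:1.6e9 + 700]
    print(result["index"], result["time"])
```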
So, about the actual ingestion step. When you convert NetCDF to TileDB using our converter engine, you follow a couple of steps. First, you define a conversion recipe; you can either do this manually or auto-generate it from a NetCDF group. Once you do the auto-generation, you still have the opportunity to go in and manually change things: maybe you want to change the names of variables as you map them to attributes, or change the names of arrays, or add additional compression filters. You can do that before you do the actual conversion. Next, you create the TileDB group or array schema, and then you copy your data from your NetCDF file or files into TileDB arrays. This can all be done at once, but I find that generally, when you know your data, it's useful to spend some time modifying things and take the opportunity, whenever you're converting from one storage system to another, to refine how you're storing things, think about whether you have the best possible representation, and clean things up a little bit. Then, sometimes when you're ingesting NetCDF data you want to do things a little bit more manually; I don't want people to feel like this is your only option to get your data into TileDB. Maybe you want to process and analyze your data first; you don't want to go straight from NetCDF to TileDB, you want to do some analysis, create some other product and put that into TileDB. That's always an option. Maybe there's a feature you want for your conversion that's not yet implemented in TileDB-CF-Py; we haven't really gotten too far into handling metadata yet, so maybe you need to do something special with your NetCDF metadata and you want to dig into that more manually. Maybe you want to use a programming language other than Python, because you're just not a Python programmer. You have some options here: you can create a bespoke converter in any language that supports both TileDB and NetCDF; C++, Java and Python are all supported. Or maybe you want to convert your data directly into TileDB first and then do your processing and analysis after conversion; you can do that too. TileDB-CF-Py, if you're just directly converting NetCDF without any modification, is very fast; you don't have to do any of that manual tweaking that I was mentioning. Alright, now for the fun part, let's get into GOES-R. One second. The GOES-R satellite series is a four-satellite program with two operational satellites at a time. One is GOES East, which has ongoing imagery of North and South America, the Atlantic Ocean and a little bit of the west coast of Africa, and then you have GOES West, covering the western part of North America and the Pacific Ocean. It has a bunch of different devices on it; these two right here are what I'm going to be focusing on, and these take Earth imagery. You also have some imagery of the sun from these two sensors, and then some in situ data as well from where the satellite is located. Let's look at the Level 1b Radiance data. This is the kind of imagery you might expect to see; this is the full disk product. It is defined on a fixed grid from the satellite's perspective, with dimensions of y and x, and it has one file saved for each band and each scan time. The scan times have changed a little bit over the lifetime of the GOES-R satellite series; right now I believe they're doing a full disk scan every 10 minutes. They do them for each band, and there are 12 different bands or channels.
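Going back to the ingestion steps described at the start of this passage, a rough sketch of the workflow with TileDB-CF-Py is shown below. The class and method names follow my reading of the tiledb-cf documentation and should be treated as assumptions to verify against the current API; the file and group names are made up.

```python
# Rough sketch of the three ingestion steps (recipe, schema, copy) with TileDB-CF-Py.
# Names below are assumptions for illustration; check the tiledb-cf docs.
import tiledb.cf

netcdf_file = "OR_ABI-L1b-example.nc"   # hypothetical input NetCDF file
output_group = "abi_l1b_tiledb"         # hypothetical output TileDB group URI

# 1. Auto-generate a conversion recipe from the NetCDF group; at this point you
#    could still rename arrays/attributes or add compression filters by hand.
converter = tiledb.cf.NetCDF4ConverterEngine.from_file(netcdf_file)
print(converter)  # inspect the proposed arrays, dimensions and attributes

# 2 + 3. Create the TileDB group/array schemas and copy the NetCDF data over.
converter.convert_to_group(output_group)
```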
There are a couple of different ways you can store this in TileDB. One that I thought was particularly useful is to combine the files by adding dimensions for the band and the time index, and then add the axis label that I was talking about earlier, which maps from the scan midpoint timestamp (or maybe you want to map from the beginning or the end of the scan) to that time index. This allows you to do some really nice slicing of the data that doesn't require a lot of manually trying to figure out exactly which files you need from an S3 bucket. This is what that might look like: you define which timestamps you want to get the region over; in this example, that would pull just the data from that early morning timestamp. As I said, it's about one scan every 10 minutes right now. You can specify which data you want out of the TileDB group, either by the array name or by the attribute name, and then you just query the time index from your first array and put that directly into your next array to get out your final radiance values. Anyone who has actually worked with the GOES-R data should know how painful it is to find the files you want in the original NetCDF data. As I mentioned before, there are lots of files for each band, and a complex directory system is used to organize them: the product name, then the year, then the day of the year (not month and day, it's something like 265 that you need to calculate), then the hour of the day, and then there's a bunch of files with complicated names in that folder that you have to dig through. Versus here, where I just want this band, I don't need to do any special processing; that's handled for me. The next one I want to talk about is the GLM dataset. This is a slightly different dataset: in the radiance case it's very clearly a dense array, whereas this lightning data is definitely something that's sparse. The way it's stored in NetCDF, which really only handles dense data, is as an ongoing list of events, groups and flashes. Here, an event is just light that was detected by one of the sensors on the GLM device, a group is all of the events that are next to each other at a particular timestamp, and a flash is all of the groups that were together at subsequent timestamps; they use that to build up the lightning data. Then there are mappings from the events to the groups and from the groups to the flashes, which use IDs local to the file. This data is created every 20 seconds, with data from overlapping time spans: maybe the lightning flash you want to look at was from the previous 20 seconds, but it's in the next 20-second bucket, depending on how fast the processing happened. That means there are 180 files per hour, which is a lot of files to try and handle. One possible way to represent this in TileDB is to convert that index dimension (event 0, 1, 2, 3, etc.) into the latitude, longitude and time dimensions of the event. You can then replace the ID mapping, so instead of going from event so-and-so to group so-and-so, you can query directly on the latitude, longitude and time coordinates. You can use this to create a single TileDB array for the events, a single array for the groups, and a single array for the flashes.
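A sketch of the two-step lookup just described, first querying the time-label array for the indices of interest and then slicing the dense radiance array with them, might look like this. The group layout, array names, dimension order and the use of the "Rad" attribute are assumptions for illustration, not the speaker's actual schema.

```python
# Hypothetical two-step query: timestamps -> time indices -> radiance values.
import numpy as np
import tiledb

# Time window of interest, as seconds since epoch (float, matching the label dim).
t0 = np.datetime64("2020-09-01T06:00").astype("datetime64[s]").astype(np.float64)
t1 = np.datetime64("2020-09-01T12:00").astype("datetime64[s]").astype(np.float64)

# 1. Sparse label array: scan timestamp -> time index of the dense array.
with tiledb.open("goes_r/scan_time_index", mode="r") as labels:
    idx = labels.multi_index[t0:t1]["index"]

# 2. Dense radiance array: slice by time index, band and a spatial window.
band = 7  # assumed band dimension value
with tiledb.open("goes_r/radiance", mode="r") as rad:
    data = rad[int(idx.min()):int(idx.max()) + 1, band, 0:500, 0:500]["Rad"]

print(data.shape)
```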
In this case, rather than having 180 different files per hour, you're always adding to the same TileDB arrays, where you have the sparse representation in sparse arrays that your data really calls for. Then, just tying these two together: suppose I wanted to grab that full disk image I had before, and I want all of the lightning events that happened during the time span of that scan. It's really easy to do: I just grab the time-bounds attribute that I stored previously, say I want event energy, and I can really quickly query that out. So, to bring this all back together: looking again at this NetCDF data model, this fundamental part here can be really handy for storing dense data, and you can map that directly into TileDB. Sometimes you fit your data into NetCDF because it was the tool you had available at the time; you can take that and generalize it further. And one of the really powerful things here is that you get to move away from the file system: you let the storage engine handle the files for you, and you work directly with arrays, accessing exactly the data that you want with all of the fast querying power of TileDB. Alright, I think I wrapped up a little bit early, so we have time for questions now. Okay, that was an interesting talk, thank you very much. So let's go forward with questions. The first one is this: is it possible to publish TileDB data on a dashboard or a map? I'm sorry, what was that? Sorry, I will repeat: is it possible to publish TileDB data on a dashboard or a map? Yes. So TileDB is a company, and we have TileDB Cloud, which has a lot of built-in ways to set up dashboards with our cloud product. But the core engine is all open source, so if you want to set it up yourself, we have REST APIs that you can set up; you can access it, and maybe using Go makes it easy to set up your backend. We have a lot of different interfaces to help make that easier. Okay, thank you very much. The next question is: is TileDB very different from xarray and Zarr? From which? I think I will copy it because I'm not sure how this is pronounced. Oh, xarray and Zarr. So TileDB is fairly different from Zarr. Where they are similar is that both are array-based, cloud-native storage systems. I haven't spent a whole lot of time playing with Zarr, so if anyone is a Zarr expert they can correct me if I'm wrong here, but right now with Zarr the functionality is based more on just dense arrays; it doesn't have unlimited dimensions, it doesn't have the time traveling, and it doesn't have any of the sparse support. So if you want all of that extra functionality, TileDB is going to allow you to do that. On the other hand, because Zarr doesn't have all this extra functionality, it's a lot simpler to get up and running. With xarray and Zarr: Zarr is pretty closely integrated with xarray, I think there's a lot of overlapping developers there, but xarray supports a lot of different backends. xarray is an in-memory tool in Python, for anyone who's not familiar with it, that uses a NetCDF data model. Part of this work was creating the xarray backend. One of the nice things about xarray is that it allows you to do lazy loading of your data: those coordinates I mentioned earlier it will greedily load into memory, but it leaves all of your big arrays on disk and just loads them as you actually do computations or plotting or whatever else you need.
So, as I said, we have the xarray backend for TileDB, so you can do all of that with TileDB as well. I see, thank you very much. It seems like there are no more questions, so we can leave it here if it's okay. Okay, yeah. Thank you for your time, everyone, and thanks for checking out my talk. Thank you for your talk, Julia, and see you around at FOSS4G. Bye-bye.
The Geostationary Operational Environment Satellite-R (GOES-R) series provides continuous satellite imagery of the Earth’s eastern hemisphere. GOES-R series datasets are made available through multiple cloud service providers via NOAA’s Big Data Program. The datasets include Level 1b and Level 2 satellite data split into directories of NetCDF files stored for consecutive time periods. This talk will show how to use TileDB Embedded, an open-source universal storage engine, to combine data from multiple GOES-R products into a single easily-accessible dataset. In this talk, I will show how to ingest data from the GOES-R Advance Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) into cloud-ready storage using TileDB Embedded. I will discuss the pros and cons of keeping the original NetCDF data model, and show how to combine datasets that consist of both dense and sparse arrays. With the arrays stored in TileDB Embedded, I will show how to efficiently slice weather data, locally and remotely on cloud object storage; how to use data versioning to time-travel across any changes to an array; and give an overview of some of the open-source tools that integrate directly with TileDB Embedded.
10.5446/57199 (DOI)
Okay, time to start back. Welcome everyone to this next talk of the academic track of FOSS4G 2021. My name is Marco Minghini, from Italy. It is my pleasure to introduce the next speaker of this session, Philippe Rufin. He holds a PhD in Geography. His research aims at mapping agricultural systems across large areas and long time frames to improve our understanding of food production, water resource consumption and land management. Philippe also has several years of teaching experience in applied Earth observation classes using FOSS4G tools. Recently, he and his colleagues decided to develop a QGIS plugin for facilitating the integration of Google Earth Engine and QGIS. This is exactly what he's going to tell us in his talk titled GEE Time Series Explorer for QGIS: Instant Access to Petabytes of Earth Observation Data. Philippe, the floor is yours. Thanks a lot, Marco. So yeah, you already introduced nicely what I'm going to present today, and I have to say that I'm not a developer by training, but I'm doing research and teaching, so this presentation is the result of me needing a new tool to do my research and my teaching. At the same time I was lucky to work with Andreas Rabe, my very appreciated colleague, who helped me to discuss the ideas and put them into practice, so he was the developer behind the work that I'm going to present here, and my colleagues Leon Nill and Patrick Hostert also supported this work a lot. If you're interested in reading more, there's a short paper in the proceedings; I put the link here on the title slide, and you can find it in the proceedings and in the conference program as well. Okay, so let's see if this presentation works. So I guess everybody here is aware that with the onset of the open data policy there was a shift in paradigm in terms of Earth observation analysis workflows. We have an increasing number of image archives available, for example MODIS, Landsat, the Copernicus programme's Sentinel-2 and Sentinel-1 archives, that people can access really easily and free of cost. So research in Earth observation changed from the analysis of single images, or multi-temporal analysis with a few images across maybe 15 years or so, to something that is more dense time series analysis, where we really look into phenology, and this really fertilized a lot of applications like forestry, agriculture, ecology, mapping of land use or land cover change. So here, for example, we see a time series of a vegetation index derived from Landsat, where dark green means high values, meaning high vegetation activity, and the white values are basically low vegetation activity. If we poke into a single pixel of this time series, we can extract the kind of time series profile that shows us how vegetation changes over time, and from this we can derive information on crop types or land management. Very nice stuff. So I mentioned this change in paradigm, and this can be nicely illustrated with the example of Landsat, where there was quite a long phase in which Landsat data had to be bought for varying prices, and then after 2008 the data became free. We can see with this grey bar here that there was a really sharp increase in the number of images downloaded, but also in the blue curve, the number of publications that use Landsat data, and in the orange curve, which bends steeply upwards, the publications that use Landsat data in combination with some kind of time series analysis. So this is the background of this talk.
So there is an increasing number of methods and applications that want to make use of these time series, but the starting point for further developing research and pushing the boundaries a little bit is most often explorative, and it typically asks a couple of questions: which data do we have, how much of it is there, and does the data availability really meet the requirements that we have in order to see what we want to see in the time series and use the appropriate methods? So it's a common thing to have to explore the data before we actually use it, or before we can make sure that our research will work. Exploring time series can be done in a variety of ways; there are several options, but they come with a couple of constraints. For instance, we could think about exploring time series locally, but then the obvious problem is that we have very large data volumes that create accessibility constraints, because a time series of, for example, Sentinel-2 for only one tile for one year is already nearly 60 gigabytes worth of data, and it becomes hard to process on a working machine like a laptop or a workstation. If we expand this a little bit more to an entire country, for example Germany, we are facing four terabytes worth of data for only one year, so that's not really an option for most people. Then, of course, we have a lot of cloud-based services like Earth Engine; they are very powerful and already provide very different image archives that we can access and tap. However, most often we are facing the hurdle of having to code a specific script or a specific application or a Jupyter notebook in order to really access a time series and look into the signal that I was showing earlier. There were also a couple of browser-based platforms, for example the Web Earth Observation Monitor by colleagues from the University of Jena, which was really great; it was deprecated due to lack of funding, but these types of tools also have the limitation that they most often offer access to a single archive, in this case the MODIS archive, and it's really difficult to integrate them with external datasets. So that brings us already pretty close to the motivation for this talk. What we wanted to do, or what I saw a need for, is a tool that allows us to instantly explore time series from Earth observation image archives, and not just one but many, and those should be compatible with local data sources, like a vector dataset of field data, for example. For many applications it's really useful to have the opportunity to download, for a specific location, just the time series, not an entire image, so sample-based data download. And ideally we wanted to do this with very low accessibility constraints in terms of the things I mentioned earlier, so not so much need for coding, everything quite accessible and wrapped in a user-friendly interface. This should be useful to researchers, but also very much useful in education, in order to teach students the analysis of time series, for example. Okay, and this is already the result of what we wanted to do: it's called the GEE Time Series Explorer plugin for QGIS. It has a couple of panels that you can see here at the top of the QGIS interface: some general functionalities, a graph or plot window where you can see time series, a couple of settings you can make, as well as a point navigation.
This entire plugin, I want to highlight, is built specifically on the Google Earth Engine plugin for QGIS, which links QGIS and Earth Engine through the Earth Engine Python API. It's really cool that this plugin exists and that we can build things on top of it. So in the following minutes I want to present a couple of video recordings that demonstrate the plugin's use. To start, let me launch this video. We open the plugin and we can select, from a list of predefined collections, something that we want to dig into. Here we choose a predefined Landsat collection that features all the Landsat sensors, and we added vegetation indices like EVI to this custom collection. We can create a date filter to only look into a subset or time period of interest for our analysis; here we set January 2010 until the end of September, so very recently. We activate this cursor and we can click on any location on the map, anywhere on the globe where there's Landsat data available, and within a couple of seconds we retrieve this time series of EVI data. You can click on the next location, wait a couple of seconds again, retrieve the data, and you see the updated graph. The graph is interactive, so we can zoom on the different axes, pan around and scroll around as much as we want, in order to really explore the time series in depth and detail. Okay, then we have a point browser, so we can select a vector dataset, like a field dataset with point locations in it, and the plugin will automatically jump to the first location; we can skip point by point through the different locations contained in the shapefile and retrieve the time series of the selected spectral bands or indices automatically. Alright, so some users maybe want to access the data directly, and for this we have the function to copy, in plain text format, what we see in this graph panel, so we can copy and paste the data directly; and for larger datasets there's the option to download time series for each feature and the spectral dataset that we've selected down there. So if we have a couple of hundred points, a single text file will be created for each point, with the same type of text information that we saw before. Okay, so that was my use case number one. The second thing I want to present is the visualization of image data, which we can also do in this plugin. There's also a short video, starting in the same location. We can define, basically like in the standard QGIS rendering options, different ways of visualizing imagery; here we select from the spectral bands that are available, apply a contrast stretch, and the plugin, after activating this image functionality, will retrieve a WMS layer from Earth Engine that is visualized pretty straightforwardly in the map canvas. So we can explore the images, and we have the chance to navigate through time: navigate backwards in time with these arrow buttons up there to the previous image, and one image further back in time, to really go through the time series visually as well, in terms of images. But you can also navigate directly in the graph and click anywhere in your time series profile, wherever you think there's something interesting. Okay, and you have the chance to store these layers in the layer list.
There's a small button to do exactly that, so the layer that we see now will be retained, and we can jump to another observation and store the same image, which allows us to flicker back and forth and directly compare, without having to wait for loading times and so on. The next nice feature, which is really my favourite feature in this plugin, is the temporal binning. What we can do here is determine a temporal window that we think is suitable to aggregate all the observations that we have. So we can say we want to create, for example, a mean value across 60 days of the time series that we have, calculate the reflectance for the different bands in the given time range, and visualize the result. Here we are selecting a median image for a two-month period in the year 2015; we wait a couple of seconds, and again this will be retrieved as a WMS layer and visualized. So maybe now you're interested in jumping through the time series, seeing the exact same time interval in every year, jumping back and forth. There's also a function to do that: we can define a temporal increment that allows you to jump a predefined amount of time forward or backward. So now we can jump to the next year like this, and the next year again. Okay. Alright. So we have a couple of predefined collections, as I mentioned. Most of them are the standard collections as you would see them in Earth Engine, but there are also these more interesting, a bit more customized collections, for example the Landsat collection that includes cloud masking, integrates the different sensors, and also adds vegetation index layers or other types of spectral features to the collection that you can then directly visualize. With our collection editor you have the chance to modify these collections directly in QGIS. That means you see the Python code for the specific collection, you see the functions that are defined there, and there's also a description alongside, which you can directly modify, then reload the collection and see how the changes apply. So you might be interested in creating your own custom collections, which you can then add to the directory of the plugin and have available in the list of predefined collections. And we would also like to highlight that this is somewhat of a community effort, so we invite you, in case you have a custom collection that is really useful for the community and we didn't include it yet, to contribute your collection via Bitbucket. I will take a look at it and invite you to join the team and really contribute to this plugin; that would be cool. I think there are people in the audience who already did that: since the release of this plugin we have two new collections, which are the ERA5 daily climate collection as well as two Sentinel-1 collections that we didn't include previously. So thanks to you guys, if you're hearing this talk. The plugin works quite efficiently; you saw this in the video, and this was recorded at my home, so it's not a crazy fast internet connection. We implemented a parallel downloading capability for when we want to download samples, which accelerates things very much, and WMS technology is used for image visualization, so it kind of resembles the speed you would get using the browser-based Google Earth Engine visualization. So we did a couple of performance tests: for example, we download 35 years worth of cloud-masked Landsat time series for 1000 locations distributed across the globe.
This takes on average four minutes and two seconds, so it's really quite affordable in case you want to, for example, investigate the use of time series in your modelling framework or for your classification approaches. Then we have a couple of visualization tests: a cloud-masked Landsat image takes on average five seconds to show up; calculating a monthly mean reflectance across Germany takes 17 seconds; and if you go to a larger, continental scale, you have wait times of around 47 seconds, in this case for Europe, sometimes more, sometimes less, depending on what you actually do. But I think this gives you a kind of benchmark to look at. I can demonstrate this again using the video: we define one of those temporal windows of two months again, we have a couple of presets in terms of the band rendering, and then we activate the image functionality and load an image; we can see that one image frame is being loaded with the settings that we wanted. That is commonly the image that overlaps with the time series location that we loaded previously, but we can extend this: we enable the function called show for map canvas extent, and this triggers a request to Earth Engine asking for basically the entire extent of the map canvas that you can see, and the WMS layer is retrieved, to, for example, look into a monthly percentile reflectance composite across half of the Amazon biome in a couple of seconds. I really like this, and we can zoom in; this WMS layer is reactive, we can zoom in further and the spatial resolution will update, and we see with sufficient detail the processes that we want to highlight here. Then again we can store this layer, jump 10 years forward in time, store the same layer, and flicker back and forth in order to assess land cover changes pretty instantly. So I think that's everything from the demonstration side. If you're interested in using this plugin, we have a Read the Docs page with the documentation on how to install it and how to use it; with the new version it should be a little bit easier than before. And yeah, I'm happy if you liked this talk, I want to thank you for your attention, and I hope you check out the Read the Docs. If you have any issues or comments you can either leave us a note or an issue on Bitbucket, or contact us by email or Twitter. Okay, thank you. Oh sorry, I realized I was muted, I'm not sure you heard me: I was thanking you for the presentation and for the impressive work, and I was saying that the audience was also impressed, as I've seen a lot of applause coming in while you were speaking. We have questions. The first one is a simple one, it came while you were showing the first video, and the question is: can we also select areas? So, whether we can select areas for extracting the time series: that's not yet possible. I guess the question targets the aggregation of areas to extract a kind of average profile, and we can't do this yet. It's a nice suggestion though. Thanks, and we have another interesting question: how to initialize Google Earth Engine in QGIS, so do we need to add a Gmail account and credentials in QGIS as well, how does the connection happen?
So the connection happens through the Google Earth Engine plugin for QGIS that I mentioned earlier, so the entire authorization is done in the background before you can first use this plugin. With the most recent version of that plugin it works through the Python console directly in QGIS, which launches a browser window; you log in with your Gmail or Google account, retrieve an authorization code, which you then paste into the console, and this should work right away, so the authorization can be done directly from QGIS. Thanks a lot, this is very important to know. I invite the audience to keep asking questions, of course, while we go through the ones already there. There are two additional questions I can merge into a single one, and it's about the data and the data catalog: how does the plugin update itself, is it updating at the same time as the datasets are updating in Google Earth Engine, and does the plugin include all the catalogs of Google Earth Engine? Not yet. We did not include the entire catalog, because not all of the collections are really suitable for the concept that we're using: the time information is crucial, and for now we are staying, at the highest detail, at a daily temporal resolution, but I know there are also collections that feature, for example, hourly data, for instance the climate datasets that are available there. So what we have here is really a predefined set of collections that are defined as small Python scripts in the folder of the plugin, and you can change those, you can amend them and add more if you want, but we have a limited set that features mostly MODIS collections, Landsat collections, the Sentinel-2 collections, and now Sentinel-1 and ERA5 daily climate thanks to the community contributions; but this list is really up for expansion if you wish.
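To make the two answers above a bit more tangible, the one-time authorization and the idea that each predefined collection is just a small Python script, here is a minimal sketch using the standard Earth Engine Python API. The collection ID and band names are the usual ones for Landsat 8 Collection 2 Level 2, but treat the function itself as an illustrative assumption rather than the exact script shipped with the plugin.

    import ee

    # One-time authorization: opens a browser window, you log in with your
    # Google account and paste the resulting code back into the console.
    ee.Authenticate()
    ee.Initialize()

    def landsat8_sr_collection():
        """Cloud-masked Landsat 8 surface reflectance with an added NDVI band
        (illustrative sketch, not the plugin's shipped collection script)."""

        def mask_and_index(image):
            # Bit 3 of QA_PIXEL flags cloud, bit 4 flags cloud shadow.
            qa = image.select('QA_PIXEL')
            mask = qa.bitwiseAnd(1 << 3).eq(0).And(qa.bitwiseAnd(1 << 4).eq(0))
            ndvi = image.normalizedDifference(['SR_B5', 'SR_B4']).rename('NDVI')
            return image.updateMask(mask).addBands(ndvi)

        return (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
                .filterDate('2015-01-01', '2015-12-31')
                .map(mask_and_index))

    collection = landsat8_sr_collection()
    print(collection.size().getInfo())  # number of scenes in the filtered collection

A script along these lines, dropped into the plugin's collections folder, is presumably how a custom collection would end up in the predefined list described above.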
Thanks a lot, Philipp, for the answer. I don't see additional questions, so while waiting for new ones I can maybe ask one myself, about the software project itself. You said that you started to develop this with your team, but given that this is probably interesting for a huge number of users, I would like to know what the status and maturity of the project is: is it still just you and your team, were you able to create a community around the software, what future developments do you foresee, and are you discussing these with the community or is it still a local project? So far it's quite local, because we haven't really presented this to the community. The response from our networks was quite positive, but we haven't really put it out there, and now is the time to do it, so I'm happy to receive any kind of feedback. In principle this is really a community effort: there is no real funding for it, so we are doing it in our spare time, but my colleague Andreas Rabe has been developing a couple of plugins that are really nice to use, specifically for working with raster datasets, and he also has a personal motivation to keep amending the list of plugins he is building. As for future developments, the idea is to take the download capabilities a bit further: not only downloading sample-based data, but also downloading image chips which can then be used locally for further investigation of time series, for example downloading a stack of images that we can write directly to disk and investigate in QGIS from the local hard drive. So that's some of what we plan to do. Indeed, I think this was the right moment, given that the work looks mature, so thanks a lot again for presenting it at FOSS4G. For the audience, please take note of the links that we see now in the slides: if you're interested in the documentation, in using the software, or in opening issues, here are all the links, and feel free to contact Philipp, who I'm sure will be happy to share more about this work. And we have an additional question, following up on one of the previous ones. The person asking says: I don't think this was answered, is the plugin updating at the same time as the datasets are updating in Google Earth Engine? Maybe you want to elaborate more on this, Philipp. So the updating, you mean the data catalog that is updating, is that the question, or are you talking about something else? That's my understanding: when the datasets are updated in Google Earth Engine, is what you find in QGIS automatically updated or not? Yes, it's automatically updated, because every time you launch the plugin there is a new authorization and all the requests are sent to Earth Engine, tapping the collections and the image archives that are there at that moment. There is no caching or anything; it's always the most recent version of the datasets, and all the collections that are available at the moment are also available through the plugin. Okay, thanks a lot, Philipp, for the clarification. We should close the talk now, but once again I invite the audience to get in touch with Philipp. There is also a paper published in the academic track proceedings of FOSS4G
2021. I invite you all to have a look if you are interested and to contact Philipp for any further information you may need. Thanks also to the audience for the questions and the input, and if you liked the talk, please let Philipp know with a virtual applause. Thanks again, see you in the next talk, and I wish you a good continuation of FOSS4G 2021. Thank you.
Current Earth Observation applications heavily rely on analyses of dense intra-annual or inter-annual time series. State-of-the-art analysis workflows thus require mass processing of satellite data, with data volumes easily exceeding several terabytes, even for relatively small areas of interest. Cloud processing platforms such as Google EarthEngine (GEE) leverage accessibility to satellite image archives and facilitate time series analysis workflows. Instant visualization of time series data, though, is currently not implemented or requires coding customized scripts or applications. We here present the GEE Timeseries Explorer which allows instant access to any Earth Engine image collection from within QGIS. It seamlessly integrates the QGIS UI with a compact widget for visualizing time series from any GEE image collection graphically as an interactive plot, or spatially as images with customized band rendering. The GEE Timeseries Explorer offers flexible integration of any GEE image collection, such as the MODIS, Landsat, Sentinel-2AB, or Sentinel-1 product suites. Image collections can be modified in a collection editor widget to include, e.g., quality filtering, cloud- and cloud-shadow masking, or adding band indices or transformations. We added a set of pre-defined collections, including quality-filtered MODIS VI products, integrated and cloud-masked Landsat TM, ETM+, OLI, or cloud-masked Sentinel-2AB surface reflectance products. Users are encouraged to contribute to the plugin by sharing custom collections through the plugin repository or the plugin homepage.
10.5446/57201 (DOI)
Did you have a screen to share? Yeah, I can share the screen. Okay, cool, so we'll wait for you. I'm seeing your screen, so I'll go into the background and give the floor to you, Krishna. Okay, thank you. So hello, everyone. This is going to be a very interesting talk on geospatial analysis with Python. The day before yesterday I ran a workshop on the same topic, and we spent an amazing four hours going from what exactly Python is all the way up to using some basic libraries to do geospatial analysis with Python. If you're interested, you can check out the GitHub link for that; I have all the notebooks and the basic introduction to those notebooks there. A little bit of information about me: my name is Krishna, you can visit my website, I am a freelance web GIS developer, and I also do a lot of content creation on YouTube, so if you are interested in web GIS you can definitely check out my channel. Today we'll be focusing on geospatial analysis with Python, with the assumption that most viewers have just started learning Python or have no knowledge of it at all; they come from a non-programming background and are not yet sure whether they should learn Python. So this is something we'll discuss, and then we'll move forward. Basically, Python is not just useful for developers; I think it's a very important tool for anyone working as a GIS analyst, because once you understand Python it can help you automate a lot of the tasks you are doing right now, or even automate them entirely using cron jobs. Python at the same time has applications in almost every field, health, education, GIS, everything, so understanding Python can definitely earn you your bread and butter. For this talk, I have divided learning geospatial Python into a roadmap of steps. Of course, this is just scratching the surface: there are countless libraries and use cases. But what we are trying to do here is give you a good starting point if you have never worked with Python or never done geospatial analysis, because by following it you will get some idea of the capabilities Python holds, and after that everything just depends on your imagination; you can take it in any direction you want. If you're starting out, a very good first step is simply to brush up your knowledge of GIS concepts. These can be very basic things, like understanding the types of data and the different geospatial operations, buffer, contains, and so on. Then of course we have to start from the very basics of Python. Once we understand that, we can move forward with GDAL and OGR and their command-line tools, because this is where most of your tasks can get automated, which can make your life very easy. Even a simple thing like converting a shapefile to a different format or, let's say, a different projection: if you want to do it once, you might use QGIS or ArcMap or something like that.
But if you have tens of thousands of such files to convert, using the command line is probably the fastest way to do it. Once you understand that, we move a little bit forward and look at vector data and how to deal with it in Python. When we say vector data, there are many places we can load the data from: files, PostGIS, anything. Once we have the data in Python, then comes the main part, where we actually perform spatial operations on it and make sense of the data. Whatever data we load, we have to be sure Python understands that it represents locations and not just random numbers. Once we do that, we can move on to visualization, where we look at how this data can be visualized in a way that makes sense to someone looking at it. This is where a lot of researchers and data scientists come into the picture, because it lets them present their data the way they want to. So this is how we play with vector data: we load the data, we do the operations, we visualize it, and we can also export the data. Then we can look at how raster data is treated in Python. One of the main reasons we use Python is its speed and computational power, and with raster data we have a value in each and every pixel that we want to deal with, so Python becomes a sensible choice of programming language for that. And finally, we will also see how to create interactive maps. The difference between the earlier visualization step and this mapping step is that the visualization is more or less static: you can create it and export it as a PNG, and that's the end of the story. With an interactive map, you give your users the ability to play with your data and your map and make sense of it themselves. We'll see a couple of libraries for that, like Folium, ipyleaflet, pydeck, and so on. If you follow this roadmap, you will have a basic understanding of Python and, of course, the geospatial component attached to it, and from there you can take it in any direction you want. So we'll just go through each item of this roadmap in the next 25 minutes. As I said, I don't think I need to spend much time on the GIS concepts; everyone understands vector data, raster data, and spatial operations, and once you understand those concepts you can try to use them with Python. The next big thing, where you want to stop and spend more of your time, is making sure you understand Python correctly. That starts with very simple things, like understanding variables, the different data types, lists and dictionaries in Python, then understanding what functions are and how you can execute specific pieces of code whenever you want using functions and arguments, and how you can combine this with loops so that you can call a function repeatedly on each piece of data. You can also learn about conditions, so you can decide how exactly your Python code executes depending on how your input or output behaves.
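As a small flavour of how these basics combine in a geospatial setting, here is a tiny sketch of a function applied in a loop with a condition. The folder and file names are made up purely for illustration.

    from pathlib import Path

    def describe(path: Path) -> str:
        """Return a short description of a file based on its extension."""
        suffix = path.suffix.lower()
        if suffix in {".shp", ".geojson", ".gpkg"}:
            return f"{path.name}: vector dataset"
        if suffix in {".tif", ".tiff"}:
            return f"{path.name}: raster dataset"
        return f"{path.name}: not a spatial file"

    folder = Path("data")          # hypothetical folder of mixed files
    if folder.exists():
        for path in sorted(folder.glob("*")):
            print(describe(path))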
Once you are clear on these basic concepts, you can move on to more advanced ones, such as creating your own classes and objects, or writing decorators, which let us change a function's behaviour without actually changing the function. Python in itself is quite huge, and just learning Python is enough to earn your bread and butter. But once you understand Python and like it, you can take it one step further and learn the different Python packages. One of the reasons Python is so popular is that, thanks to the packages available, you don't have to reinvent the wheel every time or write code from scratch; you can simply use packages that already exist and build your business logic on top of them. Some of the most important packages, used by almost all data scientists, machine learning developers, and artificial intelligence people, are NumPy, which deals with arrays of data, the advantage over a traditional Python list being that it is significantly faster and provides more ways to work with the data, and Pandas, which is built on top of NumPy and helps us deal with structured data. You can think of Pandas as combining the flexibility of Python with the power of something like Google Sheets or Microsoft Excel. Finally, for visualization, we look at Matplotlib and Plotly, which are some of the most widely used packages for showing data, spatial or not: with Matplotlib you can create graphs, scatter plots, histograms, pie charts, and so on. The very interesting thing is that all of these packages are connected to each other, so even if your data lives in Pandas, you can take that data and show it in Matplotlib the way you want. This is one of the advantages of learning these packages. Once you've done this, you can move one step further and learn frameworks like Django, Flask, and FastAPI to build more scalable, secure, and stable applications. Then we move into the world of GDAL and OGR, where we see what these libraries are capable of. For example, here are some of the basic command-line tools I use every day to work with data: if I want to merge multiple GeoTIFFs together, I can do that with gdal_merge.py; if I want to create tiles for a specific zoom level from a GeoTIFF, I can do that with gdal2tiles.py; and I can use the ogr2ogr command-line tool to convert data from one format to another, to change its projection, and even to query the data and save the result. For example, if I have data for the entire world and I'm only interested in the data for India, I don't have to open any desktop software and clip it out manually; I can simply provide a WHERE clause to ogr2ogr. Understanding these command-line tools makes life much easier and saves a lot of time. Once you understand them, you can use the same knowledge in Python as well, through the GDAL and OGR Python bindings.
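To illustrate that last point, here is a small sketch of the same kind of operations done through the GDAL/OGR Python bindings. The file names and the attribute used in the WHERE clause are hypothetical; the calls themselves (gdal.Warp, gdal.VectorTranslate, gdal.BuildVRT, gdal.Translate) are standard parts of the osgeo package.

    from osgeo import gdal

    gdal.UseExceptions()  # raise Python exceptions instead of silent error codes

    # Reproject a raster to Web Mercator (the Python equivalent of gdalwarp).
    gdal.Warp("elevation_3857.tif", "elevation.tif", dstSRS="EPSG:3857")

    # Convert a world-wide vector layer to GeoJSON, keeping only India
    # (the Python equivalent of ogr2ogr with a -where clause).
    gdal.VectorTranslate(
        "india.geojson",
        "countries.shp",
        format="GeoJSON",
        where="iso_a3 = 'IND'",   # hypothetical attribute name
    )

    # Build a mosaic from several tiles (similar in spirit to gdal_merge.py).
    vrt = gdal.BuildVRT("mosaic.vrt", ["tile_1.tif", "tile_2.tif"])
    gdal.Translate("mosaic.tif", vrt)

The same steps could of course be scripted in a loop over thousands of files, which is exactly the automation argument made above.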
As I said, for vector data we have to learn a few things to get a complete picture of how it can be handled in Python: we start with the basics of loading data and creating geometries on the fly, then doing spatial calculations and operations in Python itself, and finally visualizing the data. One of the most popular libraries for vector geospatial analysis is GeoPandas, as it combines the power of pandas and NumPy with packages dedicated to geospatial work, such as Shapely and Fiona. Shapely allows us to create different geometries, point, line, multiline, multipolygon, anything like that, and Fiona is essentially a Python wrapper around GDAL/OGR, which helps us load any spatial dataset, export spatial data, and read its metadata. Each of these libraries is good at one thing, but when we combine them we get a library that can process data very quickly and also perform spatial operations. With GeoPandas we can load data that is available on disk, we can connect to our own database and load data from there, and we can also load a CSV and create the geometry on the fly. For example, I had a CSV of stadiums with longitude and latitude columns holding the actual coordinates; instead of treating those coordinates as plain numbers, we converted them into real geometries that GeoPandas understands, so that we can run spatial operations on them. We can also create a GeoDataFrame from scratch: if I have no file to load but want to build the data myself, I can use Shapely geometries to create the actual shapes, add all the metadata I want, and build the data frame, which I can then export as a shapefile or GeoJSON, or push back into the database. Once we know how to get data in and out of different stores, we can look at what GeoPandas can actually do, for example spatial operations. In the example on the screen, I have two world-level shapefiles: countries, with a polygon for every country, and airports, with a point for every airport across the globe. What I want to do is determine which airport belongs to which country. To do that, I first select India from the countries data and squeeze the geometry out; squeeze is a method available in pandas and GeoPandas that gives us the geometry as a Shapely object. Then I test the geometry of each airport against it: I call airports.within() with the Shapely geometry of India, and for each feature, that is, each airport, I get back True or False depending on whether it lies within India.
Based on that, I can keep only the True values, which gives me all the airports that are within India. Like this, we can do a lot of spatial operations with GeoPandas. Another example is overlay, where we have a base layer and put our own data on top of it, and we can then take the union or the difference to get the combined or the difference geometry. All of these things can be done directly inside GeoPandas, and the result can be exported however we want. This is what actually makes GeoPandas so popular and so useful: we can do everything inside the GeoDataFrame itself and export the data frame at the end. Finally, we also have to visualize the data, because just looking at it on screen in CSV form will not make any sense at all. By using existing libraries like Matplotlib or Plotly, we can actually put the data on a graph. Even though it looks like a map, in the end it is just a graph, because Shapely geometries don't care about coordinate systems or projections; they only care about the coordinate numbers. So we take the Shapely geometries held in GeoPandas, pass them to Matplotlib, and Matplotlib draws them on a plot. You can see that in the first map I coloured each continent: it becomes very easy to do this kind of styling and create an image you can export and put in your reports. We can add a title to the plot, add x and y labels, decide whether we want to see the numbers on the axes at all, and if we do, decide which numbers to show. That's the kind of freedom we get with Matplotlib. In the same way, we can create maps where we combine different GeoDataFrames into one figure containing all the layers: in the second example I have a base map with all the countries and the cities plotted on top. Here you can see we can play with the marker, the colour, the marker size, and many more properties; the Matplotlib documentation lays all of this out in a very accessible way, so if you refer to it you will get a good idea of what Matplotlib can do. Plotly is another great option for visualizing data on a map. The advantage is that, while Matplotlib produces a static image you cannot interact with, Plotly lets you pan, zoom in, and zoom out; it's not 100% interactive, but it gives you a very good look and feel. Next, we can see how to deal with raster data in Python. As I said, in the end Python is good with numbers, so what we try to do is convert the raster data into a numeric format that we can process. Libraries such as rasterio, earthpy, Matplotlib, and NumPy come into the picture when we deal with raster data.
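Before moving on to the raster examples, here is a compact sketch of the vector workflow just described: reading files, building points from a CSV, and the point-in-polygon test. The file and column names are hypothetical; the GeoPandas and Shapely calls themselves are standard.

    import geopandas as gpd
    import pandas as pd

    # Load a polygon layer and a plain CSV with coordinate columns (hypothetical files).
    countries = gpd.read_file("countries.shp")
    stadiums_csv = pd.read_csv("stadiums.csv")

    # Turn the longitude/latitude columns into real point geometries.
    stadiums = gpd.GeoDataFrame(
        stadiums_csv,
        geometry=gpd.points_from_xy(stadiums_csv["longitude"], stadiums_csv["latitude"]),
        crs="EPSG:4326",
    )

    # Point-in-polygon test: which stadiums fall within India?
    india = countries.loc[countries["NAME"] == "India", "geometry"].squeeze()
    in_india = stadiums[stadiums.within(india)]

    # Export the result, or plot it on top of the country outlines.
    in_india.to_file("stadiums_india.geojson", driver="GeoJSON")
    ax = countries.plot(color="lightgrey")
    in_india.plot(ax=ax, color="red", markersize=5)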
For example, if we want to load a TIFF file, we can open it with rasterio and then get all the information about it: the CRS, the number of bands, the bounds, the width and height, and so on. All this metadata is already in the TIFF file, as we know, and rasterio ultimately uses GDAL to read it; we could get the same information with the GDAL command-line tools, but we can also do it in Python. As I said, what we are really doing is converting the raster into an array of pixels and their values so that we can work with them. Here you can see that in this example I have taken the first band of the raster, and it is in fact an array of numbers; you see zeros because all four corners of this image contain no data. We can also plot this data, because the array contains a value for every pixel, and based on the pixel values the raster gets drawn. We can also clip the data, based on a polygon or simply on the pixel indices we want. This raster band has a width of 8971 and a height of 8961, so I can pick any range within that to clip: in this example I took 3000 to 6000 on the x axis and 3000 to 6000 on the y axis. You can see that this subset actually contains real values, and we have plotted it. Once we have these numbers, we can do whatever we want with them, NDVI calculations and so on. Here is the histogram created for the clipped raster, and it makes a lot of sense: we can see the frequency, the DN values, and so on. This is a nice combination of rasterio, GDAL, and Matplotlib to create the histogram, and it shows that you don't just learn one Python package; it is really a combination of packages, each focusing on one task. Finally, we can also look at interactive maps. If you have worked in the web GIS domain, you may be familiar with Leaflet or OpenLayers or similar libraries for putting interactive maps into a web application: generally you have a map object with a view, that is, where the map should be centred and at what zoom, and then you add layers, which can be tile layers, GeoJSON layers, and so on, plus markers. We can follow the same pattern with ipyleaflet, a very useful Python package that puts an interactive map directly into the notebook. Here you can see the notebook I created, which shows all the cities, and I can click on a city to create a marker; it makes a much more interactive interface for users to explore. In the same vein, there is another package called pydeck, which is the Python package for deck.gl, a very well-known JavaScript library for the visualization of geospatial data.
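As a concrete counterpart to the raster and interactive-map parts of the talk, here is a short sketch using rasterio and Matplotlib. The file name is hypothetical; the calls are the standard rasterio API, and note that NumPy slicing is (rows, cols), i.e. y before x.

    import rasterio
    import matplotlib.pyplot as plt

    with rasterio.open("scene.tif") as src:           # hypothetical GeoTIFF
        print(src.crs, src.count, src.width, src.height, src.bounds)
        band = src.read(1)                            # first band as a NumPy array

    subset = band[3000:6000, 3000:6000]               # clip by pixel indices (rows, cols)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.imshow(subset, cmap="gray")                   # draw the clipped band
    ax2.hist(subset.ravel(), bins=50)                 # histogram of the pixel values (DN)
    ax2.set_xlabel("DN")
    ax2.set_ylabel("Frequency")
    plt.show()

And a matching minimal example of the interactive-map idea with ipyleaflet, meant to be run in a notebook (the coordinates are chosen arbitrarily):

    from ipyleaflet import Map, Marker

    m = Map(center=(28.61, 77.21), zoom=4)   # arbitrary centre (New Delhi)
    m.add_layer(Marker(location=(28.61, 77.21)))
    m                                        # last expression displays the map in the notebook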
So by learning these basic technologies, you get an idea of which direction you want to take in the geospatial world: vector data, raster data, interactive maps, and so on. If you have any questions, feel free to ask, and if you want to find this code, head over to GitHub: go to my profile and look for the foss4g-geospatial repository. Thank you. Well, thank you, Krishna, for this great presentation, giving us a roadmap and then walking through the analysis, all within 20 minutes. I thought I knew something about Python, and I still learned a lot. And we have some questions, I'll put them on screen. The most upvoted one, which would also be my question, is, for people just listening: what are good environments to work with Python and with geo libraries? Are Anaconda and notebooks a good setup? Honestly, I have worked with all of these. Jupyter notebooks are a great way to develop code, because you can break your code apart and focus on one piece at a time. But of course, when you're building an application, for example when I build an application with Django, I then have to take the code I developed in the notebook and put it into my Django project. So I would say you can use Anaconda, because a lot of these packages, GDAL, GeoPandas and so on, have a conda installation, so that is also a good way to go; but if you are just trying to practise or play around, then notebooks are fine. Okay, and people should probably always use a virtual environment. Oh yeah, definitely. There are more questions. The second one, which was actually one of the first questions asked, I'll read for the listeners: do you suggest Django or something else for development? I am a big fan of Django; I've been using it for 99% of my projects. And if you look at very big projects, for example GeoNode, they also use Django, so there must be some reason. Just because those projects were using it, I also started learning Django, and I found that it has a great community and it also allows us to create REST APIs with the Django REST framework and everything like that. So I would say Django is good when you want to build an entire project, but if you are just looking to create a couple of APIs and that's all, then you can definitely go with FastAPI or something like that. Okay, and you're not mentioning Flask? Oh yeah, Flask is also there, you're right. All right, let's see, there's another question here, a long one: what is the best way to install GeoPandas, especially within a PyQGIS environment? There sometimes seem to be conflicts requiring a clean reinstallation of things. Yeah, I had the same problem when I was working on a Windows machine, and then someone came to me and said, please learn Docker, and that's what I did, and it saved my life. I would highly recommend that you check out Docker.
Yeah, I mean, it's definitely going to be a lifesaver. That's exactly what we were discussing: different packages work with different GDAL versions, and of course you cannot have several GDAL versions running directly on your system, so it's better to use images, Docker, or at least to create separate environments. I haven't personally worked with PyQGIS, so I'm not sure about that case. Okay, thank you very much. We need to wrap up, and yeah, thanks again very much. Okay, ciao. Give him hearts and claps; you can do that here at the bottom. Thanks very much, great stuff, we'll be in touch for sure. Yeah, and make sure to attend my GeoPandas talk as well, coming soon. Bye. Okay.
This workshop is ideal for someone who has recently started using Python and is exploring its possibilities in the GIS industry; it is the beginning of complex spatial scripting. Since almost all industries are more or less connected to location and mapping, it is important to spread awareness and help developers understand different aspects of the GIS (Geographic Information System) industry. The first part of this series focuses on different GIS data types and how to read them. This includes understanding different data formats such as Shapefiles, GeoJSON, WKT, CSV, TIFF, GeoTIFF, etc. Users can actually read such files on their computers and become familiar with them. The second part of this series focuses on geospatial analysis with Python. Users will first practice working with some core GIS functionalities using GDAL and OGR on the terminal (and later in Python). After this, users will be familiarised with the most widely used geospatial Python libraries such as pandas, geopandas, fiona, shapely, matplotlib, PySAL, and rasterio. The complete series is divided into the following sub-topics: 1. Introduction and installation of all geospatial libraries on the computer and in the Python environment; 2. Working with GDAL and OGR capabilities; 3. Spatial operations and relationships; 4. Vector data analysis and visualization; 5. Raster data analysis and visualization; 6. Working with interactive maps in a Python notebook. Prerequisites for this workshop: 1. Basic knowledge of Python; 2. Basic knowledge of GIS and GIS data formats.
10.5446/57202 (DOI)
Hello everyone. Now we are going to see another talk, this one about the GeoStyler mapfile parser. The speaker will give a quick introduction to the GeoStyler framework before going into more detail about the current state of the mapfile parser, including lessons learned, a live demo, and future prospects. Our speaker is Galtasar; he is working as a geospatial engineer at Camptocamp with an interest in geospatial data science, and he likes good food. So now, enjoy the talk. Welcome to the presentation. My name is Galtasar Toyesher, and I work as a geospatial engineer at Camptocamp. At Camptocamp we provide open source solutions for geospatial business, along with subscriptions. As for the content of my talk, I will first give some background on the context of the project and where it happened, then explain a little about the development process of the parser, and show the current state, including an example at the end. The stakeholders of this project were, on one hand, a small team from Camptocamp, including myself, and on the other hand the Federal Office of Topography, swisstopo, from Switzerland. They initiated the project. It was a side project that we started because of COVID restrictions; we could not start another project, so this project filled the gap, and we worked a few months on it. The background to their proposal is that swisstopo runs a WMS service based on MapServer. It's quite a big one, with over a thousand mapfiles and layers in total, and on top of this they also have a WMTS. Their vision is to eventually get automated project translation from MapServer to QGIS and also ArcGIS, and back again, so they can access the same information from many different clients and applications. They identified the GeoStyler ecosystem as a requirement for this project. So here is a short overview of the GeoStyler ecosystem. Basically, it consists of an intermediary generic format, the GeoStyler style, and several parsers that read and write between this intermediate style and specific formats. They also offer an editor UI where you can style in the browser, based on TypeScript, and there is also a CLI. For the development, we started with a kickoff with terrestris, the main contributor to the GeoStyler project. They offered us a template project with two main functions: readStyle, which takes your style as a string and returns the GeoStyler style, the intermediary style, and writeStyle, which takes a style and writes it to a string. That's the basic interface of the GeoStyler project. We also found an existing mapfile parser written in JavaScript. It was a line-based parser that just parses each line into an object with the key, the value, and whether it's a block, an end tag, or a comment. From there, the first step was to get to a first green test. For me there was also a step zero: I first needed to get accustomed to JavaScript and TypeScript, as I had never done a project in these languages before. From there, some key development steps: we reconstructed a tree structure from the line-based parse so we could map it to the GeoStyler style object, we substituted the symbol tags using the external symbol set file definition, and we then mapped our tree structure to the GeoStyler style object.
We started with a set of parameters that we identified by analysing the old mapfiles from swisstopo, to get an idea of which ones are used more frequently than others. We also ported the parser from JavaScript to TypeScript at some point, and finally integrated the parser into the CLI of the project, which also takes an extra parameter for the symbol set path so it can resolve the symbol definitions. The current state is that the parser can only read mapfiles and map them to a GeoStyler style; it does not offer writing capabilities. Implemented tags include the vector symbolization ones such as color, size, outline, pattern, angle, opacity and so on; there are more, but the basic, most commonly used ones are there. We also have basic raster symbolization, with some of the color ramp handling in place. For symbols, the well-known-name symbols are there, and TTF symbols and icons are supported. We can parse the expression tags in a CLASS, and the scale denominators are also implemented and parsed. As for what is not working: the DATA and FILTER tags are not handled. They are not directly part of the style, but they have a big impact on the final visual result, because if you generate style parameters on the fly, it's difficult to resolve them when you translate the style. Some user-defined vector symbols are not implemented, the units of measurement are not implemented, and relative paths are not resolved, because, as you have seen, the interface lacks the path information. It also does not support reading multiple layers from one map; it only works on one layer at a time. And of course there are also limitations of the GeoStyler style itself and of the other parsers and formats: whatever you cannot encode into a GeoStyler style you also cannot translate, and if the target parser or target format is lacking writer capabilities, the translation is not supported either. As a little example with the CLI, you can just call the GeoStyler CLI, give a source format, mapfile in this case, and a target format, SLD; the output file is point.sld, you give the input mapfile, and you also set the mapfile symbol set path to your symbol file, and it then generates the SLD style for you. Here is an example we used as a demonstration, one from swisstopo: the aerial navigation obstacles. This is how it looks in the browser with the official styling. When we take this and translate it to QML, there is also a QML parser in the GeoStyler ecosystem, and load the generated QML style into QGIS, it shows up like this, and you can see that the icons could not be parsed: they just show up as red dots, while the colors are mostly there. If we parse to SLD the results are better; SLD is basically the furthest-developed parser in the GeoStyler ecosystem. It can also handle the icons, and it can finally show the layer in QGIS almost like the original. And if we take a closer look at the generated SLD versus the original styling from MapServer: many things are working, and there are only some small issues, for example the fat lines on the left, which come from the original styling having a start symbol and an end symbol, while the translation just puts a symbol on every vertex. A small issue.
Yes, but this one is difficult to translate, because QGIS uses some vendor-specific SLD extensions for it, so it's not easy to handle. Here are some links to the project on GitHub; GeoStyler should be quite easy to find, and the same goes for the CLI tool. Okay, thank you.
The GeoStyler mapfile parser provides automatic translation capabilities of style information from mapfile layers into other formats like SLD or QGIS style. This enables, for example, transferring the styling of a MapServer project into a QGIS project. There exists a vast number of definitions and formats to encode graphical representations of spatial information, for example QGIS Style File (QML), QGIS Layer Definition File (QLR) or Styled Layer Descriptor (SLD), among others. GeoStyler offers an intermediary format that facilitates automatic style translation between various style formats. In the present context, the GeoStyler project was extended with the capability to parse styles from MapServer mapfiles. The GeoStyler mapfile parser was developed in 2020 by Camptocamp as a case study for the Swiss Federal Office of Topography (swisstopo). As of now it is possible to read styles from mapfiles and translate them into other formats. This presentation will give a quick introduction to the GeoStyler framework before going into more detail about the current state of the mapfile parser, including lessons learned, a live demo, and future prospects.
10.5446/57203 (DOI)
And Grega's already got his slides ready to go for us, thank you very much for that. Grega is the CEO and co-founder of Sinergise, a geospatial company from Slovenia that you may all know for Sentinel Hub and the EO Browser. Several years ago they recognized the potential of open EO data but hit a wall trying to use the existing technologies to work with these large datasets. Fast forward a few years, and Sentinel Hub is now processing more than half a billion requests every month, powering thousands of applications and machine learning workflows worldwide and providing seamless access to Planet, Sentinel, Landsat, and many other satellite missions. So today Grega is going to talk to us about the Global Earth Monitor. And Grega, I hand over to you from here. Thank you, Margaret. I'm sure you can see my slides and hear me. Yes, excellent. Okay, thank you very much. So hi, everyone. As mentioned, I'm Grega and I'll be talking about the Global Earth Monitor, which is a Horizon 2020 project that started a bit less than a year ago, and in which we are trying to find methods and tools that allow continuous monitoring of really, really large areas. We started a bit less than a year ago, but we didn't start from scratch. As Margaret mentioned, we had already developed Sentinel Hub previously, actually also as part of similar projects. It simply allows seamless access to satellite data so it can be integrated into any kind of machine learning workflow or web GIS application and so on, and it produces more or less any kind of analysis-ready data that people need for their processes. The data accessible through Sentinel Hub includes more or less all the relevant open collections, Sentinel, Landsat, MODIS and the like, as well as the best-known commercial providers. In addition to Sentinel Hub there is also the EO Browser, which allows anyone to go to their favourite area in the world, check for the latest available imagery, go back in time a few months or maybe a few decades, do some level of processing to extract the information they are interested in, and then simply observe how our planet is changing through time. There are quite a few people doing that, observing the planet and finding quite a lot of things. Unfortunately, many of these are bad things like wildfires and hurricanes, but every now and then there are also fun or cool things, like finding penguins in Antarctica via their poop, or the person who used the EO Browser to find a missing hiker: the hiker had sent a photo of his surroundings, and this person was able to geolocate him and send the rescue team there, possibly saving his life. Now, we made access to the data available, and that's fine, but there are simply not enough people to look at all this data, which is why we started to work on automatically extracting information from satellite images, in order to do it faster and automatically. But we soon realized that this is much more complicated than the theory tells us. So we eventually approached it in a systematic manner, first working on an open source Python stack called eo-learn, which bridges the gap between the well-known machine learning toolkits such as TensorFlow, MXNet and the like and the complexity of multi-spectral and multi-temporal satellite images that don't fit into these tools by themselves.
And people can come and simply configure all the steps needed to prepare their data so that it fits into these technologies, and basically execute the whole process there. We developed cloud detection, which turned out to be good enough that it is now also available among the Google Earth Engine layers. And overall, the results of our work, this was yet another project in the past, were very well received by the community, who downloaded a ton of our software and read all the blog posts and so on. Now, that was the past, so when we started the Global Earth Monitor we were thinking about what the next step is. Basically, what we want to do is create automatic monitoring that makes it possible to monitor the whole Earth on an ongoing basis, daily, ideally. Obviously this needs to be reusable and it needs to be cost-efficient, which I think is the most important part: you can already process all the data in the world today, the data is there, the tools are there, the cloud infrastructure lets you spin up thousands of virtual machines. So it's not that difficult to do, but it's expensive, so expensive that it typically costs more than the value it produces. So we really want to focus on methods that allow us to do this in an ongoing, systematic manner without costing more than it's worth. And as we believe in the importance of sharing our work with the community, so that we all learn and grow, we will also publish most of the work under an open source license, probably as a kind of next version or evolution of eo-learn. What are we working on? First of all, we are bringing quite some new data into the mix, most importantly the weather and climate data. We were always interested in how adding weather data to the models would help improve them; the weather is obviously a very important driver of changes in the world, and this should be obvious, yet there are not that many models which do that at the moment. Then we are bringing in the best-known machine learning methodologies, simply to have all the tools that one would need in one place, to build whatever model one wants. And we are not doing this alone: we have partners like meteoblue, who are well known for weather data, the experts in machine learning who are covering that part, TomTom, likewise, for mapping, and then the European Union Satellite Centre, which acts as a very demanding client who wants to use this data for a concrete purpose. And this is an important point: we don't want to just build some theoretical model; we want to demonstrate that this can be used in real-life scenarios. The use case I like most is the conflict pre-warning map, which is managed by SatCen, where they try to model migration patterns using the weather and the satellite data. It's a super complicated thing, but if it works well, it will contribute to the security of everyone in the world. So what have we got so far? It's a bit less than a year since we started. We basically immediately went into scaling up the things that we already had in place: we took the land cover model, which was simple and well documented and which everyone loved, and we ran it on larger, more complicated areas.
And while doing that, we noticed that there are some chunks of the process which are simply very slow, very costly, and tedious to maintain, specifically the pre-processing of the data to really fit the models. This is why we developed batch processing, which is now part of the Sentinel Hub services and which was basically developed to support machine learning. What one can do is configure the algorithm needed to interpolate and harmonize the data in order to fit the modelling, then simply select an area of interest, which can be a whole region or even the whole world, and the time period, and then run it. What the batch processing does is split the area into small chunks, like 10 by 10 kilometres, and then simply run this code over all of these chunks, produce the features, and output them to an object store so that they can be used further on in the process. As soon as we had this, we realized how much it simplified the procedure, because the part that previously ran in an ongoing manner, sending a request, checking whether things are okay, storing the results and so on, is now basically run with a single command for the entire process, and you can immediately start working on the training and execution of the models. When we had this tool in place, we wanted to test it at a decent scale, so we decided to process 18 months of Sentinel-2 data globally in order to produce 10-daily features of all the bands. And we stumbled upon some challenges, as you can see here: these red areas are where things are failing. We were looking into why these things were failing. It didn't cause too much of a problem, as we were able to simply rerun it a couple of times and got through, but the process didn't go as smoothly as expected. When we dug in, we noticed that it was Amazon, the almighty Amazon S3, that was throttling us and telling us to slow down. That said, when we investigated further, we realized that we were hitting 400,000 requests per second to the S3 bucket, so maybe we shouldn't be too mad at Amazon, because it is quite an impressive feat to be able to support that level of requests; these 400,000 were the successful ones, and we were asking for even more. So this is the result of this processing: a 120-metre, 10-daily, harmonized, as-cloudless-as-possible dataset. This is just a visualization, but the data are of course available as reflectance, so anyone can come and use them. The dataset is available under an open CC-BY license on AWS and CREODIAS, accessible over the internet, and people liked it quite a bit. It's published as Cloud-Optimized GeoTIFFs, so we did everything possible to make it as easy as possible to use. Now, the question one might ask is: why did we do a 120-metre mosaic, why not a 10-metre one? One reason is obvious: the cost; it's cheaper to process at 120 metres than at 10 metres. But this goes the other way as well: if you have a lower-resolution dataset, you can then run your models much more efficiently, like orders of magnitude more efficiently.
And our idea was: let's try to develop models which work at low resolution, so that they can be run really efficiently on a daily basis, maybe even with Sentinel-3 or MODIS data, which are daily as well, then identify the relevant areas of interest, so that the program can say, ah, this is something I'm interested in, and then dive deeper using higher-resolution data, maybe even going to VHR. And if VHR is not available, maybe even task the satellite to get the most recent data. That's our idea, because this is how we believe it could really work on a global scale on an ongoing basis: you reduce the cost by orders of magnitude. Now, as soon as we published our thoughts, there were people on Twitter looking at this pessimistically, saying they didn't think it would work; you always have these people. And then there was a heated debate, as Twitter usually has, with lots of comments and so on, though it was a positive one. But guess what: this person was right. It is really, really challenging to work at 120 metres, because there are so many landscape features packed into a single 120-metre pixel, and there is no best practice for handling this efficiently. So we wanted to explore it further. To do that, we took label data that was as accurate as possible, labels at sub-metre resolution, and then tried to simulate what happens. The first thing we tried was simply gridding this data onto a 120-metre grid and taking a majority vote: the class that is most represented in the cell represents the whole pixel. At a high level everything looks fine, but when you zoom in you immediately see the problems with this approach: land classes that occur often will be fine, but those that are under-represented will tend to disappear. Here you see that the water is disappearing, the shrubs are disappearing, and the forests and the grasslands are growing. So it's a challenge. If you look at it statistically and compare the original data with the majority-voted data, the difference is again not that significant overall, but if you look at the under-represented classes, like artificial surface and shrubland, you see that a major chunk of them disappears. So this really is an issue. Then we said, okay, let's try not to confuse the machine learning model and use only the pure pixels, those fully covered by a single land class, train the model on those, and hope the results are good. But if you do that, you end up with quite an empty map: this is more or less what remains, just forest, some urban land, and a few chunks here and there; most of the features actually disappear. So it's really challenging. If you look at this a bit more systematically, you see that if we require 100% purity, more or less only the forest remains; the lower we set the threshold, the more of the other classes come in as well, but then you obviously have more and more mixed pixels. Another way to look at it: take the artificial surface class shown on the left. If we show every pixel that contains at least 1% artificial surface, there is a ton of it.
As soon as you go to 50%, the middle one, you are already losing the majority of it, and if you go to 100%, there is practically nothing remaining. So that really doesn't work well for the small classes. So what we did is use the majority-class rasterization and run it through our model for Slovenia, which is what we typically use to compare things, and we got 80% accuracy, which is not bad, right? But obviously this is the accuracy over the whole area; if you look at the specific land classes, the accuracy is way lower. Here are some examples of how this looks: on the left side the prediction, next to it the ground truth, again rasterized to the same resolution, and an image of the area to give an idea of what we are working on. So it's really a challenge. If you look at some other examples, you see that things are working, it's not useless, but it's difficult. Then another thing we wanted to try is how a model that you set up for some area and time translates to other areas and other time periods, because when you are doing something on a global scale you will never have training data everywhere, so you will have to rely on older data; you will always have to depend on data from the past. On the left side we have the model trained on 2019 and run on 2019, and next to it the same model trained on 2019 and run on 2020, and you see there is not much of a difference: the model worked pretty well across at least a one-year period, and even compared with the ground truth it's quite okay. Then we did a spatial transfer: we trained the model in Slovenia and used it in France to see how it works there. Again, at a high level things work very well; even when you zoom in you see features where you really recognize that, yes, that's how it should be. The best thing would be to end the experiment here and say, that's it. But if you try harder and look further, you will see that it's not as good as it seems; it goes from bad to very bad. Here you see that most of the arable land is classified as artificial surface, and the reason, we believe, is that there are features in France that simply don't exist in Slovenia at that scale: on the left side you have the river sand banks, of which we don't have many in Slovenia, and we don't have such large arable areas either. This apparently confused the model. So then we tried to improve the model by retraining it with some local data. We got some data from France, a small sample compared to what we had for Slovenia, and improved our model with it. On the left side is the original model trained only in Slovenia, and on the right side the one retrained with the France data, and you can see that the model improved significantly: suddenly you see the river, and it's not as noisy as before. So things really are picking up, and we got quite decent results in this respect. Another example: on the left without retraining, on the right with retraining, compared against the model run with the large France labels at 20-metre resolution, and you see that these two are quite similar, at least at this scale. So things work in general, and it's definitely promising, but it is still challenging. So that person on Twitter was right, of course, when he said it would not work as simply as we expected.
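As an aside for readers who want to experiment with the majority-vote regridding described above, here is a minimal NumPy sketch of the idea: aggregate a categorical label raster by an integer factor and keep the most frequent class per block. It is a generic illustration under the assumption of non-negative integer class labels, not the project's actual code.

    import numpy as np

    def majority_downsample(labels: np.ndarray, factor: int) -> np.ndarray:
        """Aggregate a categorical raster by keeping the most frequent class
        in each (factor x factor) block, e.g. fine labels -> 120 m grid."""
        h, w = labels.shape
        h_crop, w_crop = h - h % factor, w - w % factor      # drop incomplete edge blocks
        blocks = (labels[:h_crop, :w_crop]
                  .reshape(h_crop // factor, factor, w_crop // factor, factor)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, factor * factor))
        majority = np.array([np.bincount(block).argmax() for block in blocks])
        return majority.reshape(h_crop // factor, w_crop // factor)

    # Toy example: a 4x4 label image aggregated by a factor of 2.
    toy = np.array([[1, 1, 2, 2],
                    [1, 3, 2, 2],
                    [0, 0, 4, 4],
                    [0, 0, 4, 5]])
    print(majority_downsample(toy, 2))   # [[1 2]
                                         #  [0 4]]

A purity threshold, like the 50% or 100% cut-offs discussed in the talk, could then be derived from np.bincount(block).max() / block.size before deciding whether to keep a pixel for training.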
When he said that it will not work as simple as we expected. So we try to do a similar thing with detecting of urban areas as well. Detecting buildings with Sentinel-2, which as you can imagine, it's quite difficult to do. Then going down to the spot with 1.5 meter, a sharper resolution and the results are way better and obviously with Pladas even better. So again, trying to do these multi-scale models, that's something that we are working on. So then we put a lot of effort into monitoring of agriculture. For a simple reason that agriculture in Europe is super important and you have a strong common agriculture policy. So there is a lot of control involved with it. And we developed generic markers, which kind of identify what kind of agriculture activity is happening so that then you can translate this in basically monitoring of a specific plot. So we developed the homogeneity marker, which tells whether a field is just one crop or several of them. Bear soil to detect flowing and harvesting, mowing events. We did field delineation so that we would be able to get the field boundaries when there are none or where there are bad ones. And one thing that we noticed is that as good as your models are, as good as your results are, they will be useless for the actual use case if you don't integrate this properly in a business process. So we had to develop the application, which allowed in this case the governmental staff to check the results, to validate them, and then to basically follow up on that. And similarly for building detection, we did another tool, this one is for Azerbaijan, where the governmental officials, they go through newly identified buildings and simply validate whether they look okay in terms of the data. And then they check them with the records to see whether they have permits and so on. And this then adds the value. So just the data, just the processing, yeah, it's nice, but it doesn't add value. But when you integrate this in the business process, it works well. So I conclude here. I invite you to go to our website or follow us on Twitter and looking forward for questions. Thank you very much. Thank you so much, Krega. This is amazing, getting into the nitty-gritty of this balancing act between the costs, both in terms of data, and actual financial costs and that granularity, that level of high resolution detail. Quite a lot of things you've considered there. We've got a question from the audience for you. Have you thought about simply separating out the land use classifications to eight or however many times the number of rasters per land use classification as a single float weight raster per land class? And as an addendum to that, eight times the data that you still retain, there are 1,444 times processing speed. I mean, we were looking into that as well and we haven't yet finished. I mean, I don't have any specific results to share because we didn't get that far yet. We would still at the end want to have one model, right? So that's a challenging thing. So that the model would recognize these mixed classes as well. But it's definitely a way to go. And when we have the results, we'll for sure share them on our blog post. Fantastic. Thank you. See if there's any other questions for the audience. We are in that magical, beautiful spot of running ahead of schedule. I'm going to quickly see if Joseph is actually still with us. I think he stepped out of the back end. He's now watching the conference from the venue list. 
Just so that we've realigned with the public schedule so that anybody that's wanting to join the exactly Nana's talk or hopefully Gregor, nobody's been joining us in this, the introduction of your talk to anybody that joined and missed your introduction to your topic. I do understand that these are being recorded and shared later. So you will be able to catch that if you missed the intro. But then you need to look for the whole session. I mean, look, that's the it's an event, so they shouldn't go for the whole session. Yes. But just check with the organizers for running. We're actually running 15 minutes ahead of schedule. Everybody's been beautifully organized. They were clearly giving us lots of buffer to have technical issues and it's been going very smoothly. So away for sexy. If there's any other questions from the audience, always a shame that we can't see each other in person and just wave. You're getting lots of thanks from the audience for your talk, Gregor. That was clearly really enjoyable. When I'm sure there's a lot of people that are using it. In fact, is that a question for you? Did you also aggregate Sentinel one imagery to 120 meters like Sentinel two? And is this available as a product in Sentinel hub? So it is available as a product in Sentinel hub. We haven't yet produced this kind of a global collection. So I mean, if somebody wants to use it, it's simply available to an API. But yeah, I mean, for the moment, we were exploring quite a bit the relation of SAR data and optical data. And I have to say that our team didn't yet find this silver bullet that will say, here the radar really helps a lot. So we are focusing very much to the optical simply because it has so many more information and because we haven't yet touched the areas which have a lot of clouds probably. So we always were able to get a decent amount of optical data. But yeah, I mean, whoever wants to use Sentinel one data, they can get gamma not, radiomaterally corrected speckle filter data to an API with a one simple call. Great. Thank you. I'm just checking to see if there's any other questions that have come in. We've got two channels for me to double check with here. I think that I'll take this moment to give everybody a moment to get up, stretch your legs, refresh your cups of coffee or tea or water. Use the restroom if you need to. And I will get Nana ready on stage with her slides and we will dive into the second half of our group on Earth observation session in a moment. I will put us on hold and I will see you at exactly half past. See you all again shortly.
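To make the regridding step described in this talk concrete: the sub-meter label data are gridded to a coarse cell (for example 120 m), each cell takes the class that wins a majority vote, and the fraction of the winning class ("purity") can be recorded to build the pure-pixel training sets discussed above. Below is a minimal sketch of that step; the array layout, class codes and block size are illustrative assumptions, not code from the Global Earth Monitor project.

// Minimal sketch: downsample a high-resolution label raster to a coarse grid
// by majority vote, and record the "purity" (fraction of the winning class).
// Assumes labels are small integer class codes stored row-major in a typed array.

interface CoarseCell {
  majorityClass: number; // class code that won the vote
  purity: number;        // fraction of fine pixels belonging to that class (0..1)
}

function majorityDownsample(
  labels: Uint8Array, // fine-resolution class codes, row-major
  width: number,      // fine raster width in pixels
  height: number,     // fine raster height in pixels
  block: number       // e.g. 120 m cells over 1 m labels gives block = 120
): CoarseCell[][] {
  const outH = Math.floor(height / block);
  const outW = Math.floor(width / block);
  const out: CoarseCell[][] = [];

  for (let cy = 0; cy < outH; cy++) {
    const row: CoarseCell[] = [];
    for (let cx = 0; cx < outW; cx++) {
      const counts = new Map<number, number>();
      // Count class occurrences inside this coarse cell.
      for (let y = cy * block; y < (cy + 1) * block; y++) {
        for (let x = cx * block; x < (cx + 1) * block; x++) {
          const cls = labels[y * width + x];
          counts.set(cls, (counts.get(cls) ?? 0) + 1);
        }
      }
      // Pick the majority class; under-represented classes (water, shrubland)
      // simply disappear here, which is exactly the artefact discussed in the talk.
      let majorityClass = -1;
      let best = -1;
      for (const [cls, n] of counts) {
        if (n > best) { best = n; majorityClass = cls; }
      }
      row.push({ majorityClass, purity: best / (block * block) });
    }
    out.push(row);
  }
  return out;
}

// Filtering on purity (for example keeping only cells with purity >= 0.5)
// reproduces the trade-off shown in the talk: the higher the threshold,
// the emptier the map becomes for the small classes.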
With the unprecedented volume of EO data, the possibilities for its use are endless. The cloud infrastructure and various tools make it easy to visualise the data, analyse it, and even run some machine learning models to determine land cover or the like. What is needed, however, is the ability to run these processes on a regular and ongoing basis so that we can make decisions based on what we learn, in parallel as events happen. With Sentinel Hub and especially our EO browser, we have helped raise awareness about the use of Earth Observation. With Global Earth Monitor, we want to go a step further - make it possible (and sustainable in terms of cost!) to monitor the planet on a weekly or even daily basis and extract relevant information from the data. We will present the development of the processes and open source tools that will allow any data scientist to create their monitoring stream.
10.5446/57204 (DOI)
Okay, thank you. Welcome again. This is going to be the last talk on this session in Puerto Madri. Now we have Martí, Martí Periquei from Geomático. He's a GIS developer and analyst. In 2015 he joined Geomático to work on web development and training in open source special technologies. He's going to talk about a different corona, this time the one you know about, unfortunately. So very interesting to learn more about this going viral in the pandemic. You want to share your screen? Okay. Yes. Thank you, Jorge. So my name is Martí Periquei from Geomático. Our sponsors are funds, our city also are in the room also. I'll do the talk, but he might answer some questions later. Okay, so thank you all for being here. Geomático is a small Spanish company specialized in analysis and publication of geographic information. And we usually make technical talks. So this will be totally a new experience for us. It's more a light speech about a beautiful adventure or a tale of success in the pandemic. So perfect for the last beach before lunch. We have always worked with a traditional GIS stack, our players, geo server, post GIS, SDIs, now of course vector tiles also. And we've developed solutions for many years in many fields. Here I just highlighted mobility and environmental, but there are many more agriculture, public transportation, harbors and satellite analysis also. And to finish for a fast presentation of who we are, at least of some of our clients, a majority from Spain, but some from Europe or Australia. And what do we do for them? We call it tailored suits, like we offer custom solutions to complex geographical problems that don't fit into a standard product. That's what we generally do. Okay, so let's start with the tale. Let's start with the adventure. Let's make a flashback to April 2020. I think you all remember. In Spain, it was a complete lockdown. There were times you were only allowed to go to buy food, to go outside to buy food for emergencies. Well, you know about that. And we all look like this in our homes, except that we always look like this in Geomatica, because we work remotely since long time ago. But those were times of fear and of stress, but also of great solidarity and good intention, like everybody wanted to help others. So what can we do to help? There were already many projects, some of them very popular. I remember those charts for the countries, for dead people, infected people. And Italy's first, now Spain's first, a bit tragic, but they were useful. Or ambitious projects like a friend of mine did a project to detect COVID-19 and the excrement of people from the water collected in the sewers to try to prevent outbreaks. So what can we do from Geomatica? Let's think about something. Spain had a very painful first wave. Many people were not allowed even to make sport, and kids were getting crazy. And parents with kids were getting crazy. So after almost two months of lockdown, the government decided that they were allowed outside with adults, but only one kilometer away from their homes. So big happiness for everyone, but of course, how far is one kilometer? The average person can calculate that. That's a GIS question, a very simple one, but a GIS question. So many GIS friends were already calculating their kilometer. They took a QG, it's a buffer of one kilometer, and they printed their map, whatever. But what can we do for people, not GIS experts? 
Well, in Geomatica, the same day the new law was announced, Micho Garcia had the idea to build a website and to call it simply, how far is one kilometer to solve this problem? Okay, so how to build it? The challenge is to show a one kilometer circle, shouldn't be a big deal. Usually we develop advanced solutions, so we thought about some complicated applications, but in the end it was clear that we needed something very simple, and that we can code very fast. Like we did the simplest approach possible, because everybody must use it. So go find your home and click. And that's all, and you get your one kilometer radius, and the kids will be able to travel inside the circle, so very simple to use. We coded that in two hours, so that it could be deployed the same afternoon, the president announced the law. It's still live at this onekilometer.geomatica.es, but I don't know, maybe if you want to calculate one kilometer for some reason, you can use it, but it's not used anymore, and we hope we won't use it anymore for lockdowns. Okay, we wanted to be easy to use, beautiful, deployed very fast. We end the tool that everybody, everyone needed in the right time. So as a technology, we use Mapbox GL as a JavaScript library and this is one line, and one line of turf to make the magic, so just a circle of one kilometer, that's all. An important thing, we use Mapbox base maps, because they had a free tire of 50,000 map loads, so this would have to be largely enough for what we expected. And there's the GitHub repository down with a mid license, if you're interested. Okay, let's start with the spreading. We finish the application, we deploy it in GitHub pages, we make one tweet, we don't have, we have I think a thousand followers, so not that much. And we put a dozen WhatsApp messages to friends. We hope that our friends would put this, our link into the WhatsApp parents groups from schools, you know, and then it would be possible that they would be retweeted. So we went to bed, it's eight o'clock at night, and we went to bed, and in the morning at 10 a.m. we had 10,000 visitors. So from 10 messages to 10,000 visitors. We were happy, but a bit shocked. We went to analytics to see the sources, and it's what we thought, but much faster. I mean, you can see 80% is direct, so from WhatsApp, it clearly went viral on WhatsApp. These stats are not the stats of the first night, these are the stats of the whole two weekends that it was, the law was enforced, but at that moment it was like 95% WhatsApp, so it was almost everything WhatsApp and some social media. So parents sending and we're sending the link. So we were very happy. But three hours later, we have 20,000 visitors per hour. So the famous exponential growth, keep calm. It was funny because often newspapers then were trying to explain exponential growth to citizens, if you remember, stay home because if you spread the virus to 10 people, blah, blah, blah, we went viral in the pandemic. So okay, so what to do, what to do, back to problems. We're running out of base maps. You remember we have 50,000, we're almost done, we need something to do. And we run out of the free time, so let's pay, pay Mapbox. We took $400, we thought, okay, it's also some publicity for GeoMatica, so we can spend $400. They immediately disappeared like flushed in the toilet and we were running out of cash completely. It was insane, we couldn't do that. So keep calm and think of alternatives. Do it yourself, base maps. We don't have time. We don't have time. 
Put some ads in to pay Mapbox? Well, the idea was a non-profit application, so that was not possible either. Another alternative: Mapbox had posted a few days before that they were supporting COVID-related applications. So we decided to email Eric Gunderson, the CEO of Mapbox, to ask for an unlimited Mapbox account. But of course, when it's noon in Spain, it's 5 a.m. in America, so Eric Gunderson is sleeping. And we were getting thousands and thousands of simultaneous requests. We needed to do something. Keep calm. What can we do? What can we do? Finally, we got the solution. The open source community pointed us in the right direction: a free and unlimited tile service from the Catalan Cartographic and Geological Institute. So thank you to the Institut Cartogràfic, to Raf, to Geoinquiets, and all the awesome Spanish FOSS4G community. They helped us a lot. And okay, it was on. So we could relax. Well, relax a little bit, because the media were calling. We were on almost all Spanish media at some point. Some 10% of the traffic came from referrals, as we've seen before, the news links. And some of the media were very nice; they put our name in big letters. Some of them were not so nice; we'll see that later on. But suddenly we were in trouble again. The traffic continued to grow. It continued and continued to grow. We had four million visitors the first weekend. So the servers were burning. It was growing and growing and growing. So we decided that we must cache the tiles; it was the only solution. We cached the cartographic tiles with Amazon CloudFront, which is very, very scalable and affordable, which is also what we needed. So we were able to withstand all the traffic and let the servers relax. And it worked. It worked very well. Here you can see the bytes transferred: every hour it's more than 150 gigabytes, which is maybe the monthly maximum of a normal hosting plan. So it worked well. And two days later, Mapbox agreed to a super free account. So finally we decided to put Mapbox back. So thank you also to Mapbox. And it worked. It worked well. But we also hit some hard limits on Mapbox servers: too many concurrent geocodings, even for Mapbox. They also had to change some limits, some parameters, for these massive concurrent users. We had eight million users in two weekends. You can see the first weekend and then the second weekend, even more. Okay, that's it. The adventure is over. Some final considerations, some anecdotes maybe. I will start by complaining a little bit. For example, La Sexta, which is one of the biggest TV channels in Spain, put an iframe in their web page and embedded our application. I don't know, maybe it's legal, I don't know, but I didn't like that. Or one time, a girl contacted us: I have a blog called Pienso Luego Actúo, which means I think, therefore I act; I would be very happy if I could put your application in my web and my blog. And we said, of course you can. And then they showed this on El Hormiguero, which is the main late show in Spain, but they presented it with a sponsor, like Yoigo sponsors this section. So we didn't put sponsors on our app, so as not to make a profit; we didn't get any money, but El Hormiguero got sponsored for these presentations. I don't know, maybe it's legal as well, but I found it ugly. Okay. Complaints from people. We got hundreds of tweets and mails, and we answered almost all of them. And like always, it's mainly complaints. A lot of people were concerned about privacy.
I mean, if it's a free application, they thought if it's free, they are using our data for whatever reasons. And so we received a lot of mails complaining of and asking for privacy policy, privacy settings, blah, blah, blah. And other funny people were the ones who instead of clicking, went to find our mail and emailed us asking, sending their street and their number, like I live in the street this and with the number that, and please calculate my kilometer. Okay. Compliments. Compliments to end the talk. There were many, many compliments posted in virtual Spanish media, press, TV, in social media, many public organizations from the smallest local police, for instance, or to virtually all political parties also, like from left to right. Even some famous people like posted about us like Alejandro Shant. Sorry, I don't have the image, but believe me. And also some controversial people also posted about us like the president of Venezuela, Nicolás Maduro. And why would he do that? It's because they used it in Venezuela. The one kilometer became popular in Spain, but it was a world map, so it could be used anywhere. And when Venezuela made an identical law one week later, one kilometer, they thought they could use our app as the official way to calculate the kilometer. So they promoted us in Venezuela. And it was a complete surprise. And let's finish. If we have time, I think we have, it's very short video. With probably the most awkward moment of all, when the vice president of Venezuela makes a tutorial on how to use our app on prime time on TV. Okay, enough. I know it's very weird to finish like that, but I feel like after 15 years of politicians ignoring amazing GIS applications we've done, it's a fair compensation that the vice president makes a presentation of our simplest web map ever and on prime time on TV. So as a conclusion, also I would say for the whole experience, I would say that we should try to make useful things and not only complex things. So thank you. Yeah, thank you very much, Martín. That was a very nice presentation. Thank you. Yeah, as you were mentioning, we don't have questions for the audience, so I can maybe digress a little bit until they come. It's funny that, yeah, the simplest tool can be the most useful. And that's a very, it's a recurring lesson that we learn that we try to build very complex user interfaces and then nobody uses them because they are not for the usual people. But yeah, probably you didn't know what you were doing when you were creating a tool that does these very deep zoom levels because that's the hardest thing to serve as probably the people from Mathalium in the room can also at best. Also maybe you can't disallow next time iframes or at least from very specific referrals. Maybe you can redirect them to some landing page or something like that. That's something that they do, media do this all the time because they are experts in serving large scale but then they are doing this kind of, I understand that this was probably because I looked down, everyone was busy, everyone was super stressed and they did just what they so fit. We didn't have time to make these things Jorge, we were just like… No, I'm talking about those guys, not you. Creating the iframe instead of maybe calling you and giving you some money to maintain the site instead of killing you with requests. Oh no, nobody offers money. Okay, I don't know if there are no more questions. Thank you very much Martí and Oscar and the rest of the Geomatical people. 
I think with this we can close this session. Thank you everyone who attended and enjoy the rest of the conference and see you around. Bye bye.
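For reference, here is a minimal sketch of the core of the application described above: one Turf.js call to build a 1 km circle around the clicked point and a Mapbox GL layer to show it. The container id, style URL, access token, and map center are placeholders, not the values used by Geomatico, so treat this as an illustration of the approach rather than the project's actual source.

import mapboxgl from 'mapbox-gl';
import * as turf from '@turf/turf';

mapboxgl.accessToken = 'YOUR_MAPBOX_TOKEN'; // placeholder

const map = new mapboxgl.Map({
  container: 'map',                           // id of a <div> in the page
  style: 'mapbox://styles/mapbox/streets-v11',
  center: [2.17, 41.39],                      // arbitrary starting point
  zoom: 13,
});

map.on('click', (e) => {
  // The "one line of turf to make the magic": a 1 km circle around the click.
  const circle = turf.circle([e.lngLat.lng, e.lngLat.lat], 1, {
    steps: 64,
    units: 'kilometers',
  });

  const source = map.getSource('one-km') as mapboxgl.GeoJSONSource | undefined;
  if (source) {
    source.setData(circle);
  } else {
    map.addSource('one-km', { type: 'geojson', data: circle });
    map.addLayer({
      id: 'one-km-fill',
      type: 'fill',
      source: 'one-km',
      paint: { 'fill-color': '#088', 'fill-opacity': 0.3 },
    });
    map.addLayer({
      id: 'one-km-outline',
      type: 'line',
      source: 'one-km',
      paint: { 'line-color': '#088', 'line-width': 2 },
    });
  }
});

Because everything happens in the browser against a third-party tile service, the site stays fully static, which is exactly why it could absorb millions of visitors without any backend scaling.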
How to build a map website in one afternoon? How to get 1 million visits in less than 24 hours? How to technically survive to an unexpectedly high traffic without spending a huge amount of money? A story about a small GIS company that developed a website that helped millions in a very difficult moment. A tale of collaboration, generosity, open source software and luck. The ‘Keep it simple’ motto to the highest extent. Starring Geomatico, Mapbox, COVID-19, Github, Amazon, Alejandro Sanz and Nicolás Maduro. On March the 23rd, after five weeks of COVID-19 lockdown, president Sanchez announced that Spain will begin easing restrictions. In this first attempt at loosening measures across the country, children under the age of 14 would be allowed to go outside of their homes for one hour a day, accompanied by an adult. The measure limits travel to no further than 1 kilometer from home. But just how far is 1 kilometer? To help navigate these new rules, Geomatico, a Spanish GIS company, developed a non-profit web application using Open Source tools and donated services from Mapbox that allows adults and children to visualize a 1 kilometer radius around their home. In ten days, 1km.geomatico.es became a reference for individuals and families throughout Spain looking to safely leave their house for the first time in over a month. Thanks in part to Mapbox’s support, the team has been able to deliver this mapping service for free to over 7 million users in multiple languages.
10.5446/57205 (DOI)
We are moving to our next presenter. I'll add Vicky. Thank you, Felix. Vicky is preparing his presentation. I think we will have a presentation directly from Mexico, right? Right. So I'm trying to share my screen. Share. Share screen. We still have two minutes, Vicky. And can you see the screen? Not yet. Share. Share screen. Oh. Mm-hmm. Mm-hmm. I'm going to share. Mm-hmm. Mm-hmm. I have it here. So we can see your screen. Not perfect. Yeah, if you can hide the browser, maybe do a full screen. How do I do a full screen? No. F11? F11? Mm-hmm. F11. Yeah, we still see the... Yeah. F10, maybe? No, in my browser, F11 switched to full screen. Yeah. F11, F11. There. Well, at least you don't see the bottom things. Yes. But we still... Yeah, I can see that you can still see this... this thing. No. Yeah, but don't worry. It's okay. Are you showing some code or something like that? No. Oh, it's okay right now. Okay, I can start? In 15 seconds, yes, you can. So let's start sharp on the hour. Oh, now it's full screen. Yes. It's full screen. Okay. All right. So, Vicky, the stage is yours. Thank you very much. Buenos días, buenas tardes, buenas noches, good morning, good afternoon, good evening. I'm going to talk about graph algorithms in the database. Well, at least that is the title. And more with PG Rowdy. This is not going to be too technical. So, the first thing I want to mention is that normally in phosphor Gs, I gather signatures from the participants. And this is a great memory that I have of Malena Lipman who is her own handwriting the last time I saw her on 2019 in phosphor G Argentina. And I'm dedicating this presentation to Malena on her memory. So, who am I? I'm an economist. So, I'm doing PG Rowdy development. I'm from Mexico and I am an OSG of fun and participants. And that is my email where you can contact me. And I will be giving this presentation about PG Rowdy. So, what are we going to cover today is what is PG Rowdy? What is Rowdy? How we develop PG Rowdy? We're going to mention some of the PG Rowdy products. We're going to talk about graphs. And we're going to talk about the student contributions in the last two years. So, going into topics, what is PG Rowdy? Well, PG Rowdy is an OSGEO community project. And it's open source. It's a library that is, like, it requires post-gis enabled database on Postgres SQL database. And it serves to route, that's why the name, vehicles or pedestrians in, for example, a city. So, in general, this is what is PG Rowdy. And it's always your community, open source, library for an enabled database or geographical enabled database in Postgres for doing routing. But what is routing? Routing, well, it's basically these questions. I am here and I want to go there. And we're going to go using the shortest distance. And we know that from elementary school that the shortest distance is in a straight line. But there we go. Once we get our route, and oh, surprise, the elephant doesn't have wings. And it's going above buildings, above crossing streets that shouldn't be crossed on those positions. And, well, the straight line didn't work quite well when we're putting the routing in context. So we need to develop PG Rowdy to solve that problem when we have things in context. And we do it by working on the theory. The theory is very important for us. And in this presentation, I'm going to present a theory that it's out of the box to demonstrate how we develop PG Rowdy. So, in secondary school, we were taught that gravity pulls objects down to the earth. 
And probably in high school, we were taught about tension. So if an object gets pulled down and it's tied up to a string, well, the string will get tense. And we're going to use this theory to solve a routing, the problem that we had before. So, PG Rowdy, we do it by its test-driven development. So what that means is that we create our tests and we define, this is my testing data, this is the expected result, and then we develop the algorithm. So let's start with the testing data. So suppose that we have this situation. It's not drawn to scale, as you can see, because like the edge 2, it's very long compared with the edge 14, which is the, we can think about it of the length of the edges, so this drawing is not up to scale. But we want to go from node 1 and go to node 5. So that is our objective. How do we do it using the shortest path? So we manually expect a result, we try many things, and we find out that we need to traverse the blue, like blue segment, the dark segment, and the orange segment. In this case, it's black because the background is white, but in our algorithm, the segment that it's black, we're going to color it white. So let's start with implementing our algorithm. The previous graph is representing this map. Okay? The black segment is under a tunnel. You can see the tunnel. So we're going to model this map with our algorithm, which is going to consist on segments represented with the strings, the color strings, and the vertices represented with these balls. And we have our red ball on the left and the blue ball on the right. So let's do test our theory to find the shortest path from the red ball to the blue ball. And we know that our expected result is using the light blue string, the white string that it's not seen because it's on a tunnel, and the orange string. That is our expected result. So let's see the demonstration. That is our map. We remove the tunnel. And that is the graph that it's representing the map. We want to go from the red to the blue. We call the red ball, which is our source, and we get all the strings that are a tense that depart from the blue ball, which are the orange, the white, and the blue. And we get our straight line that consists on blue, white, and orange. Now I have to add there. So with this demonstration, I hope you liked it, we prove that our algorithm was correct. We obtained the expected result, and that is how we developed PG routing. Right now we have over 30,000 unit tests proving that the algorithms that we are implementing are okay. So now let's talk about PG routing products. We not only have PG routing, but we also have two projects that help people to use PG routing. The first one is OSM2 PG routing, which will take OSM data and import it to the database so that we can use PG routing with that data. And the second one is that you cannot use PG routing if you import data with OSM2po, for example. It needs to be done with OSM2PG routing if you're using OSM data. We have functions that allow users to use other kind of data, but we are an open source project, so we developed this for open source data. The next sub-project that we have is PG routing layer. It's a very simple plugin for QGIS that allows people to play around with basic PG routing functions and visually see the results. And this year we started with VRP routing thanks to Ashish Kumar, and VRP stands for Vehicle Routing Problems. Right now, PG routing has an experimental function that is for vehicle routing problems, which is pick deliver. 
And we decided that we're going to start using a different repository for those kind of routing problems. So a vehicle routing problem is, for example, to solve problems like you have a set of vehicles and you need to pick up garbage containers and you need an answer of how the vehicle is going to go from the depot where the vehicle starts with the driver, how it traverses the city, and at the end, after it's been full, how it goes to the dump to get emptied and goes back to the depot. So that could be one vehicle routing problem. You can have school buses that you need to pick up the students and take them to school, or vice versa, you get the students from the school and distribute the students along the city. So those are the kind of problems that we intend to solve on the VRP routing new extension. The requirement of this new extension is going to be with some projects that are not actually packed. So it's not going to be a package yet because the dependencies need to be existing there. And of course, PGU routing. And the summary of current PGU routing that you can obtain with packages is OSM2 PGU routing, PGU routing, and PGU routing layer. Now, the topic of the graphs, which is what the title of this presentation was about. For the graphs, PGU routing actually, it's about graphs, graphs algorithms. And you can interpret a graph like, for example, a river. And we have algorithms that make flow analysis. You can have analysis of flow of vehicles, flow of water, flow of electricity with some of the functionality that we have in PGU routing. So it's not only about routing cars or pedestrians. We can also use PGU routing to find out interactions between people, like the six steps from one person to another, how long, how that you don't take more than six steps to get a connection with somebody else in the world kind of problem. So we can use PGU routing for human connections. We can use PGU routing also for example for connectivity of computers, routers, servers. And of course, what we're used to is to use PGU routing for routing vehicles. But if you interpret the graph, you can interpret it to determine where your sour system would go, where the electricity lines would be placed. So the graphs, the interpretation of the graphs is not only for vehicles. That is the point that I want to make sure that it's gone over you. You can use it for much more many things. Now, in PGU routing, we have several classifications of the functions. We have official functions proposed and experimental. Experimental functions are the newest functions and they are written normally by the Google Summer of Code students. And once we have some feedback from the general users, we can move them to proposed. And once they are proposed in the next major version, they can go to be part of the official functions. So basically experimental, please try to use them. And soon we will release version 3.3. It's not a major release. It's a minor release. And this will contain new functionality that was done by these student contributions from Google Summer of Code of 2020 and 2021. Like Ashish, who also helped on VRP routing. He added a depth first search traversal on a graph and a sequential vertex coloring of a graph. Vinith Kumar added also a coloring function which is edge coloring, which comes from boost libraries and what this graph does instead, compared with the one that Ashish created, it colors edges. 
Prakash also contributed to pgRouting; in his case, he is working on graph dominators, using the Lengauer-Tarjan algorithm to obtain the dominator tree of a graph. He also created a bipartite check, which is a graph coloring algorithm that simply tries to color a graph with two colors, to see whether it can be split in two. Himanshu Rush created a function to test the planarity of a graph. These graphs, K5 and K3,3, are not planar graphs. Basically, streets would form a planar graph, except when you add bridges, which cross over other segments, so that wouldn't be a planar graph; a planar graph is one you can draw in a two-dimensional space without having segments crossing. And Hamwu created a transitive closure algorithm for a graph. For the future: we don't want to be an OSGeo community project anymore. That is right. But don't be sad, because we want to be an OSGeo project. So we're working hard to make this happen; hopefully we will have our application to become an OSGeo project before this year ends. And please, please start using pgRouting, start contributing to pgRouting. You can fork pgRouting on GitHub. And thanks very much for coming here. I'm going to post some links about pgRouting, the documentation, the workshop, support, and my mail, and you can find me walking around in the social gathering or looking at more presentations. Thanks very much. Thank you very much, Vicky. It was an amazing presentation. Thank you. I've just shared your presentation in the chat on Venueless, so people can review your presentation. And we have questions from the audience; please add your questions on the Venueless platform. But we already have one question, Vicky, here on the private chat. There is a question about which attributes can be used for routing. Okay, yes, you can use speed to route by time. You can use inclination, for example for pedestrian routing for hiking: going uphill is slower than going downhill, so you can make the cost depend on the slope of the road. You can make the routing depend on the importance of the road: for very wide roads you want wide pipes for the sewers, and for very narrow roads you want smaller pipes. You can make it depend on the number of people living along the road instead of the length of the road. So yes, you can use the cost, and you can compute it on the fly; that means that you don't have to preprocess everything, you can just change your cost function on the fly. Thank you, Vicky. We have another question regarding whether pedestrian routing can be used to build isochrone areas. Yes, with pgRouting, with pgr_drivingDistance, you can use this cost to make these areas. And we also have pgr_alphaShape, which really shouldn't be in pgRouting, it should be in PostGIS because it is a geometry function, but for historical reasons it's in pgRouting. But you can also use the results of driving distance with ST_ConvexHull and many other functions in PostGIS that can create this area. Okay, so it's not automated, but it can be done. Yes, it takes many steps; just like you saw in Regina's presentation, you need many steps to achieve what you really want to achieve. Okay. Maybe it could be a contribution to pgRouting to make these isochrones more automatic, right? Well, one thing that I know by experience is that if I start thinking as a user, if I start using pgRouting, which I don't like to do, then my mind goes: I want this function to work for this application, and I forget all the other applications.
So having this kind of automatic, it's not a good idea because there are other functions that can do a similar answer. And depending on your problem is the one that you want to use. So it's like, I have to be more like a general, it's a library and with libraries, you just join the pieces to build the application. It's not application focused, it's library focused. Good. So we have another question more related with metrics and comparing PGR routing with other routing solutions. Do you have any kind of benchmarks that you can share? Okay. I don't have a benchmark. I'm more focused on the developing. But PGR routing works on the database with both G's. They are kind of linked together. And once when I started doing PGR routing, it was, I was a user because you cannot code if you don't know what is it about. And we were using OSRM also. And OSRM, you needed to kind of process the OSM data and created this morning file, the afternoon file and the evening file, because the traffic is different between them. And it was terrible. I mean, you couldn't just change things, change the parameters easily to get these traveling times. It was terrible. So by having the data on the database, it makes it more easy because you can have functions that on the fly will create these costs that will be used on the PGR routing. So that is from my experience. But as always, I mean, if the data is wrong, the results will be wrong. So that's why the unit tests are so important. Because with big data, you really get lost on if it is really the shortest path, for example, if that's what you're looking. Because it's a very huge city and maybe, well, I was working with Montevideo data that time and I didn't even have an idea of that city. I didn't even know if it was correct data or if the problem was on the algorithm or on the data. So unit tests, that's why I created those unit tests and test driven just to make sure that things work. Thank you, Vicky. Thank you, Vicky. I think it was clear. I have two more questions if you can go then in just one minute. These are more specific questions. One about the PGR direct CHPP, if it is already in production. Which one? PGR underscore directive CHPP. The Chinese Starlink Postman problem. I guess that's what I mean. I need to really see their name. That is still experimental. That is still experimental. So it's there, but it's experimental. Okay. And the last question, if the graph is big, would the Postgres perform well or the graph need to fit in memory? Okay, it's like I want to route from my house to the store that is in the corner. What do I want to use as my graph data? Whole, complete American continent from Patagonia to Alaska or Mexico only or my state. I live in the state of Mexico or only my city. So that's why it's so important that you use extensively the Poch's functions and you make sure what graph you want. So bound your graph, make it small, otherwise it will take ages to load. I mean, my computer wouldn't hold whole America. So what is your... Okay, that's it. Thanks.
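To make the cost discussion in the Q&A above concrete, here is a minimal sketch of calling pgRouting from application code with a cost computed on the fly (travel time derived from length and speed) instead of a precomputed cost column. The table and column names (ways, gid, length_m, speed_kmh) are assumptions about the schema, not part of pgRouting itself; only pgr_dijkstra and its edges-SQL signature come from the library.

import { Client } from 'pg'; // node-postgres

// Shortest path by travel time, with the cost expression built on the fly
// from edge attributes rather than preprocessed into the table.
async function fastestPath(sourceVid: number, targetVid: number) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // pgr_dijkstra takes an inner SQL that must return
  // id, source, target, cost (and optionally reverse_cost).
  const edgesSql = `
    SELECT gid AS id,
           source,
           target,
           length_m / (speed_kmh / 3.6) AS cost,          -- seconds
           length_m / (speed_kmh / 3.6) AS reverse_cost
    FROM ways`;

  const { rows } = await client.query(
    `SELECT seq, node, edge, cost, agg_cost
       FROM pgr_dijkstra($1, $2::bigint, $3::bigint, directed := true)`,
    [edgesSql, sourceVid, targetVid]
  );

  await client.end();
  return rows; // ordered list of traversed nodes/edges with accumulated cost
}

Changing the cost expression in edgesSql (slope, road importance, population, and so on) is all it takes to change the routing behaviour, which is the "on the fly" point made in the answer above.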
What are the alternatives when a road is closed? You didn't find a path, because your graph is disconnected and you didn't know? pgRouting is more than finding the Shortest Path on the database. We provide graph algorithms that can solve those questions and more! pgRouting extends the PostGIS PostgreSQL geo-spatial database to provide shortest path search and other graph analysis functionality. This presentation will show the current state of the pgRouting development: * Wide range of shortest path search algorithms * Flow analysis * Graph Contraction * Graph Coloring Among other algorithms We will explain the different categories for the functions on the library: * Official * Proposed * Experimental We will talk about other products that we provide: * osm2pgrouting * pgRoutingLayer for QGIS
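Following on from the isochrone question in the talk, here is a sketch of the multi-step recipe described there: pgr_drivingDistance to collect all vertices reachable within a cost limit, then a PostGIS convex hull over those vertices. The ways and ways_vertices_pgr table and column names are assumptions (they match what osm2pgrouting typically creates); pgr_drivingDistance, ST_Collect and ST_ConvexHull are the functions actually mentioned in the answer.

// Rough isochrone: vertices reachable within a maximum cost, wrapped in a convex hull.
const isochroneSql = `
  WITH reachable AS (
    SELECT dd.node
    FROM pgr_drivingDistance(
           'SELECT gid AS id, source, target, cost_s AS cost FROM ways',
           $1::bigint,   -- start vertex id
           $2::float     -- maximum aggregated cost (e.g. seconds)
         ) AS dd
  )
  SELECT ST_ConvexHull(ST_Collect(v.the_geom)) AS iso_geom
  FROM ways_vertices_pgr v
  JOIN reachable r ON r.node = v.id;
`;

// Executed with any PostgreSQL client, for example:
//   client.query(isochroneSql, [startVertexId, 900]); // a 15-minute isochrone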
10.5446/57206 (DOI)
Okay, I think we can start already. So welcome everyone to Wednesday at FOSS4G 2021. This is the Konka Watchroom. Next we will have Derek Dohler presenting handling GeoTIFFs in client-side code with GDAL and Loam. Well, I'll let you start your talk. Okay, thanks Jose. So hi everyone, my name is Derek Dohler. I am a software engineer at Azavea. We are a professional services firm based in Philadelphia with a focus on geospatial work. And I want to talk to you today about a library I've written called Loam that uses GDAL to handle GeoTIFFs in client-side code, i.e. code that's running in a web browser. So I'm not going to go into what GDAL and GeoTIFFs are, because I assume that if you're interested in this talk, you probably know what those two things are already. If you do have questions about those two things, please put in a question and we can talk about it at the end. But so, just some background. You need GDAL or some other library that supports the GeoTIFF format, such as GeoTrellis or geotiff.js; there are other libraries to read GeoTIFFs. It's not a format that's supported by non-GIS software, so you need specialized GIS libraries or software in order to read a GeoTIFF. And so if you want to integrate GeoTIFFs and spatial data handling into your web application, you are most likely going to need to use GDAL, and historically you would need to run that on the server side. GDAL is a C++ library, so there wasn't really a way to run it on the client side; you would need to integrate it into the server-side component, the backend of your web application. But fairly recently, the WebAssembly standard has started to be supported within web browsers, and there is a set of tools called Emscripten which allows compiling C++ libraries into WebAssembly, which then allows running GDAL inside of a browser. So it's now possible to run GDAL in a browser, a client-side environment. So the next question becomes: why would you want to do that? And to answer that question, let's talk a little bit about some of the drawbacks and challenges of using GDAL on the server side, because running it on the client side is essentially going to replace server-side handling of GeoTIFFs. So let's look into that a little bit. Server-side handling of GeoTIFFs (this could also apply to other raster formats, but I'm focusing on GeoTIFFs because they're fairly common and a lot of people know how to work with them). The first drawback you have is infrastructure costs. When you're running a server, you automatically are going to be paying higher costs in order to support those computing resources, whereas you can host a completely client-side static web app for free these days in any number of places. If you want to actually do dynamic processing, you need to have servers, and so that's going to increase your costs. It's also going to, second point, increase your infrastructure complexity. You need to set up deployment for these servers, however you choose to run them; you need a way to install GDAL onto the servers; you need to hook them up to the front end of your application. So you've already increased the complexity and the maintenance costs of your web application by adding that backend component. And then there's also a challenge of user experience.
When you're running on the server, it can sometimes force you to structure things in a certain way, and I'll get to that in a second with an example that may not provide the best user experience. But on the flip side, there is also a lot of flexibility that you get because you have usually a more powerful machine, you can architect things the way you want, so when you do your handling on the server, you tend to get a lot of flexibility from it. So here's an example of a feature that I've had to work with in web applications that I've worked on. I'm sure some of you have had to implement a similar feature. So let's say you've got a web application that needs to accept user-submitted geotifs. So users want to upload geotifs for some reason on this application. If you're handling those geotifs server side, the first thing is you need to validate them after uploading. So you have to upload in order for the geotifs to be processed or validated because they have to be on the server. Geotifs can often be quite large, so users can find themselves sitting for several minutes waiting for a large file to upload and they don't get any validation or feedback until that file has uploaded and you've had a chance to validate it. So that makes the user experience poorer. Similarly, it's hard to provide a preview if you're doing all of your processing on the server side because you have to wait again until the file has been entirely uploaded. And then once those files are uploaded, if they're large, you are likely going to need to set up some sort of asynchronous task handling in order to process those files because they may be too large to be processed within a request response cycle. So there are some significant limitations if you're working with geotifs or other large geospatial files on the server side. However, if you can do it on the client side, you can do certain types of validation very quickly in browser and you can provide immediate feedback because you've got the file available there instantly. You can do things like preview the footprint on a map immediately. You could potentially even reprocess the files in your browser and then upload the desired format if there's back-end processing that has to happen. You could have that happen on the front end and do the processing while it's uploading. So then by the time the data gets uploaded to the application back-end, it's actually already in the format that you need. So I'm going to provide a demo here. I hope that this may not have worked. Can we see this? Yes. No. Sorry, one second. Okay, great. So this is a little demo application that we put together to explain how this can work. And this is basically just going to show some information about a geotif that I upload. So I suspect you won't be able to see the file selection dialog, but I'm going to click Browse here. I'm going to select what is a Landsat geotif. It's a scene from Landsat. It is just a single band. It happens to be Band 8, and it's about 230 megabytes. So when I select this, there's not going to be any upload process. We're just going to see some information about this fairly large tif file, and you'll see that appear on the page. So I am going to click the button to submit it right now. And so you can see that that was a fairly instantaneous response, and we've been able to get out some useful information about this geotif that hasn't been uploaded anywhere. It's just sitting on my local computer. We've got the coordinate system. We've got the size. We've got the band count. 
We've got the footprint of the geotif here that is displayed on a map. So if you were a user uploading this to a web application somewhere and you accidentally clicked the wrong file, you would see that here, and you'd say, oh, this is the wrong file. It's in the wrong place. This isn't the one I meant to upload. So you could more quickly cancel that and restart it. If the band count was wrong, for example, you could see that, and you could deal with it. And we've also got the bounding box coordinates. So if you need to download those as GeoJSON or use them in some other way, we've got those available for you here. And so in addition to the fact that this is something that provides immediate feedback and it has a good user experience, it also is really easy to scale because it's an entirely static website. So this whole website is completely static. So if you all go to this website right now, I'm not going to be worried about scaling or the server getting overloaded or anything like that because it's just static files. So there's nothing really that needs to be scaled up or there's no resources that need to be changed if there's a lot of users all trying to use it at the same time. It'll just keep on running no matter what and you all will have a good experience even if there's lots and lots of people using it. Okay. So I am going to switch this back. Sorry one moment. Sorry one moment. Okay. All right. Well, technical difficulties. Sorry about that. Okay. Okay. I think that should be good. I'm just going to leave this in the regular mode. So what is Loam? Loam is a client-side GDAL wrapper. So you may have worked with other clients or other wrappers for GDAL. Loam is a client-side wrapper that's designed to run in a browser environment using GDAL that's been compiled to WebAssembly. So let's look at some codes. So to open a file, you know, with GDAL, you need to use GDAL Open to open it if you've ever used the GDAL API. And the same happens with Loam. You need to provide a file object like a user might select from their file system, or you can also provide it with a blob. So if you've got something that's getting downloaded as binary from somewhere, it may show up as a blob. You can also pass in a blob and you just do Loam.Open and then you give it the file. If you then want to do something with that file, you can use... You have a number of functions that you can call on your data set. So when you do Loam.Open, you get a promise that provides you with a data set. So that's this ds here. And then if you want to call width and height on that data set, you can call ds.width, ds.height, and then you can... Those both return promises. You can wait on both of those promises and then you can do something with that data. So everything in the API returns a promise. A little bit more of a complex example, GDAL Translate is available. So you can pass in all of the parameters that you would use on the command line for GDAL Translate. You can pass those into a function called inLom. So for example, we've got... It's called convert inLom and you can... This is an example of converting a GeoTIFF into a PNG file. It is using bands 1, 2, and 3 for RGB. It's setting the out format to PNG, resizing to 512, using nearest neighbor resampling and scaling the histogram. And one important thing to note about this is that altering data sets evaluate lazily. So if you do this, then the promise that's returned will actually evaluate immediately. But no processing is going to happen. 
So you can call convert again, you can warp, you can do other alterations on that data set that you've created, sort of like a VRT, if you're familiar with those. You can do things to it, but nothing is actually going to get processed until you try to access some data from that data set. So in this case, the example is getting the bytes of the data set in order to probably display it somewhere. Once you call bytes, then that whole lazily generated data set is evaluated at that point when you try to make the access. What else can you do with it? There's another quite a few other things that are supported right now. In theory, anything that could be supported by GDAL could be supported by Loam, but the wrappers have to be written, so not everything is there yet. But here are the things that are currently supported. There's support for using Proj4 to re-project arrays of points between projections defined by WKT. So that's one thing you can do. You've got access to GDAL translate, as I mentioned, so you can resize files, you can change the format, adjust the compression, adjust the band type, reorder the bands, and many other things. You've also got access to GDAL warp, so you can use that to warp an image between projections. You can also do some windowing to select subsets of a file. There's also GDAL dem available, so you can do hill shading, you can do color mapping, slope, and anything else that GDAL dem is capable of doing. And then there's also GDAL rasterize. So you can, if you have GeoJSON in your web map, in your mapping application somewhere, and you want to turn that into a raster image, you can use GDAL rasterize to create a rasterized version of that GeoJSON. So I hope at this point that you are thinking, wow, this sounds pretty amazing, but there have to be limitations for everything, so what are they? And there's a few. So the first limitation, I think the previous presenter touched on this a little bit, is that within a browser you have some constraints because you're running in a browser environment. So the first one that is notable here is memory constraints. If you're running on a large machine, you've got many gigabytes of memory available, you have an ability to write temporary data to the disk, so even if you run out of memory, you can process a large file. In a browser, the browser environment is very tightly constrained, so you can't dump large files to disk, you don't have really any swap space, or spare disk space available to access to the application, you just have to run within whatever the browser gives you. So if you're memory limited, that's pretty much all you've got. You've got the memory you can work with, and you can't get any more. So for very large files, you might have a difficult time working with large files in a browser, although GDAL is really quite good at running with limited memory, so you might be surprised at how far you can push it. There's also limits on HTTP requests, so I know a lot of people like to use VSI Curl within GDAL, and that's not currently available in a browser environment. I think there's a good path towards making that available, but it's not currently there yet. You can, however, make the requests yourself via JavaScript, and then pass the response from that request as a blob to GDAL, so that's still possible. A third limitation is bandwidth. So in order to make your website load quickly, you want assets to be as small as possible. GDAL is a fairly large library. 
The assets for GDAL are about 2.2 megabytes compressed, so there's a lot that needs to get downloaded. Luckily, you can download that lazily at some other point when the web page loads after the web page loads, so you don't need to load all of that on the initial page load. It's not going to slow down your initial page load all that much, but it does need to get loaded at some point, and if you've got a slow connection, that could be problematic. And then the fourth point is that WebAssembly and WebWorkers, which Loam also uses, are relatively on the newer side for technology, so people who are trying to use Loam and WebAssembly WebWorkers inside a standard web application that was built with React or Webpack or some other front-end framework or Bundler often struggle with getting the build tools to correctly deal with those assets and those files. And then, as I mentioned, not all of the GDAL methods are wrapped yet. We can't use auto-generated or existing JavaScript wrappers because it's executing in an entirely different execution environment because it's in a browser, so you can't just use the existing wrapper for Node.js that's in JavaScript. You can't just transfer that over to the browser and expect it to work. So if you're wondering how you can help, which I hope you are, there was a fellow just a few days ago who opened a PR to add functions for OGR, OGR. So if you want to add things to it, I am very welcoming of pull requests or issues if you need something, and that's basically it. If you want to learn how to install it, it's just npm install Loam. The repo is, the link is right there. And then GDAL.js is the WebAssembly port of GDAL or compilation of GDAL, and that's the link to that repository. And now I can open it up for questions. Thank you. Thank you for that amazing talk, Edek. And yeah, let's see the questions that I feel posted up. So let's start with this one. Is it possible to open other just partial file formats that GDAL supports? Yeah, it is. So the file formats that are currently supported are GeoTiff, JPEG, and PNG on the raster side, and then on the vector side, anything that's built in when you compile GDAL is supported, but not all of the wrappers are quite there yet. That will actually come when that pull request to add the OGR functions is finished. In theory, it's possible to support anything that GDAL supports, but in order to keep the bundle size low, you saw it was 2.2 megabytes compressed. If you add in other libraries, that's going to further increase the bundle size. So I've opted not to compile in support for any other file formats besides those ones that I just mentioned, but it would certainly be possible to compile your own version into WebAssembly using the repo for GDAL.js if you need to have, for example, NetCDF support. You could create your own compilation. You could include, compile in the library for NetCDF support. And then Lome doesn't really care about what GDAL can support, and so Lome would be able to use that just as if it was any other file, as long as the GDAL compilation, the compiled version of GDAL behind it, supported that. Alright, thanks. Next question. Is this one, Azavia is supporting the development of the period, currently? I'm sorry, is what supported? Azavia. Oh, Azavia, yeah. Sorry, no, it's okay. No one pronounces it correctly the first time. So I am working on this project. Azavia has a 10% time research program, so Azavia employees can use 10% of our time to work on research projects. 
So I've been using my 10% research time to work on the project. I've had some collaboration from other Azavea employees, and there have been some folks outside of the company who have worked on it. So it's being indirectly supported by Azavea through that program, but it's sort of my initiative. Awesome. Okay, and I think we have time for another question. How does its performance compare with geotiff.js? Yeah, that's a great question. I haven't directly compared it to geotiff.js, I haven't tried to set up a benchmark, but I can talk about some of the differences between the two libraries. geotiff.js is written in pure JavaScript, as far as I'm aware, and it just focuses on reading data from GeoTIFFs. So it's much, much smaller in terms of the dependencies that you're downloading. It's just a very small library that allows you to read GeoTIFFs, so you only have a few kilobytes of data that you're downloading, compared to GDAL, which is 2.2 megabytes compressed. So there's a big difference in terms of the size of the payload that you're downloading. That's the first point. The second point is that Loam includes all of GDAL. With geotiff.js, once you've got the data, it's sort of up to you what you want to do with it: if you want to render it in a certain way, convert it to another format, or do something else with it, you have to make that decision on your own and write that code on your own. And if you're using GDAL, you get access to a lot of the things that GDAL has built in already. So it's a much more full-featured library, but again with the drawback that you pay for those extra features through the larger size that you're downloading. And then the last point is that GDAL is compiled to WebAssembly. In theory, WebAssembly should be, in most cases, faster than a JavaScript library, but in some of the tests that I've done, I've seen that that's not always the case. So I think that the best thing to do if you wanted to compare the two would be to attempt to implement the specific computation that you're interested in and to compare them side by side. Okay, thank you very much. And I think that we have run out of time for questions, so we will have to wrap up right now. Thank you very much, Derek, for your talk and for coming to FOSS4G. And we will be seeing you today, right? Yes, thank you very much. See you soon. Goodbye.
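For reference, reading pixel values with geotiff.js looks roughly like the sketch below (standard geotiff.js calls; the URL and the pixel window are placeholders). Everything beyond reading the raw rasters — reprojection, format conversion, rendering — is then up to your own code, which is exactly the trade-off discussed in the answer above.

```js
import { fromUrl } from 'geotiff';

// Read a small window of raster values from a remote GeoTIFF.
async function readTopLeftWindow(url) {
  const tiff = await fromUrl(url);                                   // HTTP range requests
  const image = await tiff.getImage();                               // first image/IFD
  const rasters = await image.readRasters({ window: [0, 0, 256, 256] });
  return rasters;                                                    // one TypedArray per band
}
```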
GDAL provides extensive capabilities for processing GeoTIFFs and other spatial data formats. However, until recently, the use of GDAL in web applications was limited to server-side code. This talk will describe how we use WebAssembly and a new wrapper library we developed, called Loam, to make GDAL's suite of tools accessible from client-side code. This strategy enables improved user experiences and can lower infrastructure costs for web applications handling GeoTIFFs and other spatial data. This talk will cover: - Description of WebAssembly and how it enables GDAL to be run within a web browser. - Description of the Loam wrapper library. - Example integration of Loam and GDAL into a simple React application. - Examples / demos of other ways to use GDAL within client-side applications to improve user experience and reduce infrastructure costs.
10.5446/57208 (DOI)
Can you confirm, please? Okay, now I see myself. So, well, as I commented, I'm going to make a follow-up on how to integrate spatial data in business processes. First of all, I will go very fast to give you some background on the company I work for, not for commercial reasons, but to let you know that we are a company that has different businesses. One of them is GIS. And the other one is e-government services, electronic administration, the processing of files and so on. So, sorry, I think there are some problems with my mic. Let's see, let me check my volume. Better now? Is it better? Okay, I hope so. I'll try to speak louder in any case. So, the thing is that we have a strong background in GIS and in general-purpose applications for governments. And, well, we work worldwide, and we have realized that there is a desynchronization between the spatial and the non-spatial work. Okay, traditionally these have been very separate worlds. It's true that in the past five to ten years we have tried to integrate GIS everywhere, and users are more and more aware of the need to have the geographic information updated in their processes. But the real problem that we face everywhere is that there is still a big difference between both worlds. You know, you go to a project and you have, in a separate part, the alphanumeric, non-spatial data, and then the spatial data. And that's a really big problem for us, because the spatial data that you actually have are inventories, which are kind of photos at a certain time. You have a list of licenses or a list of places or a list of whatever at a given moment. But they do not remain alive, because these entities live in the non-spatial world and only come to the spatial one from time to time, yearly, mainly. So we started to think, some time ago, that you need to integrate both worlds. Well, nothing new to you, but we will tell you our experience here and where we are. So what we did was to create applications where the spatial data would be very, very easy to manage, so that all the users have the opportunity to create their own layers and keep them updated. We gave them tools to create layers, to update them, to give them styles, to build some tools specific to their needs. So these non-spatial users are able to manage their own spatial data. Of course everything, and this is also very linked with the other talk, only with OGC standards, following the standards, and with open source software. So we created some platforms that evolved over time and had a lot of tools and capabilities. We took a modular approach where they could easily select which basic layers to add to a map and also add the tools that they needed at a given moment. Even though it's quite simple to deploy, for us, let's say for GIS people, it's quite simple to deploy a PostGIS together with, let's say, a GeoServer or a WMS or a WFS or whatever server, and then a map viewer with tools and so on, for these average non-GIS users it is a huge learning curve. So, back in the years, we built applications to try to let them publish spatial data in three clicks, trying to avoid the spatial data being kind of photographs, as we commented before, so that the data could stay alive and be updated by the users themselves. But the thing is that, even though we tried to do so, there was still a learning curve for them.
To mention all that we provided to manage this spatial data, we made an API so the applications could integrate, we gave them lots of functionalities, I'll come to this later. We have a very, let's say, simple or a very classic flavor with all these technologies. We can go from, let's say, from business to technology as you prefer. So as you desire, so anytime you can write and we can answer. So we provided these technology stack and built all these functionalities. We made them with these three clicks approach to publish layers, even in WMS or using vector data, PDF, also editing data, downloading WFS, editing with WFST, metadata, mashups, you could embed your map wherever with the layers you wanted to. You can embed this, this is HTML with JavaScript at the end and CSS. So finally we also gave them mobile apps to field work so they could have a mobile application ready to go. We gave them the chance to publish in social networks and even more functionalities, styling, high resolution printing, search engines, vector tasks I mentioned before and more others. But still they were separate worlds. Still they needed to go to the spatial part, still they needed to get into a map, they needed to draw, they needed to edit, they needed to put the pieces together. It improved the situation because it spread the knowledge of the GIS but didn't solve the initial problem, the state of the problem at the beginning where these worlds were completely different. Still they had separate lives. Our main goal here and our difference from other approaches was to join them together. So coming back to the initial slide where we said we are a company focused in GIS but also in non-GIS products, we realized that we had the problem and the solution inside the house. So we started to work together, not in the clients but in our own product to offer a different value to the clients by the integration of the spatial data in all the procedures and the workflows that the rest of the applications were dealing with. I mean when I'm referring to non-GIS applications, I mean for example for local entities, licenses for a bar, a restaurant, or let's say a public event, whatever is a workflow, all that has a special address, has a street name, has very specific spatial data which lived in separate worlds. So for our workflows what we try now is to, with all the tools that we have built during time to manage the spatial data, integrate them in our own product. So the clients themselves wouldn't have the chance to avoid the GIS world. So we incorporated all the tools, all the layers in these tools specifically in one tool which is the creation of the forms. This is not GIS related but we integrated in a general purpose application where you build some formularies to the user so they're filling their data, their address, where they want to ask something. So we integrated a map, we integrated a straight map reverse geocoding, it may seem simple things but in the end, in the few years, well a couple of years ago, people were not filling an unnormalized address, they didn't have a reference street name. So they could type different streets, they could even fail in the spelling. So in the end, we realized that the GIS was needed and was needed specifically at that point, at that time, when you are creating the data, no matter spatial or non-spatial because it's all the same entity. 
So finally we gave them the possibility to locate the positions on a map and then we have noticed it's a quite recent project and this is why we are presenting it here. Finally we came with a product that has really liked to the customers because they feel that they have everything managed in a single point and the old information can be changed live. So each time they provide with a restaurant license, the layer of restaurants in the city changes as the license is approved or rejected or whatever. So this was the real game changer for us. We had lots of tools, lots of ways to make maps very, very, very, very easy but in the end we needed to integrate them and it was not an actual integration. So I think I'm okay with the time and six minutes left. I'm just going to show you a couple of live things and a small video to let you know. This is evolution so you are aware of the changes that we have made which I insist it may seem very simple but it has made really, really a difference. So in the past we gave the users the possibility to build a map. Well you upload a layer, you connect to a database, you add a layer here, you mark or mark the layer you want, you can put or remove the buttons you want, whatever. Really, really easy to the users but the thing is what I commented before, it didn't work. It didn't work as much as we wanted. It worked very, very well because it was a leap for the users but it didn't work in the way we wanted. So we expected to or we were looking for. So what we made then was to move it to another application to embed it as a part of a workflow. So in order to avoid running out of time, let me move here. This is what I wanted to show you. This is an application where you are building a form for a specific purpose, no matter what. Then you have a control that you say it's an evolution of the map. Then you have a control where you say, okay, I have the location of a restaurant precisely. So then you add it, you put the map that you have created here, you have several maps to choose. Okay, you put them up there. And then you add it here as a component. And you can also interact with the rest of the form by adding expressions on the JSON file or the info of a WFS request that you made. You can put a query to extract certain properties from a JSON that is returned. So and then you fill another part of the form with that information. So as you type in or you put a point on the map, non-spatial data, traditionally known as it is filled. So we can see another example. This is how we build the form. Let me show you how the form is then presented. You have the usual license request for restaurant. Then you have it there, your map as you configured before. Then you put the name, that you put certain aspects of the license. And finally, what you do is to locate and put the correct address. You look for your own address, which previously you had to, what you were doing was to type it free, which was amazing in the end. And then you have this information right from a street map, from an official street map, put on the map. And then you can verify, then you have a lot of info there. So the goal here, as I commented, it is a very, very simple use case. It's not rocket science. But believe me, that is an evolution for the way that the non-spatial applications behave with special data. And then finally, you can see how it is integrated, this special part in the application, which is called G1 spatial. And then you have all the layers. 
This was one of the ones that we have been editing with the forms updated on real time. So what you have finally is all the licenses that are being on a workflow correctly positioned on a map with the real data with no mistakes and a layer that is alive in real time and updated by completely non-GIS users. They do not know what is behind. They only type the street name or they paint on the screen. They put the point in the place and that's it. So well, this is it. I'm trying to go very fast, but I hope I explained the point. So this is how we have integrated the spatial data. No very fancy functionalities, but very simple, but very, very, very effective for the use cases that we have facing with our customers. So this is it on time, I think. Cannot hear you? Better to unmute. Thank you very much. Thank you Enrico. Thank you for your impressive talk, the amazing platform you presented to us. And I heard it's also raining in South Africa, but it stopped here, so you probably can better hear me now. There's a question that came up and the question is, when integrating GIS and non-GIS workloads, what role do unified data management platforms play? Have you considered using a unified database for GIS and non-GIS business data? Yes, we have tried several approaches there. We have a phase that, for example, you can do it, maybe the audience is pointing to, for example, it can be as simple as creating a layer in a table in a database with the spatial data and then making the non-spatial application to populate that table in the post. Yes, it can be as simple as filling all, filling fields in a database and forget about these tools. And I agree, depending on the case, it can be very simple. But the thing is that during time we have evolved these tools to provide the users real, well, powerful tools. And the role that this platform to manage all the data has played here is to also engage a little with the customer because we provide them very, very specific tools. Maybe this is very, very simple that once they have the platform for a simple use case, they see the potential of it and they start using it for a general purpose, even for the very, very specific GIS. So I would say that in some cases we have started with a button to top with very specific use cases, maybe with only with a table and then the platform and so on. In other cases, we have started directly with the platform because the users have that need and in the end they have integrated. So depending on the case, the platform may have a different role. But for us, we have always tried to put it there because in the end it is very, very useful for use. Okay. Thank you. I have another question. You're using a lot of open source stack in your things and you talked about that these are very simple functionality that you offer. But is there any drawback to the projects you have when using the things? How do you manage that? That would be really interesting, I think. Yes. I mean, indeed, well, I have not gone into the details, but if you see, for example, we have created a little small open source project here, which is my PIA5, which has its own, it has repository under this and you can put it there. When we find any issue or we have any need or when we fix any issue or anything that we are in contact with the respective communities, so we can have that feedback to them or even contribute with a commit, respecting all the needs and the timeline of the project. 
But yes, in one hand, we try to give it back by contributing to the project as we are able to find some enhancement, so we produce some enhancement. And in the other hand, we have created on top of this another project for whoever wants to use it. So one more thing, sorry. All we do in the end is also open source because all we do is for the government. I'm here in Spain, we have a very good policy, let's say, that you have to provide all the sources and then it becomes open source and it is offered to the community. So all that you have seen is in the respective client and part of the entity. Perfect. Perfect. I'm happy now with that. Thank you. Another question came up. The question is, when integrating a GIS workflow, do you find that it generally just needs to be simplified or do non-GIS users need completely different workflows? Again, it depends on the case. They are very, very complex. For example, for hydrology, they needed extreme processes to give some permissions to extract water from a place, calculations, so they needed a lot. So the logic was put in the GIS part and they needed very specific tools and probably they needed also to simplify their approach, but the real deal was in the GIS part. I have seen the other way around when the users have not very GIS enthusiasts, they just, they are fine. If they have an address and they have a point and they say, wow, that's what I wanted, but depending on where they are coming from and their business, you can face the both extremes. You need really, really tough tools to decide on your workflow. You need just to have a point in a place. We have faced everything. As I guess you have done in life. I'm laughing because I know the game and the other, the opposite is when people from non-GS came, come over and see and they told me, oh, you're a GS guy. It's always easy because you have these marvelous pictures on your application. It really helps. I don't have to say in English, but they say, oh, little, little maps, broachers, you do broachers, you do, yeah, touristic maps and things like that. Cool. Okay. So get your kudos in the chat. Thank you very much for your presentation and for the listeners, we have three minutes to switch over and then we proceed with the next talk.
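To make the pattern described in this talk a little more tangible, here is a rough sketch of the "map inside a form" idea: a click on the map queries an official street layer over WFS and copies the returned attributes into an ordinary form field. The endpoint, layer and attribute names are invented placeholders, and the spatial filter assumes a GeoServer-style WFS with CQL support; the real product builds this visually, so treat the code purely as an illustration.

```js
import Map from 'ol/Map';
import View from 'ol/View';
import TileLayer from 'ol/layer/Tile';
import OSM from 'ol/source/OSM';
import { toLonLat } from 'ol/proj';

// Hypothetical WFS endpoint and layer; replace with your own street-map service.
const WFS_URL = 'https://example.org/geoserver/wfs';
const STREETS_LAYER = 'city:streets';

const map = new Map({
  target: 'map',
  layers: [new TileLayer({ source: new OSM() })],
  view: new View({ center: [0, 0], zoom: 12 }),
});

// When the user clicks the map, query the official street layer via WFS
// (GeoJSON output) and copy the returned street name into the form field.
map.on('singleclick', async (evt) => {
  const [lon, lat] = toLonLat(evt.coordinate);
  const url =
    `${WFS_URL}?service=WFS&version=2.0.0&request=GetFeature` +
    `&typeNames=${STREETS_LAYER}&outputFormat=application/json&count=1` +
    `&cql_filter=DWITHIN(geom,POINT(${lon} ${lat}),50,meters)`;
  const json = await (await fetch(url)).json();
  const feature = json.features[0];
  if (feature) {
    // 'street_name' is a hypothetical attribute of the street layer.
    document.querySelector('#address').value = feature.properties.street_name;
  }
});
```

In the platform described in the talk this wiring is configured, not coded, but the underlying exchange is the same: an OGC request whose JSON response feeds the non-spatial side of the form.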
Spatial information always brings added value to workflow processes of all kinds. Traditionally, applications for managing general information do not incorporate management functionalities for the associated spatial information, which is treated independently and, thus, not synchronised. This leads to lack of coordination and can cause management and decision-making processes to be delayed or not have the spatial information updated in real time. This success case shows the development of a general interface for the integration of spatial information in the worflow of general purpose applications by establishing communication interfaces based on OGC protocols and Open Source tool capabilities, acording to the following workflow: Workflow process identification and sending of information in JSON format. Representation of the general purpose information using OGC protocols. Editing of the spatial and alphanumeric file information via OGC protocols. Consolidation of spatial information in the central processing repository. In this way, by means of Open Source technologies, instantaneous updating of the spatial information associated with procedures is carried out in real time through the use of OGC protocols and Open Source technologies. This success case proves how, through standard-based interfaces, the absolute integration of spatial data in a centralised repository is achieved and managed in the data production processes in an instantaneous way, resulting in a unified product that allows the processing and management of procedures with spatial information updated in real time. Technologies: PostGIS, GeoServer, OpenLayers, Mapea, OGC standards, GeoJSON, REST API
10.5446/57210 (DOI)
Her field of study is in remote sensing and GIS, and she has research interests in radar interferometry for land deformation detection and in optical images for air pollution determination. So there we go. Hello. Hello, Dr. Tri. Yeah. With that, I give you the floor. Okay, so now I share. Can you see? Not yet. Can you see? Not yet. Not yet? Sorry? Okay. There we go. Okay. I'll add you to the stream and I'll disappear quickly. Now, can you see my screen? Help. It's okay. Yep. All okay. Because the internet here may have problems and be a little slow, I'll probably just hang around here for you in case something goes wrong. Can you hear clearly? Yep, I can hear clearly and I can see your presentation. I hope everybody can hear my presentation clearly. Thank you. My presentation is about landslide monitoring with Sentinel-1 time series in one small district of Yen Bai province. First, a little bit about the contents: I will introduce our research methodology, the data, the results and discussion, and the conclusions. As you know, a landslide is a type of disaster that occurs frequently in mountainous areas, especially in the mountains of Vietnam, causing severe damage to human life, material facilities, and serious environmental impact. It is triggered by natural factors like rainstorms and soil weathering processes, and by human activities like road cutting, etc. Here you can see photos of some areas in Van Yen district in Yen Bai province in Vietnam. The landslides are not so large, but they still happen year after year, so it is very dangerous for the people who live around there. The method we selected for this research is the PSI method. This method is developed from the traditional method of determining ground deformation, DInSAR, the differential interferometry method, in which land deformation can be derived from the phase difference between two images acquired at different times over the same area of the surface. Assuming that we have two acquisition times for the satellite images, we create the phase difference. You can see the equation here: the difference between the two images at different times. It is composed of the displacement phase, plus the topographic phase, which relates to the elevation, plus the difference in atmospheric phase between the two acquisitions. The problem here is that we want to extract the displacement phase from this equation. We can remove the topographic phase, but the atmospheric term we cannot remove, because the atmosphere at different times is different. If it were the same, that term would be zero, which would be good, but in real conditions it is never the same. So there is a problem when we use only two images, and that is why we use PSI, which works on a time series of images. That means we have many images, and many images create many pairs of images. Each pair of images has a distance between the acquisitions, which is called the baseline. Each pair of images also contains many points; points with high, stable backscatter over time are called persistent scatterers. So each pair of images has many persistent scatterers, and we can use those persistent scatterers to detect the changes in elevation. The method keeps only the good-quality pixels; we do not use every pixel in the images, and that is why we chose this method.
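The phase-difference equation referred to on the slide can be written out as follows. This is a standard reconstruction of the DInSAR decomposition described in the talk, with subscripts chosen for readability rather than copied from the slide (a noise term is often added as well):

```latex
\Delta\varphi \;=\; \varphi_{\mathrm{displacement}} \;+\; \varphi_{\mathrm{topography}} \;+\; \Delta\varphi_{\mathrm{atmosphere}}
```

The topographic term can be removed with an external DEM, but the atmospheric difference cannot be estimated from a single image pair, which is exactly why the PSI approach works on a whole stack of acquisitions.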
So our study area, as you can see, is a district located near the north of Vietnam. You can see here that Vietnam has a border with China, and Yen Bai province is around here. In this area the elevation is from about 200 metres to 1,800 metres. You can see here a big river that runs from China into Vietnam; it originates in Yunnan, China, and about 70 kilometres of it run through this area, so the river also creates many problems for the area around it and causes many landslides there. For the data, we use Sentinel-1 data in descending orbit. As you know, Sentinel-1 data is free data from ESA, and in this case we use single look complex images, which is the type of image in which the phase and amplitude are separated, and we use a single polarization. In this table you can see the acquisitions: we use 28 images from January 2019 to March 2020, and the baselines here, meaning the distances from the master image to the slave images, are almost all short, because the PSI method requires it; the shortest baselines give the best quality in the processing, and that is why we have to select them this way. For the image processing we use two software packages: the first is SNAP and the second is StaMPS. SNAP is the common architecture for all Sentinel Toolboxes, jointly developed by Brockmann Consult, SkyWatch and C-S, now called the Sentinel Application Platform, and it can be downloaded for free from the ESA website. The other is StaMPS, the Stanford Method for Persistent Scatterers; it is also free software that can be downloaded, and it runs on MATLAB under a Linux OS. The processing can be divided into blocks. The first is to create the single pairs of images. With the Sentinel-1 single look complex images, you know, there are three sub-swaths, and we have to extract each sub-swath, because we cannot analyse all three sub-swaths together at the same time. So we have to select the sub-swath for the master image first and prepare the slave images, then split the slave images, and after pairing, each pair of images can be exported to StaMPS and run just by following the steps shown here. You can see that with SNAP we can create a graph, and the graph can run automatically to create the interferogram for each pair of images, which is shown like this one on the screen. The other tool is StaMPS; you can see its interface here, running on MATLAB in a Linux OS. Here is the plot of the positions and connections of the Sentinel-1 images; we select one image from 2020 as the master and the others as the slave images, like this. So here is the result of the landslide detection. After processing the 28 images, you can see that landslides happen in some areas like this one; the worst average velocity is about minus 16 millimetres per year in and around this area and in this area. You can see that across the images there are many, many points, but not all of them are landslides, so we have to compare them with the landslide inventory map. But the landslide inventory map, which you can see here on the right, was updated in 2014, which is a little bit old, so the comparison is actually not so good, but we have only this one map to compare with. And also, because of COVID-19, we did not go to the
field check this time, so we had to do some other things to validate the results. Firstly, we marked the slopes greater than 70 degrees and smaller than 20 degrees; that means if the slope is greater than 70 degrees there are fewer landslides, and the same below 20 degrees, so we make that mask to highlight the areas that have the potential for landslides. Here you can see this area in the north, at the top, and then here also you can see this area and this area. But actually we still do not know exactly how this compares with the real landslides. So we did more with the data: as you know, and as you heard in some research before about Google Earth Engine, we also tried some amplitude change detection on Google Earth Engine for the same period. But this uses only amplitude; you know, on Google Earth Engine only the amplitude images are available, so that is why we collect only amplitude images for this. With this approach we collect pre- and post-event images, meaning we take data from about May 2019 to March 2021, but we have to separate the pre and post periods to detect the difference between the two times for checking. Why did we do this? Because of COVID-19 we could not go out for field checking of the landslide detection in this area, so we have to check and see the changes of the surface, and we hope to compare the changes of the surface with the landslides detected using the single look complex Sentinel-1 images. So this is the flowchart of the Sentinel-1 radar change detection; it is a very simple thing, before-and-after Sentinel-1 images and the difference between them, and this is referenced from two papers. Using the time before and after one event, which here is the rainy season, we select the images before and after the rainy season, input the images into the code, create the ratio of the pre and post amplitude stacks, and make a mask based on slope using the SRTM DEM; in this case we also mask slopes greater than 70 degrees and less than 20 degrees, and we compare with the landslide inventory. After computing this, you can see here some red dots, compared with the violet triangles, which come from the landslide inventory map from 2014. Some of them coincide with the landslide inventory map, but there are still some areas that do not coincide with it. We also compare with the landslides computed from the single look complex Sentinel-1 images and create the chart. You can see here, in the north, there are some points, and here we select some areas that also have field work from 2014, take only one point in these areas, and create the time series. You can see here that the time series runs from January 2019 to March 2021, and the changes of the land, you can see, vary from maybe 30 millimetres to minus 40 millimetres, but the average is maybe about minus 15 millimetres per year. Also in another place there are many landslides happening, as you can see here, but the average velocity of the landslides in these areas is about less than minus 10 millimetres per year, because in this area you can see many points coincide with the
landslide inventory map, and we also take one point here to get the time series; you can see here that the changes vary a little bit in this area. And for the third area, you can see that it has some places around here where there are landslides, with an average of about less than minus 10 millimetres per year. After all this, we reach our conclusions. After using 28 Sentinel-1 satellite images over Van Yen district in Yen Bai province, we think that with PSInSAR, using the SNAP 8.0 and StaMPS 4.1 software, the landslide identification results indicate that the use of multi-temporal radar satellite images with PSInSAR helps to better understand the progress of landslides. In addition, based on the time series of Sentinel-1 data, the landslide velocity can be calculated at a given time, even for very small deformations in the middle of the year, and the landslide points are concentrated along traffic routes, streams or rivers. According to the data collected from the Vietnam Institute of Geosciences and Mineral Resources, some of the landslides from Sentinel-1 coincided with the survey points from 2014, but there were also many points that could not be checked. The surface amplitude change from Google Earth Engine can be used to detect the change over the last years; the main goal was to detect surface changes that coincided with the landslides mapped from PSInSAR. But there were also many places where the amplitude change is not related to landslides; this may be caused by seasonal change, like cropland or something like this. So in the future we think we have to go to the field for validation and also improve the code for Google Earth Engine to make the results more accurate. Thank you for your attention, and here is my email address; if you want to ask me for more details or to connect, please send me an email. Thank you so much, Professor Tran. Let me just remove that before we get all confused. Professor Tran is also joined by a colleague who is a co-author on the paper, Dr.
Trong, I hope I'm pronouncing that correctly. While we wait for the live stream to catch up with questions, maybe I'll just ask one. So you were using the PSInSAR method, and I think you also talked a little bit about differential SAR interferometry, the DInSAR method, which you said, while it may be useful, does have some limitations in its use. I was wondering if you encountered any limitations with using the PSInSAR method for your study. Sorry? I was wondering, since you used PSInSAR as your method, whether there are any limitations that you might have encountered during your study while implementing it. There are limitations, for example in my study area. I think the loss of coherence is one big problem for PSInSAR, because our area has a lot of forest, and the C-band of Sentinel-1 cannot penetrate through the forest, so in those areas it loses coherence, and the loss of coherence means we cannot create the PS points there. So maybe under the forest there are landslides, but we cannot detect them. You also mentioned quite a few times that you weren't able to carry out some validation exercises because of COVID. I was wondering if you have any future plans to carry out these exercises for your study. I mean, maybe at the end of this year we have to go to the field to check for validation, because for all the results we still don't know exactly, and we cannot conclude that in this area the landslides happened exactly there, also because we only compare with the landslide inventory map, which is too old and needs to be updated. That is why we tried to do the change detection using amplitude from Google Earth Engine and compare, but there were still not many results that coincided between the two, and there is still a problem, because, you know, with the amplitude data it will detect many changes, even cropland changes and many other things will show up. So that is why there are still problems with the results of our research. That's completely understandable, nobody can avoid COVID at this point in time. I don't see any questions at the moment, maybe some comments with regards to the slave and master terminology, which I completely agree should hopefully change, and someone has just commented that the new terms are reference and secondary instead of slave and master, so that's something I learnt today. But thank you so much for taking out your time to present your great work in Vietnam. I think if anybody else has any questions they'll probably contact you via email or maybe find you in another room. And we'll move on to our next presentation. Thank you so much, Dr. Trong and Professor Trong, for joining us. Thank you.
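As a rough illustration of the amplitude-change check discussed in the talk and the Q&A, the Google Earth Engine sketch below builds pre- and post-rainy-season Sentinel-1 stacks, takes their difference (a ratio, since the data are in dB), and keeps only slopes between 20 and 70 degrees using the SRTM DEM. The area of interest, the date ranges and the -3 dB threshold are placeholders, not the study's actual parameters.

```js
// Google Earth Engine Code Editor (JavaScript) sketch of the amplitude-change idea.
var aoi = ee.Geometry.Rectangle([104.0, 21.8, 104.6, 22.3]); // placeholder AOI

function s1vv(start, end) {
  return ee.ImageCollection('COPERNICUS/S1_GRD')
    .filterBounds(aoi)
    .filterDate(start, end)
    .filter(ee.Filter.eq('instrumentMode', 'IW'))
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
    .select('VV')
    .median(); // backscatter in dB
}

var pre  = s1vv('2019-05-01', '2019-09-30'); // before the rainy-season events
var post = s1vv('2020-05-01', '2020-09-30'); // after

// In dB, a "ratio" of amplitude stacks becomes a simple difference.
var change = post.subtract(pre);

// Keep only slopes between 20 and 70 degrees, as in the study's mask.
var slope = ee.Terrain.slope(ee.Image('USGS/SRTMGL1_003'));
var slopeMask = slope.gte(20).and(slope.lte(70));

Map.centerObject(aoi, 10);
Map.addLayer(change.updateMask(slopeMask).lt(-3).selfMask(),
             {palette: ['red']}, 'possible surface change');
```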
Geological disasters like landslides have been causing huge losses for people and property in many countries, especially the ones located in mountainous areas. These disasters are very hot issue that is being paid special attention by managers and researchers from many countries around the world. Vietnam is one of the countries in the region that is frequently affected by landslides due to tropical monsoon climate and three-fourths of Vietnam's land area is mountainous. In the context of global climate change which is happening quite acute, landslides are becoming more dangerous, more severe. According to recent researches almost every year in Vietnam during the rainy season landslides are occurring, causing great damage to people and properties. Scientists around the world have studied the problem of landslide and published many valuable papers on this field. In more recent, many works have focused on remote sensing data and techniques to identify landslide regions, tectonic destruction zones, etc. Remote sensing technology has now become a useful tool in identifying landslides because it provides an integrated view that can be repeated over time. Nowadays, those methodologies are becoming more accessible through many freely distributed datasets and free and open-source software packages. In particular for landslide studies, the SAR satellite interferometric measurement is a method of evaluating changes on the Earth's surface that has been in use for over 20 years and can achieve very good outputs. Differential SAR Interferometry (DInSAR) is a method where are used two or more images at two different times for the same location before and after a topographic change occurs, for example, to detect land deformations. However, this method has many limitations that do not eliminate some of the effects: such as the influence of the atmosphere and some scattering characteristics of objects on the surface. The PSInSAR method, is based on the use of a series of multi-temporal SAR images of the same location to extract a number of permanent scattering points which are used for detecting terrain deformation.
10.5446/57217 (DOI)
Okay, so it is time to introduce our next presenter. It's a pleasure to introduce Petr Přidal, I hope I'm saying the name correctly. He is the founder and CEO of MapTiler. And tonight, or at least for me in Bucharest it is tonight, he's going to talk about a community project, MapLibre, the Mapbox GL fork. So Petr, you have the floor. Thank you very much, Katrina. Can you hear me well? Everything runs? Yes, I can hear you well. If you have some slides to share, I can also help with that. So my name is Petr Přidal, I am the CEO and founder of MapTiler, and I'm here to speak about a community project called MapLibre, which is in fact a fork of Mapbox GL. I'm here to speak also for the people who contribute to the project and who are the founding members of the project, which you see here on the slide. We have been lucky with the project: there are multiple contributors and great heroes in the open source community who really donate their time and effort to bring the project forward. So it's not a one-man-show project, it's really a project alive. And I'm here just to speak about what the project is and how it moves on. So what is MapLibre? MapLibre is a mapping library for web and mobile devices, quite similar to Leaflet and OpenLayers, which you hear about at FOSS4G on multiple occasions and in many presentations. On MapLibre there are also other presentations here; there were workshops and other materials for people to learn. It's a community fork of the recently closed-source Mapbox rendering stack, with a 100% open source implementation of vector tile rendering, and it remains open source with open source governance. It's quite a popular project despite being alive only since December. If you want to know more, the easiest is to go to the project website at maplibre.org, where you can find links to all the documentation and also a demonstration of the map. The library is unique in its ability to use WebGL and OpenGL for rendering the maps, so you see these flying effects, but you can still interact with the map, and that's the biggest difference to OpenLayers or Leaflet, which are the other alternative open source libraries. Under About and Projects, you find the links to the documentation and especially to GitHub and the community. What is it? It is an open source project; we are, in fact, applying for OSGeo membership. And it's quite related to OpenStreetMap. So if you use the Google Maps API or another provider, this is an open source alternative to Google: if you combine the open data with MapLibre and its visualizations, you get something similar. It's all about vector tiles and displaying vector tiles in a browser and on mobile devices. It's often used with the OpenMapTiles project, which provides free vector tiles for OpenStreetMap, and now it's also used by commercial mapping providers. The story of MapLibre: in December last year, Mapbox announced that their great Mapbox GL JS library had become closed source. They switched the license from the BSD license to the Mapbox terms, and whoever loads the library in a browser through JavaScript needs to pay Mapbox for initializing the library, which is quite a unique license. This also means, unfortunately, that independently of whether you use any Mapbox services with the library or not, you had to pay for initializing the code. And that's where quite a big wave appeared in the community, because the previously available BSD license provided much more freedom and the ability to load the maps from your own server.
And that's why in fact, Maplibre was born. So within a couple of hours, there was a first meeting of people who created forks on GitHub and who met on Hacker News and discussed what needs to be done on the project to have a viable alternative to the closed source code. And these forks are then merged into one fork and we try to duplicate all the efforts and really focus on one fork, which is going to be the one taken on by community. Essential from the beginning on was really defining, this doesn't happen again. So there is no control of a single company over the code base and that there is open source governments defined in the community. And we also discussed quite a lot how to motivate people to contribute. So really on December 9, we all wrote Memorandum, the four people who created the forks on GitHub and defined the rules, published it on Twitter, and it has had quite a good response on the social media. But this was the beginning. In fact, the name Maplibre comes from a map library reborn or Maplibre as a freedom in the map. Libre the freedom part. And Yuri is the one who contributed with the name. The Maplibre has two parts. One is a JavaScript rendering inside of a web browser, which gives you the two and a half D rendering of maps where you can tilt and rotate. And it uses hardware acceleration on GPU. It's accurate and interactive way how to display vector tiles, especially, but also raster tiles in a browser with no tracking at all and ability to bring in plugins and additional functionality. And it helps you to create your own application with JavaScript with functions and modern interaction with the maps. It's pretty easy to start. In fact, all what you need is 20 lines of HTML and you have a map which is zoomable. So anybody can really start and start to use Maplibre instead of leaflet or other libraries. Not a big deal. It's all based around the style JSON and the GLJS is loading the vector tiles in the MBT format and the styles are in the GLJS specification. There is a great documentation with APIs and examples launched. So if you are on the website of Maplibre, the GLJS set of reference documentation with examples taken over from the latest version of Mapbox and Rebritten to load the Maplibre library. And that's the great way how to start, learn and try different functions of the software. Recently we've launched also YouTube tutorials for beginners. So if you are really new into mapping and are just starting with polygons and lines and markers, you can just Google Maplibre tutorials and you will find these online and it will guide you through. The other part of the library, so next to the JavaScript, there is also native implementation which in fact shows the same styles, the maps look exactly the same. But instead of implementation of the rendering in JavaScript, this is done in C++ and OpenGL ES with shaders which are shared between the code. And this gives you ability to create native application on Android and iOS and other Android-based devices including UT, embedded source code and Windows and other compiled implementation of the native code. It provides the same rendering capabilities, the same functionality and it's practically an alternative to Google Maps SDK on Android or Apple MapKit which you may use native applications but fully open source with BSD license and ability to adjust everything what you need to adjust in the code. 
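Going back to the GL JS side for a moment, the "20 lines of HTML" mentioned above look roughly like this; the pinned version number and the MapLibre demo style URL are just stand-ins for whatever release and tile provider you actually use.

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <script src="https://unpkg.com/maplibre-gl@1.15.2/dist/maplibre-gl.js"></script>
  <link href="https://unpkg.com/maplibre-gl@1.15.2/dist/maplibre-gl.css" rel="stylesheet" />
  <style> #map { position: absolute; top: 0; bottom: 0; width: 100%; } </style>
</head>
<body>
  <div id="map"></div>
  <script>
    // A vector-tile style; the MapLibre demo style is used here as a stand-in.
    var map = new maplibregl.Map({
      container: 'map',
      style: 'https://demotiles.maplibre.org/style.json',
      center: [16.62, 49.19],
      zoom: 4
    });
    map.addControl(new maplibregl.NavigationControl());
  </script>
</body>
</html>
```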
In fact, Mapbox decided to close down this native part of the previously open source code base back in April, and therefore MapTiler, the company where I work, started to work on iOS and Android SDKs based on the latest fork, and then they were released and merged into MapLibre. So we contributed the code back to the MapLibre organization, and now it is maintained and improved further under the open source community governance by more people from multiple companies. The Android version has a wrapper with Kotlin and Java, the iOS one with Swift and Objective-C. There are also ports for macOS and Qt, and you can contribute other bindings and define your own usage on top of the native code. Everything is under the BSD license as previously. There is no telemetry and no tracking of the users, complete privacy and true open source, and thanks to the Amazon developers there is now also an implementation of Apple Metal, so nowadays it's quite safe also for future versions of iOS devices. Writing your own application is also relatively easy: in about 30 lines on Android and 20 lines on iOS you have a very basic application, and we will have a presentation about the mobile applications just after this block, so if you stay on this track you will hear more. There is documentation for iOS and Android from the time when MapTiler launched these SDKs, so you can go to maptiler.com/docs, see the tutorials and try to develop your own app on your own. What is the status of the project and what is coming next on the roadmap? We really had a lot to plan. The initial part of the project was all about setting up the community governance, launching the GitHub repository and deciding who is on the steering committee, how to run the open source project in a proper way, so that there is a correct delegation of control and voting and everything is set up the way it should be. Then we were very keen on having a stable release of the JavaScript library and also a release of iOS and Android. This has been successful thanks to the community contributions. There are already examples and reference documentation on the website, which have been ported to maplibre.org from the original latest version of Mapbox. We launched the project website and YouTube tutorials. Now in progress there is the Metal rendering on iOS, which is under development especially with the contribution from the Amazon team. Ongoing is a huge rewrite of the JavaScript library in TypeScript, which will be published with the upcoming release 2.0, and 3D terrain visualization is also on the plan for 2.0; if everything goes well it's going to be merged and become part of the 2.0 release. For the future there are a couple of ideas, but it's already driven by the people who are keen to contribute and by the different teams and individuals who work on the features. If you have your own idea, feel free to join; on maplibre.org or GitHub you will find links to Slack, where the community is talking very actively, and you can discuss anything, any idea you have, and feel free to create an issue on GitHub and discuss it. There's also a discussion forum on GitHub, so you can talk, propose anything you would like to implement, and if there is good feedback from the community it can be easily accepted as a pull request into the codebase.
One of the cool things we were talking about is the ability, potentially, to show the world with a switcher, to show the globe when you zoom out instead of a Mercator projection, and quite a big topic has also been the potential ability to add support for custom, non-Mercator coordinate systems, because that's essential for national government bodies, cadastral maps and other people who need local coordinate systems. Those are things, together with closer bindings to OpenLayers and Leaflet, which are really being discussed and proposed. Because MapLibre has become the reference implementation of vector tiles, we were also talking about different approaches for how to move on with the JSON specification of the styles and how to propose adjustments in a way that the open source community accepts. So that's about the roadmap and what has been done. If you have ideas, as I said, you are very welcome to join, propose through GitHub and discuss on Slack anything related to the project. Who is in control? Currently there is a technical steering committee, which anybody who is actively contributing to the project can join. We meet once per month on video calls and discuss the direction of the project and potential proposals which need discussion; the governance of the project practically happens in these meetings. Currently there are also four board members, who were the original four people who formed the project in the beginning, and there is going to be a democratic vote for the new four members of the steering board, proposed to happen at the latest by the next FOSS4G. The project has not grown organically, it's made out of the latest Mapbox open source version, but it has already attracted quite a viable community and people who are contributing, practically using it and really discussing. What we have learned from launching MapLibre is that supporting the community is hard work. You need to put a lot of effort into having the project properly communicated, having the logos and also channels for people to discuss, and into making contributors feel that their contribution is welcome and that there are no blockers, and there are really no blockers on this project. An open source project needs time and patience, that's another big part of the lessons learned here, and if there is no community there is really no project, because it needs multiple people who benefit, who have their needs and who want to push the project forward, and I'm really glad that these people found themselves in the project, meet and regularly talk, and move on working on the next version. Another thing we have learned is that better than asking for money is asking for active developers: if we are talking to a bigger corporation, it's easier for them and also more essential for the open source project to have people actively working, and this has happened. So there are people contributing from multiple companies to the project and synchronizing the effort on their side.
In the beginning we were also looking to foundations for legal protection, to avoid any issues, but we have learned that foundations and organizations usually don't provide legal protection proactively; they act in the moment of a threat or a problem. Therefore the more important part is being sure that the project doesn't accept any legally problematic parts in the contributions from the community. That's why one of the next big things is also scanning the code for license compatibility as part of the continuous integration process: for pull requests we are carefully reviewing now, and in the future, also through automated checks, we will be sure that no code has been copied and pasted from another code base. It would be great if you joined the project; it's pretty easy if you want to migrate from Mapbox to MapLibre. There are multiple tutorials and also bindings to React JS, Vue and others, and if you are using npm and dependencies in your code, then the switch is relatively easy: version one is compatible, in fact we now have 1.15, so you can load the latest MapLibre GL with npm as a dependency and just switch the existing code to MapLibre, and more documentation for the React and other bindings is available on GitHub and on maplibre.org. Feel free to dig deeper into the project: after you start to use it, if you discover a bug or anything that needs improvement, please report it as an issue; there are people actively talking and communicating around the issues on GitHub. And if you have a bit of time, writing documentation as you learn is the best way to contribute to any open source project, not only to MapLibre, and it would be very welcome on this project. If you don't have time, or if you are a company actively using MapLibre in your commercial product, it would be really lovely to either donate people who contribute to the project or provide a financial donation, so that we may potentially have some budget in the future to support grants for the development of features and also for the maintenance of the project. For the summary, we have gone through the introduction of MapLibre, what the story was, the two different versions, the JavaScript and native implementations, the status of the project, the roadmap and what is coming up, the release 2.0 of the JavaScript library and the new Metal rendering on iOS, and what we have learned from the community and how to join the project. So that's all for this presentation. We'd be very glad to answer any questions you have. Thank you very much. Thank you, Petr, and we have three questions and about two minutes left, so I will start: will MapLibre aim to maintain feature and API parity with Mapbox GL JS, or will it diverge and become its own thing? In version one we are fully compatible; in version two we may start to introduce new adjustments, and the two projects will diverge, so it's not necessarily the plan to implement everything that is in Mapbox 2.0 and further. The project has its own path and a different feature set and different APIs, but there is a need to remain as close as possible in compatibility for people to easily migrate. Thank you. Another question: are there plans to support a React Native or Flutter library for mobile apps? I believe Flutter is already supported, and I don't know about React Native offhand.
It's all community projects so whoever needs the wrapper in React native the native like maplibre native can be wrapped in React native so I perhaps it's already down on GitHub just check on GitHub and it can be for sure migrated from maplibre latest open source version to from mapbox latest open source version to maplibre. Thank you. The answer is perhaps yes. So one more question how does maplibre compare versus say open layers leaflet? I mentioned that open layers and leaflet doesn't have this tilt and they are not hardware accelerated by default. Maplibre is heavily based on WebGL and vector tiles and font rendering on the client side so all these things differ in the in the performance and ability to display a large number of data. Now leaflet is really about raster tiles mostly and basic API. Maplibre is more about the visualizations open layers is excellent if you have advanced features requirements such as custom coordinate systems or ability to load various formats so the three really each has their own use case. Okay so thank you very much and a last requirement do you have links to slides? Sure we can we can post them where I will post them mapteller.link slash maplibre slides. If you write to me in the private chat I can just switch it and copy paste it to the to the venue list so. We are going to publish the link soon but in the link it doesn't it doesn't exist yet but I will put it there just after this session. Okay I see no more question. Perfect I am copy pasting it into the venue list and there you go. Okay so thank you very much for your presentation your time and for answering the questions. I think we need to get ready for the next mapteller presentation. Is your colleague Peter Pocorny going to be present or is it you? It's him? No no Peter Pocorny is going to be coming up here. Okay thanks.
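Tying together the migration question from the Q&A: for an npm-based application, the switch discussed above is mostly a dependency and import swap. The style URL below is the MapLibre demo style used as a placeholder, and the bundler-alias trick is one common approach rather than an official requirement.

```js
// package.json: replace "mapbox-gl" with "maplibre-gl" (npm install maplibre-gl)

// Before:
// import mapboxgl from 'mapbox-gl';
// import 'mapbox-gl/dist/mapbox-gl.css';

// After (MapLibre GL v1.x keeps the same API surface):
import maplibregl from 'maplibre-gl';
import 'maplibre-gl/dist/maplibre-gl.css';

const map = new maplibregl.Map({
  container: 'map',
  style: 'https://demotiles.maplibre.org/style.json', // or your own style URL
  center: [26.1, 44.43],
  zoom: 10,
});

// If a third-party wrapper still imports 'mapbox-gl' internally, a bundler
// alias such as { 'mapbox-gl': 'maplibre-gl' } is a common workaround.
```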
The status, recent development and roadmap of the open-source community driven project for hardware accelerated rendering of maps powered by vector tiles in a web browser (GL JS) and with native code (Android, iOS, etc). Learn how to migrate with practical source code samples. After Mapbox announced the closure of Mapbox GL JS, their JavaScript library for displaying maps using WebGL, the community around Hacker News gathered on Slack and GitHub and made a collective decision to maintain and further develop the last open-source version of the software and build a 100% free alternative of the project. This is how the MapLibre was born. As a group of individuals, we coordinate the effort and synchronize contributions from multiple teams (MapTiler, Amazon, Facebook, Elastic, Stadia, Microsoft, Jawg, GraphHopper, Toursprung, etc) - working on JavaScript and Native code implementation of the renderers and related ecosystem. Multiple releases have been published, the project has CI checks for contribution, regular steering committee meetings, updated support for TypeScript, several bindings such as ReactJS, the Metal rendering on iOS is implemented (as Apple decided to deprecate OpenGL ES), and many issues and bugs has been fixed. There is plenty of ideas what to do next - from implementation of 3D terrain rendering, to support of non-Mercator map projections, or tighter integration with Leaflet, and much more. Let's explore the current status of the project, learn how to use MapLibre in your own software with practical code samples, and how to join and contribute to the collaborative development and participate on a shared roadmap.
10.5446/57218 (DOI)
Hello everyone, I am Gérald Fenoy. I created the GeoLabs company 14 years ago, in 2007. We consider ourselves experts in open source solutions, development and support, and I am very glad to be here today, online, for this FOSS4G 2021 to present to you the MapMint service-oriented GIS platform. The main goal of this GIS platform is to simplify your day-to-day work of publishing maps online. For that purpose, MapMint provides tools that let you concentrate on your work and not on coding anymore. To do so, we rely on the existing Open Geospatial Consortium (OGC) web services. The most important one in the MapMint platform is the Web Processing Service, WPS. Indeed, for us everything can be seen as a process, meaning for instance that even the dynamic HTML content results from a WPS execution. When I say everything is a process, I mean almost everything is a process. We use the dedicated web services listed here to spread the data, namely the Web Feature Service (WFS), the Web Map Service (WMS), the Web Coverage Service (WCS) and the Web Map Tile Service (WMTS). So how do we offer such a set of services? Well, by combining the following open source software. The ZOO-Project is used as the WPS server or, as we will see at the end of the presentation, as an OGC API Processes server implementation. Then we use the MapServer and MapCache software to make your data available through WMS, WFS, WCS and WMTS. The OpenLayers JavaScript library is used to display and interact with online maps, and we also use LibreOffice to produce various kinds of documents. Actually, there is much more software included in the platform, but let us continue and we will name them in a few slides. In MapMint, as in other SDIs, you have two different kinds of user interfaces. The first one, presented on the left-hand side of the slide, is for the administrators of the application: people who have the right to configure, publish and modify the available applications. Then there is, presented on the right-hand side, the public user interface that lets the end users and authorized ones interact with the published applications. In the next slides, we will briefly present the four basic administration modules: the dashboard, the distiller, the manager and the publisher. They are basically the ones required to simply publish a map application. Then we will introduce the advanced modules, like the georeferencer, the tables module, the importer module and the indicators module. These are the administration modules you use for doing more than only presenting maps online. So let's start with the first administration module, the dashboard. First of all, it provides an overview of your setup. It lets you manage your symbols and your favorite spatial reference systems (SRS). It lets you see and edit global settings and also manage your users and organize them as groups. The second administration module is the distiller. It lets you manage, convert and process your data remotely. You can access a number of GDAL tools available as web services from here. But there is also something new: we now offer access to the Orfeo ToolBox applications directly from the distiller, meaning that you can run any of the available Orfeo ToolBox applications for processing remote sensing images remotely.
As we did for the Orfeo ToolBox applications, we did the same for SAGA GIS, which means that we also offer access to the SAGA GIS applications from the distiller, giving you access to the various kinds of processing offered by SAGA GIS. Now, let's go to the next basic module, the manager. It lets you organize and manage your layers as maps. You can create, save and manage maps in such a way that your layers will look great in the final application. You have different options to define the way your layers will be displayed in your final application. There is a specific one, which we call the timeline. It basically means that you can produce multiple classifications for a single layer. This layer will then be shown as having multiple steps, or multiple classifications. Here we present the form used to define such a style, and here we see it in action: more than 70 layers produced as one and used from the Kelfoncier application. Obviously, we do the same for raster layers, even if the handling and the configuration are a bit different. Here is Landsat 8 in 2017, and the same location shown for 2016. The final basic administration module is the publisher. It lets you configure and publish your final application. Here you can define your application metadata and the base layers you are willing to use. You can list the layers that will be activated per default. You can set the default extent and also the activated tools, which usually correspond to a WPS service or a MapMint module. Here are a few examples of the resulting published applications, including, at the bottom right, the CesiumJS JavaScript library that has been used as a 3D client to display the layers from the MapMint published project in combination with other datasets, such as 3D Tiles. To be complete about the capabilities offered by the publisher: you can restrict access to the web services associated with your published project by setting up the security proxy software, which ensures that only authorized IP addresses or users can access the data. This has been tested with various client applications, such as the open source software QGIS. It has also been tested with proprietary software, which was the first target of this development. Still from the publisher, you also have the capability to publish web applications, final applications that will be able to display the Planet-based layers that you have access to. Here is the configuration user interface from the administration side, and here is a screenshot of how it should look from the final application. As I told you earlier, the publisher lets you decide which tools will be available within the client user interface. With MapMint, you now have the capability to integrate your own tool by creating what we call a MapMint module. It is a combination of JavaScript and a WPS service implementation. For instance, in this screenshot, we have an integrated tool making the end user able to interact with the R services implemented for the client, from the published application. Let's have a look at the more advanced administration modules we were referring to in the introduction, starting with the georeferencer. It's a basic user interface that lets you georeference your raster data online; it is accessible from the distiller in case your raster data require georeferencing. Another advanced module is the indicator one, which lets you cross data from different sources.
One should be geographic data, and the other one can be of any kind, such as an Excel file for instance, as long as it is supported by a dedicated library. From this new data, you can then configure a view, a graph and a report associated with it. In the screenshot, you can see how it may look on the client side, with the report, the graph and the table shown on top of the map. Now, an important advanced administration module is the tables one. It lets you configure and grant some users access to data stored in a dedicated PostgreSQL database. The data can be geographic or non-geographic. For a given data table, you will define a view, meaning the way the table will be displayed, which columns should be shown, and so on. Then you will define an edition workflow, which may have multiple steps, and in which you will define who will have the right to write which field at a given step. Then you can associate a report. This report can be associated with the whole table or with a single entity from the dataset. This module also tracks the history of the modifications to an edited dataset, but that is outside the scope of this presentation. With the tables administration module comes another important tool, the MapMint4ME Android application. The name stands for MapMint for Measure and Evaluation. MapMint4ME can be used to record data in the field, with or without internet connectivity. As you can see on the slide, I want to thank the five mentors and the five students for their work on it during this year's Google Summer of Code period, and also Google for its ongoing support of open source software, and especially of the MapMint4ME one. This is an old screenshot, but it still illustrates the fact that from the tables client user interface you can not only create new geometries or use existing ones, but you can also create new ones by combining and processing them with the exposed WPS services. You know, sometimes users need to integrate data that are stored in a non-trivial format, such as the one presented in this screenshot. The importer administration module is for the integration of such kinds of exotic data. When it comes to integration with other web applications, it is commonly required to expose an open API. As the MapMint platform relies on the ZOO-Project for every processing service, these services are also available through OGC API Processes, and you can see here a traditional Swagger UI, or at least a small part of it. This last slide is to inform you that you can now set up MapMint on your own server by using the binary Docker image automatically published on Docker Hub through GitHub Actions. You should be able to find it by searching for the GeoLabs MapMint image. So the presentation of the MapMint product is now over; let's talk about the next steps. The first next step should be to publish a Docker image for every release, to give the opportunity to select a specific version of the software depending on your application's purposes. Then: document the JavaScript API that is to be used when implementing a new MapMint module; write a short how-to on using MapMint from Docker; create more complex MapMint module examples; and create videos demonstrating the capabilities of the MapMint administration user interface. MapMint is available on GitHub, so please give it a try, don't hesitate to report any issue you may find, and contribute your code back to the repository.
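Since every MapMint function runs through the ZOO-Project, the services mentioned above are reachable both as classic WPS and through OGC API Processes. Purely as an illustration, and not as MapMint's documented API, here is a minimal Python sketch of listing processes and submitting an execution request; the base URL, process identifier and input names are assumptions.

```python
import requests

# Hypothetical OGC API - Processes endpoint exposed by a MapMint / ZOO-Project instance.
BASE = "https://example.org/mapmint/ogc-api"

# List the processes advertised by the server (the JSON counterpart of GetCapabilities).
processes = requests.get(f"{BASE}/processes", headers={"Accept": "application/json"}).json()
for proc in processes.get("processes", []):
    print(proc.get("id"), "-", proc.get("title"))

# Execute one process; the identifier and inputs below are purely illustrative.
payload = {
    "inputs": {
        "InputPolygon": {"href": "https://example.org/data/parcel.geojson"},
        "BufferDistance": 100,
    }
}
resp = requests.post(
    f"{BASE}/processes/Buffer/execution",  # POST /processes/{id}/execution per OGC API - Processes
    json=payload,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
print(resp.json())  # synchronous result document, or a job reference if run asynchronously
```

The same execution could be expressed as a WPS Execute XML request; the JSON flavour is simply easier to consume from non-GIS web applications, which is the kind of integration scenario discussed later in the Q&A.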
Before finishing the presentation, I want to thank Venkatesh Raghavan and Rajat Shinde for their continuous support, helping with the research, Google Summer of Code mentoring, and other tremendous contributions. To conclude, I want to thank the Co-ed company from Ireland for actively using the MapMint product and continuously asking for new developments of it, as the Kel Foncier company from France does. So this is it for me. If there are any questions, I would be happy to answer them, and I hope everybody will have a great FOSS4G. Hello. Welcome to the stream, Gérald. First of all, I wish to apologize for the technical issues; it's Murphy's law. I suddenly lost the sound, and it was not coming in the video, so sorry for that. Sorry to all the participants, but it was a very interesting presentation. So there is a question from the participants: is there any integration with mobile applications? I'm very happy to tell about MapMint for ME, but I would give the stage to you for more explanation. Actually, as I tried to present during my talk, we have an embedded application named MapMint4ME, MapMint for Measure and Evaluation, which is an Android application you can take to the field to record data with or without any internet connectivity. I don't know if that answers the question. Okay, yes. So I have a question; actually I would want you to tell us more about the Google Summer of Code projects. What were the contributions mostly, and what are the datasets which we can use in the MapMint platform for the processing? So actually, when you are accessing the binary Docker image on Docker Hub, you should see a short documentation on how to set up everything to be ready to run MapMint using a very basic North Carolina dataset, which you can use to start publishing your online web map application. So basically, by following the documentation available online to find the data, you shall be able to use a ready-to-use dataset. Okay, I have one more question, and it is related to the OGC API Processes. What are the upcoming objectives with respect to implementing OGC standards in the existing MapMint architecture? Basically, as I tried to present during my talk, the MapMint product is almost 100% based on the ZOO-Project, the WPS engine, which I implemented back in 2009. This processing engine automatically exposes your services as WPS resources or OGC API Processes. So basically it means that from a running MapMint instance, as long as you have configured your OGC API Processes to be exposed to the public, which is the case for instance if you are using the binary Docker image, you have all the existing services, more than 700 for a basic ZOO-Project setup, so probably more than 800 services, ready right out of the box to be used as WPS services or OGC API Processes. Actually, this OGC API Processes support was very useful, because it was required by one of our projects that was mainly linked to another application which has nothing to do with GIS, and it was complicated to show them any XML files, so we took advantage of this OGC API Processes support, which is de facto available within any MapMint instance. Okay, yes, that's very interesting. And I do not see any more questions in Venueless, so: great. Thank you again for your presentation, my apologies for the technical issues, and good luck with the upcoming work. Thank you. Have a great day.
MapMint is a comprehensive task manager for publishing web mapping applications. It is a robust open-source geospatial platform allowing the user to organize, edit, process and publish spatial data to the Internet. MapMint includes a complete administration tool for MapServer and simple user interfaces to create Mapfiles visually. MapMint is based on the extensive use of OGC standards and automates WMS, WFS, WMTS, and WPS. All the MapMint functions run through WPS requests calling general or geospatial web services: vector and raster operations, Mapfile creation, spatial analysis and queries, and much more. MapMint's server side is built on top of ZOO-Project, MapServer, GDAL, and numerous WPS services written in C, Python, and JavaScript. MapMint's client side is based on OpenLayers and jQuery and provides user-friendly tools to create, publish and view maps. In this presentation, the MapMint architecture and main features will be presented, and its modules (Dashboard, Distiller, Manager, and Publisher) described with an emphasis on the OGC standards and OSGeo software they are using. Some case studies and examples will finally illustrate some of the MapMint functionalities.
10.5446/57219 (DOI)
So it is my great pleasure to introduce Joe, our next speaker. This is going to be the final presentation for this room today. Joe, I'm adding you to the screen now. Hello. Hello. And promptly, at your start time, I'm going to add your speaker deck. A pleasure to see you; thank you for speaking at FOSS4G. Indeed. So hi, everyone. I'm going to talk to you about data discovery and metadata creation, untouched by human hands. So this is me. I am the technical evangelist for data discoverability at Astun Technology, which is just a fancy way of saying that I help people find and share data. Astun was founded in 2006, and although we're based in Epsom, which is near London, we've got 25-odd staff spread across Europe now, and we do spatial and data stuff based on the open source technology stack. The first thing is: I'm not a coder. My real passion is enabling other people to do their job, which might be coding or it might not be; anyway, making that as easy as possible for them, preferably with open source tools, obviously. And I'm here to tell you this really important fact, which is that metadata is really hard. It's complex and it's time consuming. It's hard to know where to start, there's a really steep learning curve, and often people don't get any training; it's something that they have to do in addition to their day job. Everything's really complicated, and that makes me sad. Even worse, manual metadata entry really doesn't scale at all. Solutions that work for a small number of datasets don't work so well for thousands. This leads to big problems with people just giving up. They don't complete all of the metadata, it's not accurate, it's not kept up to date, and nobody really knows who's responsible. So we're going to fix that by automating all of the things. Hooray. Now, this is not a new concept. People have been deriving at least some metadata elements for years, and it's not even a new concept to use FOSS tools for it; there are plenty, and I grabbed a few logos for some of them. Obviously it's not just done in geospatial either: things like CityGML have methods for calculating metadata. And of course, for some metadata elements, your file system will derive them, like the title and the location of your data and when it was last updated, and tools like QGIS will show you the spatial extents and useful things like that. However, we're still left with a few elements that we can't derive in this programmatic way, like nice human-readable titles and abstracts and keywords. Those are much harder to do programmatically, and harder again to do at scale. So we have bolted together a number of open source tools and libraries that we're hoping to use to overcome this challenge. Part one is metadata crawler. Crawler is a script for discovering data, be that spatial or non-spatial. It could be in file systems or databases or on websites; it could be raster, it could be vector. And for each data source that it finds, it derives as much of the metadata as it can, the kinds of things that we've already talked about. Crawler is built on some libraries that were built by Titellus for the Talend ETL spatial plugin, so that's, again, kind of open source. And it's a really handy tool because it can be run as a web service or as a cross-platform shell script. So it's a very neat tool.
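To make "deriving what we can programmatically" concrete, here is a minimal Python sketch of the kind of elements a discovery tool can pull out of a spatial file with no human input. This is not crawler itself (which builds on the Talend libraries); it is just an illustration using GeoPandas, and the placeholder strings for the human-only elements are an assumption about how such gaps might be marked.

```python
import os
from datetime import datetime, timezone

import geopandas as gpd


def derive_basic_metadata(path: str) -> dict:
    """Derive the metadata elements that need no human input."""
    gdf = gpd.read_file(path)
    minx, miny, maxx, maxy = gdf.total_bounds
    return {
        # Elements the file system and the data itself can provide...
        "title": os.path.splitext(os.path.basename(path))[0],
        "location": os.path.abspath(path),
        "last_updated": datetime.fromtimestamp(
            os.path.getmtime(path), tz=timezone.utc
        ).isoformat(),
        "crs": str(gdf.crs),
        "extent": {"west": minx, "south": miny, "east": maxx, "north": maxy},
        "feature_count": len(gdf),
        # ...and the ones that still need a human (or some NLP, as discussed later).
        "abstract": "$ABSTRACT_PLACEHOLDER$",
        "keywords": "$KEYWORDS_PLACEHOLDER$",
    }


if __name__ == "__main__":
    # Hypothetical input file, purely for illustration.
    print(derive_basic_metadata("data/tree_planting.shp"))
```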
So we've taken crawler and we've extended it to work with non-spatial data, as I said, and also to output metadata in the UK GEMINI metadata profile. So here's the workflow. Crawler takes databases and files and creates an XML-based metadata record for each of those data sources, and it uses a set of placeholders for any elements that it can't actually derive. So at the first stage of the process, you get a bunch of XML files that you can then take and put in your metadata catalogue. If you've got a metadata catalogue that can take transactional CSW, then crawler can input them directly into the catalogue, and it can also create new records or update existing ones as it needs to. Whilst we've mainly been working with GeoNetwork, this is all standards-compliant stuff, so it would presumably work with other metadata catalogues as well. So the next stage. First of all, we've got the old approach that we used to use: we'd provide our customers with a spreadsheet, which is effectively a second run through all of their data, with a row per record and fields for them to fill in, like the abstract, the keywords, and the contact information. Excel might be clumsy, but people like it and they can copy and paste and do things in bulk, so it works pretty well for this. And we use controlled text and things like that so that we can keep things nice and precise. What we end up with is a CSV file with these additional metadata elements. We then wrap the GeoNetwork API in a Python script and update the records in the catalogue with the additional information from this CSV file. Basically we use GeoNetwork hosted up on AWS, which means that we can let our users run these scripts and update their own metadata records using environments like Cloud9, to save them needing to install Python libraries on their work computers, which seems to panic people for some reason. For extra geek points, rather than running the scripts themselves, they can email the CSV as an attachment, and then we have another set of processing scripts that take that attachment, pop it into an S3 bucket that the GeoNetwork server can get at, and then another Python script extracts the information and updates GeoNetwork. But I want to talk about the cool new approach that we're going for for getting to that point, which is, I have to say, mostly only slightly better than a proof of concept; the code does exist, but really only in Colab notebooks at the moment. So now we can use some Python and some machine learning and natural language processing to try and extract this missing information, the titles and the abstracts and things like that, from the actual data itself. As an example of this, what I've got there is a really long and complicated text. It's a real metadata example; it's about tree planting in Scotland. If we run this through some natural language processing tools, then we can extract the keywords from it, we can geoparse it to find the geographic keywords, and we can auto-summarize it to get a reasonably coherent abstract. What we are intending to do with this is to extend the keyword extraction in particular to pull out things like variations in spelling and synonyms, and because we do a lot of work in Scotland, we're also interested in getting Gaelic place names as well. So we're going to extend this processing to do these additional tasks. So here's the sort of flowchart of what we're talking about.
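Before walking through that flowchart, here is a rough sketch of the kind of Python glue just described for the old approach: reading the completed CSV back in and pushing the extra elements into existing catalogue records. The GeoNetwork URL, credentials, API route and payload shape are all assumptions for illustration, not Astun's actual script.

```python
import csv

import requests

GN = "https://catalogue.example.org/geonetwork"  # hypothetical GeoNetwork instance
session = requests.Session()
session.auth = ("editor", "change-me")           # placeholder credentials


def update_record(uuid: str, abstract: str, keywords: list[str]) -> None:
    """Push the human-supplied elements into an existing metadata record.

    The route and payload below are an *assumed* batch-editing style call;
    a real script would follow the API of the GeoNetwork version in use.
    """
    edits = [{"xpath": "//gmd:abstract/gco:CharacterString", "value": abstract}]
    edits += [{"xpath": "//gmd:descriptiveKeywords", "value": kw} for kw in keywords]
    resp = session.put(
        f"{GN}/srv/api/records/{uuid}",          # assumed endpoint
        json={"edits": edits},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()


# One row per record: uuid, abstract, keywords (semicolon separated), ...
with open("completed_metadata.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        update_record(
            uuid=row["uuid"],
            abstract=row["abstract"],
            keywords=[k.strip() for k in row["keywords"].split(";") if k.strip()],
        )
```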
So we've got our spatial dataset and we run it through crawler to extract the basic information, as we talked about earlier. Then we can pick up the geographic information: we can pick up the extent and get geographic keywords out. We can extract the text keywords, as I said, and then we can do things like refine the keywords and rank them, and compare them with code lists, like the INSPIRE code lists for instance, to give us a set of controlled keywords and free-text ones. So when we combine all of those things together and do our auto-summarizing to create an abstract, then effectively, with the information that we already got from crawler, we can create our entire metadata record. Now, we know that we can train this machine learning workflow on a huge corpus of existing metadata records, and we've also got quite a bit of best practice guidance for data discovery around search engine optimization, lengths of titles and abstracts and things like that, so we've got a set of rules that we can use as well. At this point we've got an effectively modular workflow, and we're trying to get to the point where it's fully modular so that we can avoid any silos or technological lock-in. We'd like to get to the point where we're not saying that you have to use GeoNetwork or that you have to use specific machine learning libraries. At the moment we're using GeoNetwork, but we'd like to get to the point where this is all quite agnostic in terms of the programs that we use. So the end result will be metadata records that need minimal human intervention. Now, you probably do actually want a human person to review them before publishing. We don't want to go publishing things that people haven't had the chance to check over, and of course we're expecting that our machine learning process will need some refinement as well. What we're hoping is that people will just be able to look at the records very briefly, say "yeah, I'm happy with that", and then publish, which is a big step forward from where we are now, where even if we've derived a set of metadata records, they've still got 3,000 abstracts to fill in, which is a bit of a blocker. So the usual caveats apply with this kind of thing. We have all of the bits and pieces, but as I said earlier, they mostly exist as Google Colab notebooks at present, and it's going to take a lot of work moving forward to scale all of this. We're going to want expert assistance on some of the machine learning side of things, and we'll also speak to some search engine optimization experts, to really refine our results and make sure that what we're doing is actually worthwhile. So here are a couple of useful references to some of the technologies. We've got the link to Talend Spatial, which is, as I say, the kind of starting point for crawler. It's a really useful tool for doing clever ETL things with your geospatial data, and we have to thank Titellus an enormous amount for creating Talend Spatial and continuing to maintain it. And also here's a link to the Geospatial Commission search engine best practice guide; if you were in the talk I was doing earlier with Paul van Genuchten, we discussed it then, but basically it's a lot of really useful information for data publishers wanting to make their metadata as easy to find and easy to share as possible. And actually, that's it from me. It's slightly shorter than the earlier one, but that's probably not a problem.
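As a rough illustration of the keyword-extraction and auto-summarisation steps described above, here is a deliberately simple, dependency-free Python sketch based on plain term frequencies. It is a stand-in for the proper NLP and machine-learning tooling the talk refers to; the stop-word list, thresholds and input file name are arbitrary assumptions.

```python
import re
from collections import Counter

STOPWORDS = {
    "the", "and", "of", "to", "in", "a", "is", "for", "on", "with", "by",
    "this", "that", "are", "as", "be", "from", "or", "at", "an", "it",
}


def tokenize(text: str) -> list[str]:
    """Lowercase word tokens, with stop words removed."""
    return [t for t in re.findall(r"[a-zA-Z][a-zA-Z-]+", text.lower())
            if t not in STOPWORDS]


def extract_keywords(text: str, n: int = 10) -> list[str]:
    """Rank candidate keywords by simple term frequency."""
    return [word for word, _ in Counter(tokenize(text)).most_common(n)]


def summarise(text: str, n_sentences: int = 3) -> str:
    """Extractive summary: keep the sentences containing the most frequent
    terms, returned in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freqs = Counter(tokenize(text))
    scored = sorted(
        enumerate(sentences),
        key=lambda pair: sum(freqs[t] for t in tokenize(pair[1])),
        reverse=True,
    )[:n_sentences]
    return " ".join(s for _, s in sorted(scored))  # restore document order


if __name__ == "__main__":
    # Hypothetical input: the long "tree planting in Scotland" style text.
    text = open("tree_planting_record.txt", encoding="utf-8").read()
    print("Keywords:", extract_keywords(text))
    print("Abstract:", summarise(text))
```

A production pipeline would swap the raw frequency counts for trained models, add geoparsing against a gazetteer, and check candidate keywords against controlled lists such as the INSPIRE themes, but the overall shape stays the same.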
So if you'd like to get in touch with me to find out a bit more, there's my email and my Twitter handle, or you can get in touch with me at astuntechnology.com; there's our Astun Twitter handle as well. So that's it from me. Thank you very much. Thanks, Joe. That was great. We have a number of questions coming in from your adoring public. There are also lots of little claps and emoji things going by. So the first question, I'll just put it up here, is: what is the difference between Talend Data Integration and the GeoNetwork harvester? So the GeoNetwork harvester can harvest metadata records, but Talend actually discovers the data sources themselves. If you point Talend at a database, for example, it will find all of the spatial tables in that database and create metadata records for them, or you can point it at files in a file system. We're about to extend it to try and extract some decent metadata from things like PDFs, but that's a bit of a work in progress. Whereas the GeoNetwork harvester is for ingesting metadata records themselves, rather than working with the actual data. Okay, thanks. The next one you might have already answered in your talk; it was asked a little bit earlier in the program. It was: can you generate keywords from the abstract? I'm sure we can. Well, we know we can: if people already have an abstract, then we can work at generating keywords from that. We are coming from the position where people might not even have an abstract yet, so it's about creating that abstract and the keywords and a nice human-readable title. But one bit that I didn't mention was that we're envisaging a kind of second round of refining these keywords and things like that, based on search engine analytics. So if people are not finding records, is it because we need to add in some different keywords, or things like that? Okay, thanks. The next question is fairly long. This was in reference to the auto-summary: what do you think about the active metadata concept, where you apply machine learning models to metadata so that it can be used to make decisions and trigger actions? Wow. That's probably something that we would want to come back to when we've had a good stab at this first bit of machine learning. That's the kind of question where you get asked to visit the booth in the exhibition centre later. Maybe this time next year, rather than later in this FOSS4G. But that's a really interesting question and one that I'll certainly spend some time thinking about. We're a little bit new on this machine learning journey, I think, to do really clever things like that. Okay, let me see if I can figure this one out. So, when you are linking the keywords to controlled vocabularies, with URIs as names for things, would that then help discover related data across your GeoNetwork instance? You could almost believe that I'd actually planted this question, although I swear I didn't. We sincerely hope so. We're developing the idea of a kind of meta-catalogue sitting on top of a bunch of metadata catalogues that might help large organisations or governments to pull together information from many catalogues. So yeah, we would hope that that would be via keywords or via other linked data things. But certainly that's something that we want to do, because a lot of the organisations that we work with have many different catalogues, and one of the big problems that people have is that they don't know which catalogue to look in.
So if we can make that much easier, then I think that would be a really good thing. Excellent. The next question I've got is: what sort of approaches do you have for ensuring proper review of metadata pre-publishing? Okay, so with GeoNetwork, we know that there are already built-in workflow methods for submitting metadata for review, so that it has to be reviewed and approved before it's published. So I think, certainly from the GeoNetwork perspective, that's already kind of built in, using the workflow that's already there. For other metadata catalogues, I'm not sure, but generally speaking, I think we would definitely want to get the buy-in of the people who own the data. We want them to trust the approach and we want them to be happy with it, so we definitely want them to review things first. It would create a draft and then it would go through a normal review process. Yeah. Okay, and the last question I've got is a little bit more about the GeoNetwork project: is GeoNetwork on the STAC bandwagon, as so many seem to be? Jodi, do you know the answer to this question? I really do, actually. So STAC is an early precursor to, like, all of the OGC API protocols, and so it is really a lot closer to the CSW specification rather than the catalogue specification which GeoNetwork focuses on. So if you're looking for an OGC API protocol that GeoNetwork is going after, we're really looking at the OGC API Records protocol, which is focused on metadata content, rather than STAC, which, yes, has metadata, but those metadata are associated with specific raster images. So it's a little bit more like a WFS with some attributes and a really big geometry that happens to be a cloud optimized GeoTIFF. It would be interesting if we could maybe think about harvesting the metadata from a STAC, but I would view STAC as being closer to a WFS than a catalogue service. It was such a strong technical approach that it's very influential in terms of rallying the other OGC standards to head in that direction. Thanks, Joe. Everyone's kept you pretty busy with questions; we've almost caught you up to your time slot. Indeed. And other people are saying your questions are not planted, so that's kind of good. No, honestly, but I couldn't have asked for better questions, mostly. Excellent. Okay, well, thank you so much for speaking twice today. Do you have any other talks scheduled for the week? I do, but not for FOSS4G. There's a conference in the UK called Data Connect 21, which is happening this whole week as well, and which is about doing things with data in government, and I'm talking tomorrow about why standards are fun and why people should get involved with them. So any UK people, find me at Data Connect 21 tomorrow. I'm glad that we got a little bit of Joe in our FOSS4G schedule; it makes everyone happier. Thank you. Okay, I think that's it for me to wrap it up. Thank you so much. Everyone hit the little clap button in the channel; I'm just holding it open for you to clap, and then we'll wrap things up. Okay, thank you very much. And I need to figure out how to shut you off.
Everyone knows metadata is A Good Idea and Very Important, even more so given the current focus on data sharing. Unfortunately it's also time-consuming, hard work, and a bit boring. Assuming you've even kept tabs on all of your data sources, manual metadata creation also doesn't work well at scale. Out of date, inaccurate, or incomplete metadata can lead to bad decision-making with real-world consequences. Conversely, good metadata can help make your data far more discoverable on the web. What if you could automatically keep track of all your geospatial data, create fully valid, high-quality metadata records, including the fluffy stuff such as abstracts and keywords, and keep it all up to date? I'll demonstrate a potential workflow for reaching metadata nirvana using entirely open source tools such as GeoNetwork, Talend ETL, and some Natural Language Processing libraries. While the underlying subject is complex, the talk will be pitched at an accessible level.
10.5446/57222 (DOI)
Good morning, good afternoon or good evening, depending on where you are. My name is Marco Minghini. I will be the chair of the next talk of the academic track and I'm happy to introduce the next speaker, Natalia Morandeira, who is a doctor in biological sciences. She's a researcher at the National Scientific and Technical Research Council, working at the Institute of Research and Environmental Engineering of the University of San Martín. Her research focuses on landscape ecology and wetland plant ecology, aided by remote sensing and geographic information systems tools. Today she's presenting a talk titled Monitoring active fires in the lower Paraná River floodplain: analysis and reproducible reports on satellite thermal hotspots. Natalia, the floor is yours. Thank you, Marco, for introducing me, and thank you all for coming to my talk. Today I'm sharing some of my work during last year, and my slides are available; I'm sharing the link through the chat. First I want to give some background on the environmental topic. Floodplains and wetlands cover more than 20% of the South American continent, and among these wetlands, large areas are covered by floodplain wetlands such as those associated with the Amazonas, the Orinoco and the Paraná rivers. The dynamics of these large floodplain wetlands depend on flood pulses and climatic conditions. In this picture I'm showing an aerial view of the Paraná River floodplain in a high water level condition, in the year 2010, and you can see that the freshwater marshes are very green and there are a lot of shallow lakes with open water. However, last year, and this year too, the Paraná River is extremely low and the floodplain is very dry. In this aerial view of the Paraná River floodplain in Argentina, a picture taken last year, you can see that the freshwater marshes are very dry, there was a lot of dry biomass, here are some native forests, and water is reduced to permanent shallow lakes with open water. In this context, the area was affected by extended fires and at least 329,000 hectares were burned. This area corresponds to 14% of the Paraná River Delta, and about half of the area belongs to natural protected areas. So last year I aimed to monitor the wildfires using spatial data. In particular, I used satellite thermal hotspots. Now I'm sharing some physical background on these spatial data. The energy emitted by the Earth in the thermal infrared wavelengths can be related to surface temperature, so we can measure this emitted energy with remote sensing: sensors on board satellites can measure the thermal infrared emissivity. Thermal hotspots are very hot pixels that are probably related to active fires. NASA publishes fire hotspot products within three hours of the acquisition of the satellite imagery, and these data are freely accessible through the Fire Information for Resource Management System (FIRMS). The data can be downloaded as point vector layers, and you can also visualize them online, like these red hotspots. These products are generated from two different sensors, which differ in their spatial resolution. The sensor resolution needs to be taken into account when interpreting the results. So if we imagine a fire like the one shown in this picture, and this fire is monitored with a low resolution sensor, probably few hotspots are detected, each one corresponding to a large hot area.
While if the same fire is monitored with a medium resolution sensor, we will probably detect more hotspots, each one corresponding to a smaller hot area. So this needs to be taken into account when interpreting results and comparing fire activity throughout the years. My aim during last year was to process these point vector layers, and I constructed the workflow that I'm presenting here to reproduce the same analysis, automatize most of the steps, and generate bilingual reports. The aim was to do quick analyses and summarize the information on the fire situation, mainly because peers and journalists were asking us for updated information, and also because the lockdown and the fires prevented us from conducting fieldwork. So we had to work with the satellite information, and we also have our background in the study area, because in our lab we have been conducting studies in the Paraná River Delta for almost 20 years. So that was the general situation, and it seemed fine, but the problem was that we needed to repeat the same analysis once and again: to write dissemination articles (here is a talk at our university by Patricia Kandus), to post on social media, to write these articles, to respond to journalists. That's why I needed to make the workflow reproducible. In the first month, in June, I designed a QGIS model to process the spatial data. In this model, the input layers are the hotspot products from the VIIRS and MODIS sensors. You download four layers, the VIIRS current data, the VIIRS archive data, the MODIS current data and the MODIS archive data, and with the model I merged the data, clipped the data to the study area, then reprojected it and exported output layers with all the active fire records in a single shapefile. This is how the model is seen from the view of an end user, and one problem I found was a source of error: manual layer selection. You can note here that you have to select each of the layers, and for example in this screenshot I mis-selected the same layer twice. So this is a source of error, and it is also time demanding: unzipping the shapefiles, selecting the layers, and then the layer needs to be further analyzed and exported to R to construct the plots and summarize the information. QGIS has the advantage that it can generate very nice maps, and for example I generated this animation; this is an example of an animation included in a dissemination article co-authored with Patricia Kandus and Priscilla Minotti. This animation was generated with the Time Manager plugin, which was recently replaced by the integrated temporal controller now included in QGIS. So next I wrote code in R to account for all these steps and to produce the plots and the reports, and first I'll show what you need if you want to use the script as an end user. You only need to have a polygon layer of your study area and save that polygon layer in a specific folder. Next you go to the FIRMS NASA web page and download the archive data; here you can download the data with a very broad bounding box, just by drawing a polygon. Then you save the ZIP files to a specific folder in your R project. Next, you just run the R Markdown code. These are the three steps to run the code, and now I'm sharing the processing steps that are included.
The code is written using mainly the libraries tidyverse, sf, ggplot2 and rmarkdown, and it includes file and geometric operations such as: reading the ZIP files that were saved in a given folder and unzipping the data; reading the hotspot point shapefiles and creating spatial objects; and looking for string patterns in the file names to create the hotspot objects, which avoids the source of error of manually selecting the point layers. Then come the same geometric operations that were included in the QGIS model: merging the objects, reprojection, and clipping to the study area. Next, you can obtain an interactive map of the hotspots of the current year and export the final layers to GeoPackage. Other steps include data cleaning, data tidying and producing the plots and the report, that is, general data cleaning and tidying of the attribute tables of the layers. You also obtain plots in English and in Spanish, showing daily hotspots and cumulative hotspots, and you can export the images of the plots. I'm showing next some of these plots, as well as the annual comparisons and the historical activity comparisons you can obtain. This is an example of a report in Spanish. The processing time, for both the geometric operations and the data tidying and report generation, is less than two minutes on my laptop. So, to illustrate the workflow, I'm showing some results of the situation last year in the Paraná River Delta. This is a screenshot of the thermal hotspots from VIIRS data in the Paraná River Delta: in the interactive map, you can click any of these points and obtain information about them. Next, you obtain a plot summarizing the number of daily VIIRS hotspots per day, which I used to monitor what was happening last year. I used VIIRS hotspots because they have a medium resolution, with a 375-meter pixel, which is better than MODIS. Here you can see that the month with the highest number of hotspots was August, which accounted for almost 40% of the total hotspots of the year. Next, you can also obtain a plot showing the cumulative number of hotspots over the year; the total number of VIIRS hotspots recorded in the year was almost 40,000. You can also compare the historical fire activity for your study area. Here, for example, using VIIRS data, with its 375-meter pixel resolution, you can see that the total number of hotspots was the highest in the last nine years; VIIRS data are available since 2012. So if we want to analyze the historical fire activity from earlier years, we need to use MODIS data. MODIS data are available since November 2001 and the resolution is 1 kilometer. The number of MODIS hotspots during last year was almost 9,000, the highest since 2009. This is a plot that shows the number of MODIS hotspots recorded per year, and you can see that there was a high number of MODIS hotspots last year. But 2008 was also a very dry year and the Paraná floodplain was dry; in 2008 the number of hotspots was larger. Lastly, I want to include an update on the fire activity during this year, because you may know that the Paraná River remains at very low hydrometric levels and fire activity continues. I updated the analysis yesterday, and I obtained that this year more than 11,000 VIIRS hotspots were recorded. This is a lot, but it is much less than the fire activity that was observed in 2020.
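The published workflow is written in R (tidyverse, sf, ggplot2, R Markdown) and the real code lives in the repository linked at the end of the talk. Purely to illustrate the geometric steps just described (unzip, read, merge, clip to the study area, count hotspots per day), here is an analogous minimal sketch in Python with GeoPandas; the folder layout and the ACQ_DATE column name are assumptions about the downloaded FIRMS files.

```python
import glob
import zipfile
from pathlib import Path

import geopandas as gpd
import pandas as pd

DATA_DIR = Path("data/firms_zips")    # folder with the zipped FIRMS downloads (assumed layout)
STUDY_AREA = "data/study_area.gpkg"   # polygon layer of the study area

# 1. Unzip every archive dropped into the folder.
for zip_path in DATA_DIR.glob("*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(DATA_DIR / zip_path.stem)

# 2. Read every hotspot shapefile and merge them into one layer (FIRMS ships in WGS84).
shapefiles = glob.glob(str(DATA_DIR / "**" / "*.shp"), recursive=True)
hotspots = gpd.GeoDataFrame(
    pd.concat([gpd.read_file(shp) for shp in shapefiles], ignore_index=True)
).set_crs(epsg=4326, allow_override=True)

# 3. Clip to the study area.
area = gpd.read_file(STUDY_AREA).to_crs(epsg=4326)
hotspots = gpd.clip(hotspots, area)

# 4. Count hotspots per acquisition day, ready for plotting in a report.
hotspots["date"] = pd.to_datetime(hotspots["ACQ_DATE"])  # assumed FIRMS acquisition-date field
daily = hotspots.groupby(hotspots["date"].dt.date).size().rename("n_hotspots")
print(daily.tail())

# 5. Export the merged, clipped layer for further use (for example in QGIS).
Path("outputs").mkdir(exist_ok=True)
hotspots.to_file("outputs/hotspots_study_area.gpkg", driver="GPKG")
```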
This plot shows the monthly number of VIIRS hotspots in 2020 and this year. So our future work will include an analysis of the relation between the historical fire activity and the hydroclimatic trends. I'm also working on the estimation of the burned areas, because, more than the fire hotspots, it is the burned areas that matter for analyzing the ecological impact of the fires. Here you can see, for example, an area of grassland that was burned in the Paraná River floodplain. An idea was to use the thermal hotspots as seeds and grow regions starting from these hotspots. I'm using the RSAGA library for growing these regions; each region starts in an area with a hotspot and grows according to spectral similarity. I'm working now with an index called the Normalized Burn Ratio, which is computed from Sentinel-2 imagery. Thanks for your attention. Muchas gracias. Here are my contact details, here is the link to the full article in the FOSS4G proceedings, and here is also the GitHub repo of this project. I also want to share that most of the pictures featured in my slides belong to a photographic essay project by Sebastián López Brach, funded by National Geographic. I'm grateful to Sebastián, and I also want to share his Instagram account so you can see more of his work. Thank you. Thanks a lot, Natalia, for this exciting talk and, above all, for excellent research. I just want to make one short note before opening the floor for questions, and it is about a word that is in the title of your talk: reproducible. This is really a keyword, and thanks for making your research reproducible. It should become a standard practice; unfortunately, this is not always the case. So thanks for that. The link to the repository is in the slides. If you need the link now, I think we can paste it in the chat, or you can even more easily just find Natalia's paper associated with this talk, which is published in the FOSS4G proceedings in the ISPRS Archives; look for the ISPRS Archives for FOSS4G 2021. There are lots of additional details there. Thanks also for the pictures and the link to the Instagram account; of course, we don't like to see fires, but we like to see pictures, so let's hope that the situation will improve. There is one question, actually also a question I wanted to ask, about how this research, this data, or its outcomes, especially regarding the historical analysis, will be used by any governmental institution or anyone else: civil protection organizations, fire brigades or others. Because that's very important, and I hope this will be the case. Thanks for the question. Our environmental ministry, the Ministerio Nacional de Ambiente, is using fire records from FIRMS NASA; not my analysis, but they are conducting their own processing and analysis. My results were used mainly to disseminate what was happening, to analyze which areas were being burned, and to share that in social media. We also work with some non-governmental organizations that were reporting intentional fires, or accidental fires followed by land use changes. So that was the use of my analysis. Thanks a lot. This definitely answers the question. We have other questions from the audience: do you correct for errors in the active fire products? No, for the moment I didn't correct for errors in the active fire products.
I had ground truth data, related both to conversations with local settlers and with journalists, and I also have some reference points obtained from flights. I used that ground data to check whether the active fires were really fires or not, but I didn't correct the data. My aim was to conduct quick analyses during the situation. Thanks for the question. Questions keep appearing in the venue, so the next one: did you compare the active fire records of the two sources that you use, MODIS and VIIRS, and do they show any difference? The difference was that VIIRS, since it has a better spatial resolution, detected smaller fire areas. If you use VIIRS data to detect burned areas, you may detect the burned areas better because of the better resolution. So VIIRS data showed more hotspots, and small fires were more easily detected than with MODIS. Thanks, Natalia. We can quickly go through the last question: can you use the results for projects to reforest or restore burned areas? It's a good question. The problem in the area is that some of the areas that were burned were changed to another land use; for example, they were converted from natural areas, or areas with cattle, to agricultural areas. So that is the same general problem. I don't know if reforesting is the main issue here in the area, but you can probably use the data to locate which areas are being more seriously affected and to take action on the ground. Thanks a lot, Natalia, for answering the questions and, once again, for giving a very interesting presentation. We need to close here. I would also like to thank the audience for the good questions and the input. I hope, and I'm sure, you also liked the talk. If this is the case, please let us know and let Natalia know using the applause button in Venueless. Thanks a lot, and I wish you all a good continuation of FOSS4G 2021. Thank you, Marco. Bye bye.
Floodplain wetlands play a key role in hydrological and biogeochemical cycles and comprise a large part of the world's biodiversity and resources. The exploitation of remote sensing data can substantially contribute to monitoring procedures at broad ecological scales. In 2020, the Lower Paraná River floodplain (also known as Paraná River Delta, Argentina) suffered from a severe drought, and extended areas were burned. To monitor the wildfire situation, satellite products provided by FIRMS-NASA were used. These thermal hotspots —associated with active fires— can be downloaded as zipped spatial objects (point shapefiles) and include recent and archive records from VIRRS and MODIS thermal infrared sensors. The main aim was to handle these data, analyze the number of hotspots during 2020, and compare the disaster with previous years' situation. Using a reproducible workflow was crucial to ingest the zip files and repeat the same series of plots and analyses when necessary. Obtaining updated reports allowed me to quickly respond to peers, technicians, and journalists about the evolving fire situation. A total of 39,821 VIIRS S-NPP thermal hotspots were detected, with August (winter) accounting for 39.8% of the whole year’s hotspots. MODIS hotspots have lower spatial resolution than VIIRS, so the cumulative MODIS hotspots recorded during 2020 were 8,673, the highest number of hotspots of the last 11 years. Scripts were written in R language and are shared under a CC BY 4.0 license. QGIS was also used to generate a high-quality animation. The workflow can be used in other study areas. An R workflow to obtain reproducible reports on active fires monitored with satellite products is presented. The work is an ecological application of spatial analyses conducted with open-source software (R, QGIS). By presenting this approach and results, I aim to highlight: the importance of using remote sensing data and ancillary geographic data to monitor large-scale disasters; how generating reproducible workflows can facilitate and improve geospatial analyses, and lastly, I want to show the usage of open-source geospatial software to account all these tasks. Wildfires are a current environmental topic in South American wetland environments. The case study area is the Paraná River floodplain.
10.5446/57225 (DOI)
Hi, I'm back. I'm going to remove this because I don't think the internet connection is going to be good enough for it. So can you see the first slide now? Yes. Now I can see it and I can hear you perfectly. I can see it changing. You can see it changing. Okay, good. Yeah. Okay. All right. Fine. We're ready when you are. Okay. Then let's get started with your talk, Dr. Parky Shabir: Open Source GIS and Mining, a Roadmap. All right. Thank you very much for having me, Joshi. So this talk is Open Source GIS and Mining: a Roadmap, and it's going to be basically a call for open source GIS developers to look around the corner and think a little bit more about what they could do for the mining industry, and about the sizeable, untapped market that there is for them. So my name is Evan Pakus. I hold a PhD in geology from the University of Western Australia, which I obtained in 2018, and for the past five or six years I've worked in the METS industry, that is, the Mining Equipment, Technology and Services industry, serving the mining industry with technology, which includes software and equipment, everything from dump trucks all the way to drills. During my time as a consultant, working for a software development company developing proprietary 3D geological modeling software, I noticed that at conferences and in business meetings there were never any open source alternatives presented, especially when it came to GIS. And at the end of the day, everything could in principle be done with open source technology: a database with PostGIS and an actual client with QGIS. This needed to be remedied. One year ago I joined Oslandia, where I am leading the expansion of Oslandia into the METS sector. And this is a call to say that this doesn't have to be our own private backyard; there is actually enough space for plenty, and lots of the technologies being developed in the open source GIS community are actually applicable to mining, but there has been some lack of communication between the two, which is mostly due to lack of awareness. To move on, I'm going to start with the basic concept of demand. I hear people, especially in open source communities, say that mining is something that is about to disappear, to go away, that we don't want it anymore. So I would like to make it very clear here: we're not talking about the oil and gas industry and coal, we're not talking about fossil fuels; we're talking about the mining industry, so we're mostly talking about metals. And to bring the point home, here is a table of the elements with some elements highlighted in green and in red, some in both. What this table tells you is which elements are considered critical by academics, basically economic geologists, and by the European Commission. That's fairly recent, 2018 to 2020. You can see that some of them you've probably heard of, like the rare earth elements at the bottom here, critical elements that we have trouble sourcing for electronics. But elements that people don't usually hear much about, like nickel, copper, zinc and cadmium, are actually under high supply tension, and they are very much needed for the future, especially for the transition to renewable energies: for electric motors in the case of copper, for example, and, when it comes to cadmium, nickel and zinc, also for batteries.
Now, a concept that is often misunderstood: whether you are at the conference in Buenos Aires or at home, you might look around and have a misconception of what is rare and what is not around you. Just to give you a quick example, I'll ask a quick question: which is rarer between copper and titanium in the Earth's crust? I'd say that 90% of people will tell you that titanium is surely much rarer, much more difficult to obtain, than copper, whereas actually copper is about 30 times less abundant in the Earth's crust than titanium. This essentially means that copper is actually a rare resource, that we need a lot of it, and that there is a lot of investment going into it. To drive the point a bit further, staying on copper: this is basically data followed by a projection (2020 is about midway through this graph) of the total demand versus supply for copper. Teragrams on the y-axis are basically millions of metric tons. The primary supply is the mining supply, what comes out of mines, and the secondary supply is essentially recycling, which is why it only starts appearing in 2010. And this is a very ambitious, very optimistic secondary supply projection, starting from 2020, showing exponential growth here. The total demand also shows exponential growth, mainly driven by the shift to renewables, the shift to electric cars and electric heating systems, and by the population increasing and global living standards improving. The primary supply is set to start diminishing, let's say for real, around 2050. Now, this is a bit too pessimistic on this end, because as copper becomes rarer and more difficult to source, the price will go up, technologies will improve, and we will mine out deposits that we currently know about but don't consider economical to mine. This is just a bit of context to show that mining is really here to stay and is actually very much needed to face the challenges of the future, especially to power the shift to cleaner technologies. Now, of course, with that comes the notion of investment: if I'm encouraging open source GIS developers to get involved in the mining industry, I have to talk a little about how much is being invested in it. So this is just a 2017 figure, which is actually a very poor figure; it was a very poor year. These are investment figures, in millions of US dollars, worldwide, with the two major players being Canada and Australia. And this was only for mineral exploration, not for investments such as building a mine from scratch. So here you've got about 8 billion US dollars invested in the year 2017, just for exploration, and it was a bad year for mining. There's a mining boom currently going on, so there's a lot more being invested now, on the order of 30 billion dollars in exploration alone. That is: we don't even know if there's a deposit; we have some idea with favorability mapping and the like, which, by the way, relies a lot on GIS technology, and we are basically just trying to find something. So these are high-risk investments. But when it comes to actually investing in a mine, a single mine could absorb that much investment: a single mine could cost 10 billion dollars to build and set up.
And this is also the scale; the magnitude of these enterprises is often not understood by the general public. So basically, this is just a small, tiny graph that I asked our graphics designer to make, and I really like it very much. Anyway, this is just to tell you that the mining cycle goes from exploration to development to early production, full production and reclamation. Reclamation is when we restore the environment, not exactly to its previous condition, that's not really possible, but to something that is going to impact the environment less. So here, the investment I was talking about was in the exploration stage, but it's in development and early production where the most is being invested, and then full production, where actually there is a return on investment. Now, the next thing we're going to do, we're actually going to look at a more interesting map. We're going to think of location, now that I've set the scene and we have a bit of context on why mining matters and why we should care. The location of a mine is critical to its economical value. And if you look at this map, and this is of course a map of Australia, these are all mining sites for metals. Base metals, strategic metals, rare metals, everything. And this is of course sourced from Geoscience Australia, the geological survey of Australia. This is from 2020, but it's basically current. And what this shows is that I'm not showing the geology here, and you might think to yourself, well, of course, these mines must be related to geology, and that is completely correct. The right conditions have to be met for a deposit to be found. But most importantly, if you look at this map, you realize that the likelihood of finding a mine is strongly correlated to distance to the shore. And that is because a mine is an economical entity and exists only if it's worth mining something at that location. In other words, if a mine is too far off from any existing infrastructure, it might be difficult to make a case for its profitability. And this is the reason why there are mines all over the coast: they need to be close enough to harbors to actually ship the ore out, as Australia is producing a lot more than it needs for its own supply. There are only around 20 million inhabitants, but it produces pretty much all the iron for everybody, for the whole of humanity, and ships it to China. To make it more obvious, this is a map of Western Australia from the GSWA, the Geological Survey of Western Australia. And what we can see here is that most mines are located along roads. So even the mines that are actually inland happen to be located along main roads. And the reason why we're talking about this is to basically make everybody understand here that a mine is essentially about logistics: the logistics of extraction, the logistics of the processing, the logistics of the shipping. This is what is going to cost you money, and no matter how big a deposit is or how high the grade is, if the logistics don't line up, the accounting doesn't line up, and there is no mine, because there is no point sending anybody there at a loss. So basically a mine is not only these things, like on-site monitoring drilling on the right and, on the left, crushers, which are basically going to take the ore that has been blasted off the face in big chunks and then reduce it into smaller chunks to be processed further. And it's not only large dump trucks taking these large slabs of rock out to the processing plant.
And this is the image that many people have of these mines. So basically, you end up with this cute little diagram of what a mine is. By the way, mines are a lot, lot more complicated than that, but we'll see that in a minute. So these things indeed exist, and that's pretty much how it looks, at least as a toy example. It's mostly about infrastructure and networks, and here we are entering GIS territory. It's about infrastructure management, network management, deploying infrastructure, designing it, and then running the networks, monitoring them, maintaining them, and of course using them. So about networks, there are going to be lots of different types of networks. For example, as I was talking about the connection to the shore, there are going to be railway networks. So this is all privately owned, privately operated, privately designed, and privately deployed. This essentially means that the miners, the mining companies, have their own systems for handling railway traffic, for handling road traffic, for handling the internet. They have their own cable networks for electricity and internet, telecommunications. They're going to have their own water network; on the right here is actually a camp. In the middle of the bush in Western Australia, in the outback, you're going to have to build a city from scratch. And this is why the investment can be so large, and this is a city only for the people working at the mine. It's going to have all the amenities. It's going to have electricity, it's going to have gas, it's going to have the internet, it's going to have water piping for, of course, sewage, drinkable water, industrial water. All of the networks, the infrastructures and the infrastructure networks that you can think of for a city or a region exist at the scale of a mine and have to be completely set up from scratch as soon as the mine is a bit too far away from state infrastructure, nationwide infrastructure. It can even go to such a point where a mine has its own power station, and this is actually the case at Brockman, an iron ore mine owned by Rio Tinto: it has its own power station, it generates its own electricity. They might have their own dam, they might build a dam to generate hydroelectricity, or they might have a gas powered station, or they might have a solar power station, as there's been a shift to that to be less reliant on external supplies. And of course, they're going to have their own ways of managing it, like a small city, like a small country almost, at that point. And to give you an idea of the scale, this is Brockman 4, an iron ore mine owned by Rio Tinto in the north of Western Australia. It's really far from everything, and the scale at the top left corner is 1500 meters, not very visible, but this white bar here is 1500 meters. This is site number four, which is easily 10 kilometers across. There's an airport, and it's not an airstrip, it's a paved runway, and there's a camp over here. This is about 15 kilometers in size, and this is one out of eight similar sites. There are other sites and other camps all around; there's only one airport, however. If you just zoomed out a bit of this satellite imagery, you'd see a lot more. So these are massive, city-scale infrastructures that are privately managed and privately owned, and there is a lot to do.
And it can even go down to things like ventilation networks for underground mines, or telecommunications networks underground. Also, they're going to have their own mobile network, actually their own GSM network, that they're going to be running. So what about the contribution that Oslandia has been making to this? What is possible for open source GIS to get involved in? Well, I've mostly been talking about early production and full production here for my examples, but what we've been involved in is exploration and production with the mining industry, with Orano and Sandfire Resources being the two pioneering clients. And there is a lot of interest in the mining industry for open source GIS technology, and for actually having a little bit of competition with, let's say, legacy solutions that have been around for a long time, that are showing signs of age, and that are basically crying out to be disturbed a bit, disrupted a little bit. In any case, what we've done in this scope at Oslandia is Albion, for Orano, which is 3D geological modelling for the purpose of exploration. So this is to find roll-front-style uranium deposits, and it has cut down the exploration time, at least the data processing that was tied to the acquisition of the exploration data, from about four months to six weeks compared to the product they used to use, the other solution they were relying on, which was of course not an open source QGIS plugin, but a proprietary plugin running on proprietary GIS software that starts with an A. With that said, we also worked on QGeoloGIS. This is a strip log viewer, which lets geologists visualise all the data that they collect when they're doing drilling. So this is useful at all stages, really, except for reclamation, as there is drilling going on every day on a mine. Usually, at the end of the life of a mine, there are tens of thousands of holes that have been drilled for monitoring, for exploration, for development. And this is a tool that lets users visualise this data by selecting a hole in the GIS package, just selecting one of these points, and then they can visualise the lithological data, geochemical data, geophysical data, and of course the technical data that is tied to the drilling itself. And what we are actually working on at the moment, those were projects that have come to completion, is OpenLog, which goes a step further. That is, instead of developing specific plugins to meet the needs of one specific client, we've gone out and we've asked consultants, we've asked miners, we've asked geological surveys what they wanted most out of QGIS to help them with mining activities, and their answer was drill hole visualisation in 3D, in 2D, and of course on the map. So partially replicating the functionalities of QGeoloGIS, that is, being able to visualise strip logs, but also being able to draw sections on a map and project drill holes onto these sections, basically a Cartesian projection, being able to visualise these drill holes in 3D, and being able to visualise lithologies, basically which types of rock are at depth, colour coded in 3D so that they can have better spatial awareness working in the field, with the flexibility of having basically unlimited licences, basically unlimited installs, being able to run it on a tablet with QField, being able to run it on their workstations and deploy it at will.
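To make the section-projection idea a bit more concrete, here is a minimal sketch of the kind of Cartesian projection described above, written with the shapely library: drill hole collars are treated as points, the section as a line, and each collar gets a chainage along the section and a perpendicular offset from it. The coordinates, hole IDs and the 250 m corridor are made up purely for illustration; this is not OpenLog's actual code.

```python
from shapely.geometry import LineString, Point

# A section line drawn on the map and a few drill hole collars.
# All coordinates and IDs below are made up for illustration.
section = LineString([(500000, 7500000), (503000, 7500500)])
collars = {
    "DH001": Point(500800, 7500210),
    "DH042": Point(502100, 7500120),
    "DH107": Point(501500, 7501900),  # too far from the section to be shown
}

corridor = 250.0  # only project holes within 250 m of the section line

for hole_id, collar in collars.items():
    chainage = section.project(collar)   # distance along the section
    offset = collar.distance(section)    # perpendicular distance from it
    if offset <= corridor:
        print(f"{hole_id}: {chainage:.0f} m along section, {offset:.0f} m off-section")
    else:
        print(f"{hole_id}: skipped, {offset:.0f} m away from the section")
```

Here project() gives the position of the collar along the section and distance() its perpendicular offset, which is enough to place a hole on a 2D section view; the down-hole traces and lithology intervals would then be drawn at that chainage.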
So this is what we are currently raising funds for, and we have managed to get about a dozen miners on board, sorry, a dozen partners, including seven Australian mining companies, and we hope to make this a project that makes both the open source GIS community and the mining industry aware of QGIS and of open source technology, aware that QGIS is a mature product and that they have actual choice in the matter. And that'd be it for me. Excuse me, the time, we need to finish up already. Oh, I'm done. Perfect. Then thank you very much for the talk, it was very interesting. Now we have some time for questions and answers. Please leave your questions in the questions tab. Right now it seems like we don't have any. Oh, there's one. Have you considered similar applications for surface environmental investigations, contamination in soil, groundwater, etc.? Absolutely. As a matter of fact, soil remediation and simulating, for example, the diffusion of pollutants in an aquifer is something that the mining industry does, and the mining industry is relying on proprietary products at the moment and is completely ignoring open source, not because it chooses to, but just because of a lack of awareness, because nobody's reaching out to them, and they do use these things. So everything that you can think of that you're developing probably has an application in mining, because, as I said, when you set up a mine, it's like setting up a whole country's infrastructure from scratch. So you're going to have needs for hydrogeological simulations, and for the reclamation step you will need to think of how you're going to handle that, because the days of just, at least, let's say, in democratic countries where populations actually have a choice, the days of just dumping everything back into the open pit without thinking too much, or just leaving everything as it is, that's not how it's done anymore. Now mines are being designed with the reclamation in mind, and of course these technologies have found applications in the mining industry. Interesting. Okay. I think we don't have time for any more questions. So thank you very much for your presentation, Dr. Efren. Well, thank you very much. Thank you for having me, and hopefully this will motivate and inspire people to actually look around and say hi to the miners. Everything that we have around us comes one way or another from the mining industry, and it's better to give them a hand and let them be more efficient if we want to actually expand more, and I think this is a great avenue for open source GIS to grow even more, as it deserves. Yes, absolutely. I think this is a great idea for us. Okay then, we will be seeing you around, and thank you very much. Thank you very much. Bye. Bye-bye.
Over the last 15 years, great strides have been made in the Geospatial Open Source domain with the rise of numerous FOSS development companies using QGIS and PostGIS as foundational technologies to serve their clients and expand their scope. Nonetheless, the Geospatial Open Source community has been largely absent from METS (Mining Equipment, Technology and Services) despite the high applicability of the solutions it has developed for closely related sectors such as hydrogeology or infrastructure network management. This presentation highlights potential strategies for Open Source developers to integrate the mostly untapped Mining Industry market through a review of the current state of the METS software market, areas of potential improvement, customer demands and practical examples.
10.5446/57226 (DOI)
Hello and welcome. How are you? Hi. I'm well. How are you? Thanks so much for having us. We really appreciate it. Thank you for presenting. Sorry if I didn't pronounce your name correctly. No, it's perfect. Yeah. Okay, that's great. Thank you. That is good. I will share my screen really quick. Okay. We are almost on time. You can put it in full screen. Perfect. I will add it. Okay. That's great. We'll wait just one minute and then we can start. Okay. Great. Okay. I think we can go now. Hi, everyone. My name is Chisato Calvert. I'm the interim director of OpenAQ, and welcome to the talk today, focused on how to explore and access open air quality data with OpenAQ, a platform that is available to the public and covers a total of 133 countries. So just to give an overview: OpenAQ is a nonprofit organization based in the U.S., and our goal is to connect communities with open air quality data so that we can collectively fight air inequality. And what we essentially do is provide a service, a global air quality data platform, available to the public. And as of now, since its inception in 2015, we've been collecting data from 133 countries around the globe. And we aim to use the gravity of that data so that we can get diverse stakeholders, including scientists, journalists, software developers, government agencies, artists, policymakers, those that are really passionate about making a difference to create clean air, to really work together to fight air inequality. As a brief outline of the presentation today, I'll first be talking about why open air quality data matters, followed by a little bit about the OpenAQ platform, some examples of community use cases, and then lastly, walking through a brief demo about how to access the data. So why does open air quality data matter? Thinking about data infrastructure as an invisible and foundational infrastructure is really important for solving air pollution. When you think about skyscrapers in cities, they require foundations, even though they're these tall, gigantic pieces of architecture. In order to make them work, you have to be able to ensure that that invisible infrastructure is there. And it's the same thing with air pollution and air quality data: in order to fight air pollution more effectively, we need to have access to that basic infrastructure, which is data. And the impact really depends on the ability of different stakeholders to access that existing air quality data at various geospatial scales. So for example, when we're understanding the health, environmental and economic impacts of air quality, when it comes to creating air pollution-focused policies, setting and enforcing these standards, raising awareness and storytelling around air quality, and communicating public health actions, all of these impacts are connected to, and have really important connections to, air quality data itself. This is one statistic that we found in a 2020 global state of play report that we published at OpenAQ. And this showcases that of all the governments around the globe, only 50% actually produce any air quality data, which means that 50% of the world's governments do not produce air quality data. So what happens is 1.4 billion people around the globe don't have any access to data or information about the air that they're breathing on a day-to-day basis.
And even further, for the half of the world's governments that do produce air quality data, here are some snapshots. They have the data available on different websites, which means that it's not really accessible unless you know about it. It's also in different formats, which means that you can't necessarily compare government stations in Buenos Aires, for example, with stations in China. And what makes it difficult is that you can't compare these different data sources because of the disparate formats. And even furthermore, because the data is stored on the government websites, a website can go down at any point. And once it goes down and it's not accessible, then people can't actually access the data about the air quality in their city. To share, I guess, a backstory of why I became interested in air quality: I've been studying Ulaanbaatar, Mongolia since 2006, and air pollution is a huge issue there. It's a seasonal issue, it's a topographic issue, it's a political issue. And I think that since 2006, there's been a lot of momentum in terms of getting air quality data into the hands of Mongolian citizens. And that gives me hope, and it gives OpenAQ hope, in terms of the types of changes and ripple effects that we could see in countries where, if the data, that foundational infrastructure, is made available to the public, citizens and policymakers, people who are clean air advocates, can actually make a change on the ground. Now I'm going to shift over to share a little bit about the OpenAQ platform itself. This is an overview of where the OpenAQ platform fits in within the air quality ecosystem. So as I mentioned before, there's air quality data being produced by about 50% of the world's governments around the globe. We connect that data to the world by creating an open, transparent, accessible OpenAQ platform. And that platform is then being used by various stakeholders, including media, government policymakers, climate change and public health researchers, the private sector, as well as universities and other educational institutions. Here's a snapshot of the OpenAQ platform world map. So you'll see each point on the map: these are all the data that we're aggregating and making accessible, open source, in a one-stop shop, one platform. As of today, we've reached over 10 billion air quality measurements across the globe, across 396 data sources. And just to clarify, a data source is a data managing entity. So for example, the US EPA would be one source. And we're collecting data from 133 countries around the globe. And as of February 2021, we've not only been collecting reference grade government data, but we're also collecting low cost sensor data. So that really expands the possibilities of filling in these key data gaps, where you see in this map that we currently don't have as much coverage in South America, and we don't have as much coverage in Africa or Australia. As I mentioned, it's an open source platform, and that's the beauty of OpenAQ. So this is our GitHub page. Anyone can contribute to the OpenAQ platform, which is really amazing because this means that we can have contributors who are adding sources from Bosnia, and we have people who are fixing adapters and making sure that the platform is running smoothly. All these contributions really help to make the OpenAQ platform what it is.
Now I'm going to shift over to talk about a few community impact cases of how the data on the OpenAQ platform is being utilized in different ways. My first example is in research. The US NASA GMAO team has created a real time, or I guess near real time, global air quality forecasting platform. And this was made possible primarily because they were able to do evidence checks between the modeling that they were creating and the ground monitoring air quality data that's available on OpenAQ. So in terms of the impacts, they were able to compare their model with the observational air quality data. And the platform itself allowed for comparisons across the globe, for example particulate matter PM2.5 concentrations in China versus Mongolia. And this really also helped identify broader data availability gaps in NASA's key priority areas. So it really helped to contribute to NASA's research as well as the development of new tools for the public to use. Another research example is a study that was conducted by Sarath Guttikunda at UrbanEmissions, based in India. And his study really showcased the nitrogen dioxide concentrations, comparing the pre-COVID lockdown and post-COVID lockdown periods. And he used this research to inform policy at the various state levels in India. And so this is a very clear indication that if the data was not available on OpenAQ, where you can actually track what the historical data trends were in a particular place for a particular pollutant like nitrogen dioxide, this kind of study wouldn't have been possible. So we're really excited to see that there are scientists who not only want to conduct the study but also make those connections to push for policy at various government levels. Another research study is focused on COVID lockdown emissions, but rather than looking at one particular city, this is looking at air quality concentrations across 34 countries. So this is a more global study of how impactful the lockdowns were due to the COVID pandemic. In terms of community groundwork we've done, in addition to providing the OpenAQ platform, we also connect with stakeholders around the globe. So we've conducted workshops in different countries. And this is one example of a workshop in Accra, Ghana, where we brought together stakeholders from media, from software development, from government, those advocates on the ground who are working with low cost sensor data, to really collectively brainstorm what the main problem is when it comes to air quality, identify one particular problem, and then also co-create a solution or action related to that. So a key impact from this engagement around open data is that they decided that a community statement demanding increased coverage and frequency of air quality monitoring in Ghana was the best path forward. And this statement was actually picked up by a publication called Clean Air Journal. And a professor in Columbia read this journal article in Clean Air Journal and, as a result, donated air quality monitoring equipment to the local network across Ghana.
And so this is an example of how connecting people around open data, and the action that it spurs, can actually have that ripple effect, where you have an outcome like a donation of air quality monitoring equipment and just having Accra, Ghana more prominently on the map, on the global map, when it comes to air quality related work. So this was a really impactful engagement that we were a part of. Another community based workshop that we conducted was in Sarajevo, Bosnia, where we brought together diverse stakeholders once again. And they also wanted to push for a community statement, this time demanding air quality emergency action plans. So this was a little bit different in that it was less about the coverage of the data monitoring network, but rather about holding the government accountable to make sure that the public is being informed about the various thresholds of air quality that they are breathing. So they submitted this as a policy recommendation. It was able to push for emergency thresholds to ensure that there are warnings once the air quality levels are too high, too hazardous. So this is another example of how we can actually use open data to start a conversation, and how this bringing together of committed stakeholders can really push for advocacy on the ground and make a difference. Now I'm going to shift over to talk about data access on the OpenAQ platform. Just before I share my screen for a demo, I wanted to show broadly the main ways to access the data. One is through the OpenAQ API. The bottom left is focused on how to use our dashboard, where it's a little bit easier to navigate through the website. And another way is through AWS S3, which is our storage bucket for the OpenAQ platform. So I'm just going to shift over and share my screen here so that you can see the full website. So this is our home page at OpenAQ. And if you go to Open Data, you'll see five tabs here. These are all different ways that you can access the data on the OpenAQ website. I'm first going to click on the world map, just because you've already become familiar with it from the presentation. So this is a live map. These are all the points of OpenAQ's aggregated air quality data from across the globe. You'll see here, a circle means a reference grade sensor, and a square means a low cost sensor or air sensor. And here in this box are the color gradations, with each gradation associated with a pollution level. So dark blue would mean that it's the cleanest in terms of PM2.5 concentration, and the highest would be red. You'll see some red dots here when it's scaled out like this. And then I also wanted to mention that you can actually pick different parameters. So we collect seven main parameters right now, or pollutants, and we collect several others, but these are the main pollutants that we're collecting. So if you click on CO, you'll see that the data shown will change. If you click on PM10, this is the data that comes up. We usually default to PM2.5, just given how important it is for public health to understand PM2.5 exposures. So on this map, you can zoom in to a particular area. Given that we're focusing on Buenos Aires, let's see if we can zoom in. So it seems like for Argentina, we don't have as much data right now. I think this is something that could hopefully change in the future. But a lot of the data that's available in South America right now, it looks like it's in Chile. The gray actually indicates that the government site may be down.
And so that's something that we try to be mindful about, and we try to build partnerships with organizations and governments so that we can streamline the data in real time as accurately as possible. So this is the map. When you click on a particular point, you'll see that it will show the location name as well as the concentration and the source. So this is a PurpleAir low cost sensor. And you can compare and view locations this way. But I think that in terms of the utility of the map, it's better served as a broader bird's eye view of the data. If we want to go a little bit deeper into interrogating the different data sources... Let's see. Looks like there's a little bit of a connectivity issue. Oh, there we go. Let's go to the locations page. The locations page will allow you to filter by different parameters. One is country. So if we go down to, let's say, Chile, because we know Chile had a lot of data points, you can click Chile. And these are all of the air quality data locations available in Chile, across several pages. Each box will have the source, the collection dates, and the parameters, so which pollutants it'll be collecting. And then these tags: low cost sensor, because it's PurpleAir, a community based organization that's collecting the data, and stationary. We do have some mobile air quality data, which is why we have that tag right there. So if you want to view more, you click on View More, and it will take you to that particular location. And it will show how many measurements have been collected during the project's period, the coordinates, the latest measurements. And there's a technical readme for PurpleAir if you'd like to learn more about the source itself. And there's a scatter plot that allows you to see the different measurements of PM10. If we change it to PM2.5, you'll see the PM2.5 measurements in the scatter plot across the different dates and times below. And this is a snapshot of the days of the week and months of the year. So if you're looking more broadly at different patterns throughout the year, based on different air quality related events or different seasons, you can do that. And here we aggregate the data by averages, so you'll see the average and counts of the PM2.5 level here. And here's a map in case you are interested in seeing where the nearest government grade sensors or low cost sensors are, spatially, in proximity to the location that you've selected. So that's one way to find the data, by location. Another way you can search is by country. These are all country boxes here, so you would just scroll to the country that you're looking into. This time, let's say we want to focus on Hong Kong. You click on Hong Kong and View More. This shows that there are 16 locations, this many measurements, and one source. So it may be a government source, for example. And here is a snapshot of the various air quality monitors that are available. It looks like, based on this map, the data that we've collected in Hong Kong is all from reference grade sensors. And then you can view more, again, much like the way that we did with the locations page. And then lastly, another dashboard way is through the datasets page. This is more geared toward, I guess, project-based air quality monitoring networks.
And so this is part of a project that we are part of with the Environmental Defense Fund, based in the US, where they were collecting mobile and stationary measurements. And this was last updated seven years ago; in this sense, it was transferred over onto the OpenAQ platform in February, because we wanted to be able to provide historical data on OpenAQ. So here is a Chicago mobile methane project. This is a network within the project that they were implementing. And if you click More, similarly to the other pages, you'll see that this historical data is available, and the life cycle stage. They analyzed their data, which is why we have that life cycle stage pinpointed there. And because it's a mobile monitoring project, you'll see that we have the radius here, as well as the different geographical points where the data was collected. This historical data is made available for anyone who's interested in looking deeper into this particular location using mobile sensors. And then the last way to access the data is actually through our API. This is for folks who are more interested in the software development side, and/or interested in accessing data in a somewhat more efficient way, because it doesn't involve navigating a dashboard, it's really just calling particular endpoints. So you'll see here, we're now on our version 2 API. And each endpoint here is related to a particular search you want to run for air quality. So for example, if you want to look into version 2 measurements of air quality, you can use this endpoint and get the response, and you'll see at the bottom what the search parameters are. You can search by averages. There's locations. Location ID is a little bit different from locations in that it's tagged based on what the source identifies as the location. You can get the latest measurements, and search by country. So all of these search parameters are equivalent to what is available through the dashboard or through the OpenAQ website. Sorry, Chisato, I have to interrupt you, because we should leave some time for questions. OK. It was a very, very nice presentation, very clear. But you have some questions, so let's just go to them. Thanks for the presentation. Congratulations for the work. The world needs it. Did you consider using the OGC SensorThings API standard to serve the AQ data? Yeah, so I don't know, I missed the last part a little bit, but you can access the data through the API, and I'm happy to share after this call, or I can actually put it in the chat. Yeah, I actually also copied the question into our chat, so you can see it. Oh, awesome. Great. Thank you. Perfect. Yeah, thanks so much for the question. And yeah, I think API access is the primary way in which the community accesses the data. OK, there are some more questions. For the community data, do you check the data from Sensor.Community? Yeah, so we've actually connected with them in the past, and I think they're generally interested in sharing their data. The last time we connected with them, I think, was over the spring. But just given that they are a community driven organization and have low cost sensor networks across Europe and across the world, we're definitely interested in being able to integrate their data onto the OpenAQ platform, so it's accessible more broadly to those who are interested in a particular region of the world. But thank you, yeah, I really appreciate the question. Great.
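Since API access is the primary way the community pulls the data, here is a minimal sketch of calling the version 2 endpoints mentioned in the demo with Python's requests library. The country code, parameter name, query parameter names such as date_from and date_to, and the response field names used below are assumptions made for illustration, so check the interactive API documentation for the exact schema before relying on them.

```python
import requests

API = "https://api.openaq.org/v2"  # the version 2 API shown in the demo

# Latest PM2.5 values reported for locations in Chile.
latest = requests.get(
    f"{API}/latest",
    params={"country": "CL", "parameter": "pm25", "limit": 5},
    timeout=30,
)
latest.raise_for_status()

# Responses wrap their payload in a "results" list; exact field names can
# differ between versions, so inspect the JSON before depending on them.
for result in latest.json().get("results", []):
    print(result.get("location"), result.get("measurements"))

# Historical measurements for the same country over a date range.
measurements = requests.get(
    f"{API}/measurements",
    params={"country": "CL", "parameter": "pm25",
            "date_from": "2021-01-01", "date_to": "2021-01-31", "limit": 100},
    timeout=30,
)
measurements.raise_for_status()
print(len(measurements.json().get("results", [])), "measurements fetched")
```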
Another question is, how do you deal with failed or false data reports about air quality in some countries? Yeah, that's a good question. Because our goal is to be open source and we are providing raw data, at this point we haven't done any QA/QC on the data. And so when there's false reporting, what happens is the value becomes something like 9999. It's very clear when a value is false, in the sense that the reference grade monitor is down, or there's a bug, or something like that. But because we are an open source platform, it really is on the user to be able to differentiate that. So we could probably communicate that more broadly. But because either the government is providing the data or we're pulling the data, we have no, I guess, control over what data shows up on the website or on the API. OK, another question is, how does OpenAQ make the reference and low cost sensor readings consistent? So we have an OpenAQ data format I can share. We request that any data coming in actually follows this format, regardless of whether it is reference grade or low cost sensor. So this standardized format really helps to ensure that there's consistency. And then we also collect a lot of metadata on both reference grade and low cost sensors, but particularly low cost sensors, so that there's more context around the data that people would be accessing through the OpenAQ platform. Great. Let me check. I think there is one more question. Is the API following the OGC SensorThings API? Sorry, I will copy it in the chat. Many acronyms. You know, I haven't heard of the OGC SensorThings API. Everything on our system is actually run through AWS, and so all the protocols and sort of the coding that we're doing is through that. But we'll definitely look into this as well. OK. I think we have covered all of the questions. Sorry. It was a very, very nice talk. Thank you very much. We thank you. Yeah. And thank you to everyone who's tuned in, and definitely visit our site at www.openaq.org. Thank you so much. Really appreciate it. Great. See you. Thank you. Thank you. See you.
OpenAQ is the largest, open source air quality data platform, hosting 5+ billion real-time and historical measurements from 120 countries, and serving an average of 35 million API requests per month. The data have been used for a wide variety of applications, from air quality forecasts produced by NASA scientists to platforms communicating air quality in India to data-driven media reports by the general public. By providing this foundational data infrastructure, OpenAQ is able to convene people and organizations from across the globe to further raise awareness and develop innovative solutions to combat air pollution. The talk will give a technical overview of the platform, highlight the impact through user stories, and feature new tools developed with the community to enhance the platform and effectively use the data to fight for clean air. Air pollution, responsible for one out of eight deaths around the world, is a global environmental and public health crisis. Despite the urgency of this growing problem, only 50% of governments worldwide produce air quality data, leaving 1.4 billion citizens without access to fundamental information that could protect them from the harmful effects of air pollution. Where data does exist, data are often in inconsistent and temporary data sharing formats, making it difficult for the public to readily access and make use of the data. The OpenAQ platform aggregates and harmonizes real-time and historical air quality data from 120 countries from a variety of sources including reference-grade government monitors to community-led low-cost sensors to research-grade data. The open source platform hosts 5+ billion data points and the open API serves an average of 35 million requests per month. The data has been used for a wide variety of applications, from air quality forecasts produced by NASA scientists, to platforms communicating air quality in India, to data-driven media reports. By filling a basic data-access gap and building foundational open source tools, OpenAQ has empowered diverse individuals, organisations, and sectors across the globe to fight for clean air. How has opening up air quality data transformed the way we think about air pollution? What kinds of innovative solutions have been developed? Looking into the future, how do we address data accessibility, coverage, and transparency in order to most effectively enable cross-sector and cross-cultural collaboration to drive action? This presentation will give an overview of the platform, highlight impact stories among the OpenAQ Community, and share new tools we have developed with communities to share insights into how open air quality data has shaped and continues to shape the global fight against air pollution.
10.5446/57228 (DOI)
presentation and I will start his presentation right now. I would like to make a presentation about PgMetadata, which is basically a QGIS plugin to manage your metadata in your PostgreSQL database. What is metadata? It's data about data; it's there to help people understand your data. For example, on the right side, you can see all the fields which can be used. So you have some identification fields like the title, the abstract of your data, the categories; you can add themes, keywords. You can also add spatial properties, to tell if your layer is a polygon layer or a point layer, the spatial level, the optimal scales. You have some data about publication, like the date, frequency, the license, and the confidentiality of your data. You have some automatically calculated fields like the feature count, geometry type, extent, projection. You can also add some contacts to tell people who is the owner, the publisher and the custodian. You can help your users by giving them some links to external resources like web pages, documents. PgMetadata is designed for people using PostgreSQL to store their layers' data. Basically, for example, you already have a PostGIS database with some layers like the buildings, footways, gardens, trees. And all you need to do with PgMetadata is to add a new schema to store the metadata. What is great with PostgreSQL is that you have a centralized data store, so that you can have your metadata stored in the same location as your data, which makes it easy for you to share the metadata with your users. They all just need a PostgreSQL connection to access it. You can benefit from PostgreSQL's rich features: you can store tables, relations, you have constraints, views, you can develop some functions and triggers to help manage your metadata. You can also manage the rights and access control of your metadata, for example allowing users to only read, or also edit, the metadata. What is also great with PostgreSQL is that many clients already exist which help you to query and view the data, like LibreOffice, pgAdmin, psql, DBeaver. Obviously you can also use QGIS as your PostgreSQL viewer. The last point is that you can backup and restore your metadata together with your data in the same process. As a GIS administrator, we have developed some tools to help you. There is a processing algorithm, accessible from the QGIS processing toolbox, with which you can create the needed structure. There is a schema called pgmetadata which must be created in your PostgreSQL database. This script also allows adding the needed tables, the views and the data, like the glossary and the translations of the glossary. You also have another QGIS algorithm helping to create a full-featured QGIS project to be used as the administration project. Basically it will create a new QGIS project with all the needed layers from the pgmetadata schema to help you edit the contacts, the templates, the glossary and obviously the datasets. When you have created your administration project, you just need to prepare the editing by adding the needed contextual data, such as the user defined themes, like the ones above, environment and climate for example. You can add your contacts, such as the name, organization, the organization units and the email address. You can extend or improve the existing glossary if needed, and if some translations are missing they just live in a PostgreSQL table, so you can edit them and improve them too.
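As a rough idea of what that centralized storage looks like once the structure exists, here is a minimal sketch that lists the documented datasets straight from the pgmetadata schema with Python and psycopg2. The connection parameters are placeholders and the column names (schema_name, table_name, title) are assumptions based on the fields described here; the exact data model is in the plugin's database structure documentation.

```python
import psycopg2

# Placeholder connection settings: point these at your own PostGIS database.
conn = psycopg2.connect(host="localhost", dbname="gis", user="gis_admin", password="secret")

with conn, conn.cursor() as cur:
    # Column names are assumed from the fields described in the talk;
    # the real table layout is in the PgMetadata database documentation.
    cur.execute(
        """
        SELECT schema_name, table_name, title
        FROM pgmetadata.dataset
        ORDER BY schema_name, table_name
        """
    )
    for schema_name, table_name, title in cur.fetchall():
        print(f"{schema_name}.{table_name}: {title}")

conn.close()
```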
Once you have prepared your editing, you can just open the full-featured QGIS form with all the great tools inside QGIS: you have checkboxes, combo boxes, some constraints. You just have to choose the schema and the table of your dataset and then you can fill the needed fields, like the title, the abstract, keywords, and you have another tab with the contacts and their roles, and you can add some related links. All that has been done with native QGIS features. We have not developed this form, it's just QGIS features. For the admin, you also have some helpers. Some data are calculated from the table content, such as the unique ID, which is a UUID describing the dataset. You have the layer extent, the feature count, the geometry type, projection ID and name, and also some other useful fields such as creation and update dates. We have also added some views to help find the orphan PostgreSQL tables, which means there is no metadata for these tables yet in your database. Or the reverse, which means you have already added a line in your dataset table in your pgmetadata schema, but there is no table or view corresponding to this line. We have also added some views to help the admin export the data, for example a flat representation of the datasets with the contacts and links aggregated, so you only have one line per dataset. So that was the QGIS administrator part. Now I will show you some key features for the GIS user inside QGIS. The main tools are described in the animated GIF. You have the possibility to search with the QGIS locator, on the bottom left of the screen: you just type the name, or title, or description, you find your layer and you add it automatically to your QGIS project, any project. And then you have a right panel showing all the metadata the GIS administrator or editor has filled in before. It is very straightforward for the user. They don't need to know in which schema the table is, they just type some words and get the data, with the metadata corresponding to each layer. Every time you change the layer in the layers panel, the metadata is updated. You can also export each dataset to different formats such as HTML, PDF or DCAT, which is a standard for metadata. It can help to publish or send your metadata to another user. Once more, we also have some advanced features: you can easily change the templates for the HTML content which is visible on the right panel. They are just stored inside the HTML template table, so you can edit them very easily inside QGIS, in a QGIS form. You can also use some PostgreSQL queries, for example, to generate the HTML card: you just select a function with the schema, the table, and you can choose the localization. For example, here it's the French card you will get. You have another PostgreSQL function to generate a DCAT representation of all your datasets, or only of a subset of your datasets. For the system admin, you can configure how the QGIS user will use the plugin. For example, in the configuration file, you can add some variables to hide the admin tools or to auto-activate the plugin if it has been deployed by another tool. You can also share your metadata. Before, we have seen how to export each metadata record; you can also use the SQL functions to get the HTML card and use it in your own application if you are a developer.
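For developers, that call is essentially a single SQL query; a minimal sketch is shown below. The function name pgmetadata.get_dataset_item_html_content and its (schema, table) arguments are an assumption used for illustration, as is the example table, so check the auto-generated database structure documentation for the real function names and signatures.

```python
import psycopg2

conn = psycopg2.connect("dbname=gis user=gis_admin")  # placeholder connection string

with conn, conn.cursor() as cur:
    # The function name and signature below are assumed for illustration; the
    # SQL functions shipped by PgMetadata are listed in its documentation.
    cur.execute(
        "SELECT pgmetadata.get_dataset_item_html_content(%s, %s)",
        ("public", "buildings"),
    )
    html_card = cur.fetchone()[0]

# Reuse the card in any web page, report or custom application.
with open("buildings_metadata.html", "w", encoding="utf-8") as out:
    out.write(html_card)

conn.close()
```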
For example, here we show a module in Lizmap Web Client, which is a QGIS web publisher, and you have the same HTML presented in the middle here, which you can get just with the SQL query. So it's meant to help developers integrate the features of PgMetadata inside other applications. This module, for example, also exports the whole catalog in a format that can be harvested by third-party metadata portals. We also have documentation for the administrator, for the end users, and for the system administrator, with other pages covering changelogs; we have some videos, the roadmap, and also the database structure. You can find it at this link, and it's auto-generated, so it's updated every time we publish a new release. As a conclusion, I will first try to answer the key question: why another metadata tool? We know many open source tools already exist to store and share metadata, so why PgMetadata? Some reasons here. I talked about that before in my slide about PostgreSQL: we have rich features, and it's easy to share and publish because you just need a PostgreSQL connection. The key feature, I think, is that we keep the metadata as close as possible to the data, so that you cannot lose your metadata and have it separated. It's not really a new application. It's not a new metadata app. It's more a set of tools for QGIS users and existing PostgreSQL databases. So as a GIS administrator, if you already know PostgreSQL, you can understand very easily how PgMetadata works: it's just tables, views, functions. You can improve it if needed. And as a GIS user, you don't need to learn a new application. It's just inside QGIS, it's integrated. And it's more of a GIS user-oriented plugin: as a user, you just need to search and get the metadata from QGIS, versus a web portal, where you need to browse the web pages, then download the data, then open it in your GIS tool. And it is not intended to replace the existing metadata web portals, which are there to easily share the metadata with just a web browser. It's much more of a complementary tool: you can edit your metadata in your PostgreSQL database and then you can publish it and share it with these other third-party web portals. We have a roadmap. We would like to add more locales; today it is only in English, French and German, but you are free to contribute if you need another language. We need to add some new features, such as support for raster tables. We would like to help the admin auto-fill the dataset table, for example from a selection of PostgreSQL tables and views, using the name of the table and the comment of the table to fill in the title and abstract. We need to add an import and export tool from and to the QGIS native layer metadata properties. We wanted to do that from the start, but we have not done it yet. It would be great to be able to import metadata from DCAT too; we can export, so why not import? Some resources: there are a lot of links here. You have the documentation, the database structure. The source code is on GitHub. You can help contribute to the translations too, with Transifex. We have a Twitter account. We have just released today the new version, 1.1.0, with views support, German translation, new items in the glossaries and some enhanced locator search, for example. I would like to thank the French CarProvance for funding this extension. And PgMetadata already has some contributors.
I would like to thank FJot and Trutenberg for testing, helping and improving the plugin. Thank you for your attention, and I am ready for your questions. Welcome. Thank you so much, Michael. It was a really clear presentation. We have a lot of questions. I hope you can see the questions here in the chat. I can see the notes. You can see them. Can you start? Maybe it's better for you to read the questions. The first question is how to manage multilingual metadata. For now, we have a solution to localize the glossary, the categories, things like that, the licenses or other things, but not the content of the title and abstract, for example. If we want to do that, we need to add this feature. It's not yet possible. We would like to do it in the future. For example, for Switzerland or other countries, we need a lot of different localizations. We only have the possibility to export the HTML card in French, English or German, but only words like "title" and "abstract" will be translated, not the content written in the dataset table. There are some questions about importing from or to the main web portals, such as GeoNetwork or CKAN. I can share my screen. I'm not sure it will work, but you will tell me. I would like to show, for example, this portal, which is a Lizmap application, and you have the metadata, for example, here for the layer, which is generated with the same SQL. This entry point can also be harvested; for example, we have a French government metadata system, and all these datasets are harvested automatically from a DCAT version of the metadata. For example, I can show it here, and this URL creates the needed XML, which can be harvested. So the portals can import or automatically update the metadata by using this tool. And we made the plugin to be usable with different solutions: because it's SQL-based, you can build your own application. And for the import part, we do not have import capabilities yet. You can always write SQL, and we plan to use foreign data wrappers to get the XML and use PostgreSQL XML capabilities to import the data. It won't be very hard; what is hard is to know the differences between the schemas, but that is one of our goals, and we will do it in future versions. So, to finish the answer, we chose to use the DCAT standard because it can be harvested by many applications, and there are some tools which can translate this kind of XML to other formats. So that's the first step, and we have a lot of work to do to continue. I read another question: is it possible to store the metadata in another database, or does each database have to have its own schema? We have made it possible to use multiple connections. So if in your QGIS you connect to several databases, you can use QGIS to tell which ones to use. In the processing toolbox, there are some tools to set the connections to the databases, so you can choose one or several connections. But at present, if you want to use only one pgmetadata schema in a single database for everything, we would need to rely on foreign data wrappers; it's a work in progress too, to allow using many databases. Is there not already another plugin, the Layer Metadata Search plugin? It seems to do quite the same job. We would like to mimic as much as possible what you can do in QGIS.
For example, there is metadata in the QGIS vector layer properties, with the identification, categories, keywords, contacts and links, and we try to have a database structure which is close to this QGIS implementation, so that in the near future, it is planned, we can export or import automatically from PgMetadata to the QGIS metadata panel. And I'm not sure if I have time for more questions. Let me just check here. You still have time. Okay. So, to finish answering on the QGIS layer properties metadata: we can have an option in PgMetadata, for example, to save to QGIS or import from QGIS. We could also have a processing algorithm to search for QGIS metadata files, harvest them and just create the corresponding metadata. We do not want to force the user to have a full synchronization between the layer metadata and PgMetadata, because in some cases, in some QGIS projects, you can have metadata with a title a bit different from the one in your database. So synchronization is okay if the user can control it. It just has to be done. Is it correct that it's not possible to harvest the metadata directly, for example from GeoNetwork? It is not possible yet. We plan to add it in the future. And there is one interesting question about the model we chose, which can be seen in the database tab of the documentation. And it is indeed different from the one in the GeoPackage metadata. We chose to be very light at the beginning of the project, to mimic the QGIS metadata properties. And we really need to have a further look at the GeoPackage standard to see what we can use, what we can share, to make PgMetadata more compatible with other solutions. But at present, it is a completely different structure indeed. And you can see the table definitions, the views, and all the functions that we use in PgMetadata, for example to generate the HTML from JSON, or to get datasets and things like that. So I encourage you to go see the documentation. And you can contribute if you want to add some more languages or to help with feature ideas. And I would like to thank you all for your very interesting questions. And I hope I was clear enough in saying that we did not want to reinvent the wheel, but to help QGIS and PostgreSQL users use and share metadata very easily, only with QGIS and Postgres. Thanks a lot for your attention. Thank you very much, Michael. And thank you for being so clear in your presentation and the questions. And this is the end of this session. I would like to thank the four speakers that we have had. We'll continue, but we have a short break now before the next presentation. So thank you all and see you around.
PgMetadata - A QGIS plugin to store the metadata of PostgreSQL layers inside the database, and use them inside QGIS. PgMetadata is made for people using QGIS as their main GIS application, and PostgreSQL as their main vector data storage. The layers metadata are stored inside your PostgreSQL database, in a dedicated schema. Classical fields are supported, such as the title, description, categories, themes, links, and the spatial properties of your data. PgMetadata is not designed as a catalog application which lets you search among datasets and then download the data. It is designed to ease the use of the metadata inside QGIS, allowing to search for a dataset and open the corresponding layer, or to view the metadata of the already loaded PostgreSQL layers. By storing the metadata of the vector tables inside the database: QGIS can read the metadata easily by using the layer PostgreSQL connection: a dock panel shows the metadata for the active layer when the plugin detects metadata exists for this QGIS layer. QGIS can run SQL queries: you can use the QGIS locator search bar to search for a layer, and load it easily in your project. The administrator in charge of editing the metadata will also benefit from the PostgreSQL storage: PostgreSQL/PostGIS functions are used to automatically update some fields based on the table data (the layer extent, geometry type, feature count, projection, etc.). The metadata is saved with your data anytime you backup the database. You do not need to share XML files across the network or install a new catalog application to manage your metadata and allow the users to get it. The plugin contains some processing algorithms to help the administrator. For example: a script helps to create or update the needed "pgmetadata" PostgreSQL schema and tables in your database; an algorithm creates a QGIS project suitable for the metadata editing. This project uses the power of QGIS to create a rich user interface allowing to edit your metadata easily (forms, relations). Why use another interface when QGIS rocks? More PgMetadata features will be shown during the presentation: modification of the template to tune the displayed metadata; export of a metadata dataset to PDF, HTML or DCAT; publication of the metadata as a DCAT catalog with the Lizmap Web Client module for PgMetadata, so it can then be harvested by external applications (Geonetwork, CKAN). The data model is very close to the QGIS metadata storage and the DCAT vocabulary for compatibility.
10.5446/57229 (DOI)
Okay, welcome back everyone. I am Rajat this side. I am session leader for this session. Our next presentation after two very interesting presentations by Ricardo and Gerard would be on PM tiles which is an open cloud optimized archive format for serverless map data. It would be presented in a video format and after that the author, the speaker Brandon Liu would be available with us for the questions. So I request you all to be ready with your questions and ask it using the venue list questions tab. So I start the video now. Hi everybody, I'm Brandon. I'm here to talk about dynamic maps and static storage of PM tiles. So I know this track is about serverless and I think I want to reiterate the benefits of doing serverless computing. So three of the main reasons for serverless are that it's very simple and easy, that you don't pay for idle time on servers and ultimately there's less maintenance. So it sounds like a pretty idealistic goal to get everything serverless. But in some sense, a lot of applications on the web are already serverless. And the one I want to talk about today is actually video. So here's a video clip that is just about 10 seconds long and it shows a video of kind of a satellite image zooming out from Chicago. Now the interesting thing about this video is that it's pretty easy to understand how it's deployed. So you just have like an mp4 file and you upload it to a server and you can play it back using the video element in the browser. So video, it has a standardized format for a seekable video on the web platform. And videos also have codecs such as h264 that efficiently pack video frames into a single file. And we usually don't need a specialized video server for basic use cases. We can just put them on a website and include them in a page. And that's pretty well understood among most engineers and also web developers. So I kind of want to make this analogy, which is that videos are a lot like maps. So in this case we're looking at a straight, like, at one single frame from a video. On the other hand, you could look at a map, even an interactive map, such as one made with leaflet that also shows, let's say, a raster satellite image of Chicago, this one's from Sentinel-2. And it's also made up of tiles, kind of like video frames. But usually when we think about these tile maps, we think about servers. Since you can think about it, like, each tile is a different API request, it's hitting some TMS server, and we run a tile server that generates the data, serves it to the browser as an image. That's sort of the traditional way of thinking about these tiled web services. Now, there is an emerging technology called Cloud Optimized Geotifs. And this has a very similar goal. In a lot of cases, you're able to use a Cloud Optimized Geotif and serve it directly to the browser. It is, however, constrained by backwards compatibility with existing Geotif readers. It's limited to only raster data, so if you wanted to have vector or other kinds of data inside of your tiles, those don't really fit into the Cloud Optimized Geotif format. Also, the directory size, which is sort of the headers of the Geotif that describe where the data is, can be really big. If they're multiple megabytes, it might not be practical to serve those directly to the browser. There's another format called MBTiles, which is specific for the TMS tiling format, like Zoom 0, Zoom 1, Zoom 2, that are squares, powers of 2. But it's based on SQLite. 
And SQLite is a transactional database that is, in a lot of cases, overkill for a read-only use case. And there is some tricks for reading a SQLite database over the network using range requests, but that is usually required something like a SQLite library compiled to Wasm, for example, and it is quite complicated. So what I want to talk about today is a new format that sort of takes the benefits from COGs, from those Cloud Optimized Geotifs, and also MBTiles, and combines them together. So it's an open source format with an open source reference implementation on GitHub, github.com, slash protomap, slash pmtiles. It's totally specific to the Web Mercator tiling scheme. And those tiles are readable directly via browser range requests. More importantly, those tiles can be raster images like JPEGs or PNGs. They can be vector tiles. They can be anything. And there's also some trick-citing to make it efficient and work really well with the browser, such as recursive directories. Now, I'll get a little bit into the format of PMTiles. So basically, it is a binary format that has a header, at least one directory, and then all of the tiles that are just bytes in the archive. So if you look here closely, sort of these blue parts in the middle are the raster or vector tile beta. And at the very beginning is a root directory that describes a mapping from the ZXY tile coordinates to the offset and length inside of the file. So much like a video, you're able to kind of seek through this archive, kind of like, you know, seeking from one second to the other second, but instead, you are traversing this tile pyramid. And the parsing of this format is all done in JavaScript. Now, there is some ways to make this more efficient. So for PMTiles, all of the tile coordinates are stored in binary. So each directory entry is only 17 bytes. So there is a maximum limit on how big a PMTiles archive can be and how many sort of tile entries it can have, but it's quite large. So here's another example, kind of visually showing you a tile pyramid. So if you look at tile coordinates 0000, it will describe an offset into the archive, 100, another offset into the archive, etc. Now, something interesting is that the header section of PMTiles is always 512 kilobytes. So that is sort of, lets you read an entire directory at once from the browser without these additional requests. Something else that's important for geospatial use cases is deduplication. Specifically in the case of vector tiles, or in some cases, raster tiles, you might have a lot of tiles that are duplicated, such as the ocean or empty land, or if you go into a pretty deep zoom level and the same data is repeated over and over again. Well, those can be deduplicated inside of a PMTiles archive by having an entry or multiple entries that all point to the same offset. So they would only refer to that ocean tile once in the archive, and that can save a lot of space. In a lot of use cases, such as global vector tiles, the earth is 70% ocean, so more than half of your tiles might just be these same ocean tiles that are only stored once in the archive. And this idea of recursive directories means that in the case where your archive has millions of tiles, you don't need to download a directory all at once that has the offset data for the entire archive. Instead, it's organized with multiple levels like a tree. So if you are requesting, for example, here in yellow, a tile that's at zoom level 14, you might start at the root directory in green. 
And at zoom level, let's say 8, instead of pointing directly to a tile, it will point to a leaf directory. And the leaf directory will then have all of the data for that subtree of the pyramid. So in this way, you're able to scale this range-based design to archives that cover the entire world down to a typical zoom level of like zoom 12, 13, 14. So how do you create PM tiles? Right now, the preferred way to create PM tiles is to start with the MB tiles format, which is the SQLite format. And in that repository, which is github.com slash protomap slash PM tiles, there is a Python command line utility called PM tiles convert. And that will pretty quickly convert an MB tiles to a PM tiles. And how do you host and read PM tiles? Well, since PM tiles is totally static, it can just be uploaded to S3 or another major cloud provider that has an S3 compatible storage service. There's also in that repository a leaflet plugin and a very small JavaScript library to do the parsing of directories and the loading of the image or vector data. So to recap, some of the advantages of PM tiles are that you can serve directly from S3 to the browser. So you don't have to traverse any sort of process or web server software. It is the ultimate in serverless because you don't manage any sort of long-running process. You just upload it to a commodity storage platform. And there's other techniques such as splitting the tiles into sort of all of their zoom-based directories and upload those. But usually that that isn't scaled out well to if you have millions. So another big benefit is that it's a single file that is scalable to millions of tiles by design. Some of the disadvantage of PM tiles is that it is not a database because if you change one tile inside of the archive, it might change size, which would change all of the offsets of the entire archive. So if you need to update a PM tiles archive, you essentially have to rewrite the entire thing. Now usually if you are just copying from a different archive, that can be pretty fast. Another downside of PM tiles compared to a more traditional TMS server is latency. If you need to first fetch a directory and then fetch a leaf directory before fetching the tile, you've introduced two more round trip requests into your application. So the user experience might degrade. There's also a size penalty in that at minimum, even for very small archives, you have to read at least 512 kilobytes every single time just to fetch that root directory. So finally, this right now doesn't interact really well with CDNs. CDNs are usually oriented around individual file assets and not byte ranges inside of those files. So there is sort of like some serverless ways to translate between how CDNs work and how PM tiles work. But that is an area of active development as CDNs incorporate more serverless features. So finally, a major downside is compression. Usually if you are serving from PM tiles directly to the browser, you are not going to be able to compress the data with kind of generic GZIP encoding like content encoding because that's not supported by browsers. If you are reading a range from a web resource, the header should be transfer encoding. But in general, I have not found that it's widely supported by web browsers. So finally, that wraps up PM tiles, which is a new open source format for serving tile data. And protomaps.com is a service that I run. You can download open street map based vector data to have a totally serverless map application with a base map from protomaps.com. 
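As an aside to the walkthrough above, here is a minimal, hedged sketch of the two ideas just described: fetching the fixed-size header plus root directory with a single HTTP Range request, and treating the rest of the archive as offsets to slice with further Range requests. The URL is a placeholder and the byte layout is deliberately simplified; the PMTiles specification in the protomaps/pmtiles repository is the authoritative reference.

```python
# Hedged sketch, not a spec-conformant parser: the real field layout and
# encoding of directory entries are defined by the PMTiles specification.
import requests

ARCHIVE_URL = "https://example-bucket.s3.amazonaws.com/map.pmtiles"  # placeholder

# One ranged GET covers the 512 KB header + root directory mentioned in the talk.
head = requests.get(ARCHIVE_URL, headers={"Range": "bytes=0-524287"})
head.raise_for_status()                      # expect HTTP 206 Partial Content
root = head.content

ENTRY_SIZE = 17                              # 17 bytes per directory entry (per the talk)
DIRECTORY_START = 512                        # assumption: where entries begin is spec-defined
first_entries = [
    root[i : i + ENTRY_SIZE]
    for i in range(DIRECTORY_START, DIRECTORY_START + 5 * ENTRY_SIZE, ENTRY_SIZE)
]

# Each entry maps (z, x, y) to an offset/length; a second ranged GET such as
#   requests.get(ARCHIVE_URL, headers={"Range": f"bytes={offset}-{offset+length-1}"})
# then retrieves the tile bytes themselves.
print(f"fetched {len(root)} header bytes, sliced {len(first_entries)} raw entries")
```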
And if you have questions about the format or you want to know if it's good for your project, feel free to email me. My email is brandon at protomaps.com. Thanks. Hello, Brandon. And congratulations for the great job. And thanks for your presentation. So first of all, let me introduce you. So Brandon is a cartographic technologist in Taipei, Taiwan. He's busy building proto maps most of the time, and which is a universal mapping based system based on the open street map. So Brandon, thanks again for the insightful talk. We have a couple of questions for you already. So I'll go with them. The first one is how is the performance when hosting a PM tiles archive on an S3 with a planet scale tileset, so zoom levels up to 14 or greater? Cool. So the question is about performance for zoom levels to like 14. And in general, the first load will be quite slow because you need to fetch not only the first directory, the root directory, but also one leaf. And then finally, yeah, I mean, then finally those tiles. The one advantage is that in terms of my reference implementation in the browser, those directories will be cached. And those directories will also contain then all those tiles nearby. And because in general, most map users are going to be panning in one local area or zooming in one small area, those initial directory, those directories will be cached in the browser. So in general, the performance is noticeably worse for the initial load. It might take, you know, one second, a couple seconds to load the map at first. But then after that, it's usually more or less the same as a normal map. The other addition is that issue of compression because those byte range requests don't work well with standardized like Gzip compression. You're generally sending uncompressed data, which for typical vector data is maybe 30% bigger than normal if it was compressed. So you're also spending more time downloading for those tile requests. So there is that additional latency for raster tiles such as PNG or JPEG, those have the compression built in anyways. So there is no, there's no effect there. Okay, thank you. So there is another association come followed by a question which says that I get how this is better than MB tiles and cocks. But why is it better than simply uploading entire sleepy map tile directories to S3? It is followed by another question. Is it just that it's a single binary file? The main reason, yes, is that it's a single binary file. And in general, my observations have been that once you get above a couple tens of thousands or 100,000 individual tiles, the overhead of doing like an S3 sync to a bucket becomes quite significant, especially once you're once you're in the range of like a million tiles. There's two other things I want to point out. The first one is the deduplication is very important in the case maps, because in the case where you are syncing entire tile directories to S3 and you're touching the ocean, then in that case, a lot of your storage in S3 is just going to be the same redundant data over and over again, just like a blank, like blue water tile. So it's pretty wasteful there. And like, and also downloading, like, scanning around and downloading ocean tiles, that's also pretty wasteful. So using PM tiles can avoid all that. The other important thing I didn't mention in my talk is S3 usually gives you some guarantee of being atomic. So when you upload a PM tiles archive, it's impossible to read a partial upload. 
While in the case of like, if you were uploading new data to S3 with like S3 command sync, then you could be in a state where half of the data has not been updated yet while the other half is still being updated. So that's not super important for a lot of applications. But in general, it is nice to treat a data set as a sort of comic unit when dealing with S3, honestly. Okay. Okay, so yeah, this this makes very much sense. So I have a follow up question which is which might sound very naive in general. So this question is regarding if I have a very huge data set which consists of multiple raster types or vector types. So do we need to create a single binary file out of it or every sub got file or every sub fng file would create one binary file after archiving? So there's no way to have like a heterogenous data set. So each you'll have like a raster PM tiles a vector PM tiles. It's interesting to think about the the idea of having a combined archive. But you didn't but in those cases you'd have to make sure your archive is not a raster. So I think in general the approach is to have a separate PM tiles for each kind of layer. Okay, and one more thing which just came into my mind is about the deduplication part. So you mentioned that if there are a lot of types which represent oceans and level and then it could because of redundancy it could just be a storage base of storage to store every tile out of it. So would there be any data loss in those terms or is there a way to overcome that? There's no data loss because it uses in terms of my reference implementation it uses like a hash and that hash is based on bytes. So it will only it will only deduplicate in the case where the bytes match exactly so it's lossless. But that's also a disadvantage because if you have a raster data set like Sentinel 2 and all the ocean tiles are slightly different then you won't be able to take advantage of any kind of deduplication there. Yes, yes. That's very interesting. Thank you. We have one more question. So it asks about which raster formats are supported either natively or through the PM tiles convert tool as of now. So every raster tile format. So that means every image format is supported because the container is agnostic to the internal tile. It's just bytes. I think there is the concept of metadata which is just a JSON object stored in the header of the PM tiles. And in there you're able to store a mime type such as image PNG or image JPEG. But otherwise it is totally open. You could store for example things like SRTM height maps inside of it as long as your client knows how to interpret it then it does not care what kind of data goes inside. Okay. Thank you so much Brandon for the presentation and thank you for answering the questions. I do not see any further questions for now. So if there are any further questions you can connect with Brandon offline and thanks again. Have a good day. I hate Brandon. All right. Thank you.
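The de-duplication answer above is easy to illustrate: identical tile blobs hash to the same digest, so several (z, x, y) entries can point at one stored copy. This is a hedged sketch of the idea only, not the actual PMTiles writer.

```python
import hashlib

def deduplicate(tiles):
    """tiles: dict mapping (z, x, y) -> raw tile bytes."""
    blobs = {}        # digest -> single stored copy of the bytes
    directory = {}    # (z, x, y) -> digest (stands in for an offset/length pair)
    for zxy, data in tiles.items():
        digest = hashlib.sha1(data).hexdigest()
        blobs.setdefault(digest, data)       # store each distinct blob once
        directory[zxy] = digest
    return blobs, directory

tiles = {
    (2, 0, 0): b"ocean-tile-bytes",
    (2, 1, 0): b"ocean-tile-bytes",          # identical bytes -> deduplicated
    (2, 1, 1): b"land-tile-bytes",
}
blobs, directory = deduplicate(tiles)
print(len(tiles), "tiles ->", len(blobs), "stored blobs")   # 3 -> 2, losslessly
```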
Have you ever wished for web maps with no servers or backend to maintain? Introducing a new archive format called PMTiles, based on HTTP Range requests, for serving Z/X/Y tiles from storage APIs such as S3.

PMTiles is a new archive format for pyramids of tiled data. It enables developers to host tiled geodata on commodity storage platforms such as S3, and can contain raster images, vector geometry, or data in any other format.

This talk will:
- Introduce the design and specification of PMTiles, with comparison to Cloud Optimized GeoTIFFs
- Demo some open source tools to convert between PMTiles and other formats such as directories or MBTiles
- Explain the use cases for which PMTiles fits well, and how it interacts with map rendering libraries, web servers, compression and content delivery networks.
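A hedged sketch of the hosting step described above: once an archive has been produced (for example by converting an MBTiles file with the command-line tool from the protomaps/pmtiles repository), publishing it is a single object upload. Bucket, key and the public-read ACL are placeholders to adapt to your own setup.

```python
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="planet.pmtiles",
    Bucket="my-tiles-bucket",                 # placeholder bucket
    Key="tiles/planet.pmtiles",
    ExtraArgs={
        "ContentType": "application/octet-stream",
        # Public-read only if browsers should fetch the archive directly;
        # otherwise use signed URLs or a CDN in front of the bucket.
        "ACL": "public-read",
    },
)
print("uploaded; clients can now issue HTTP Range requests against the object")
```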
10.5446/57234 (DOI)
So, yes, and we're live if all goes well. I'll look it up in the quarter past stream. Yes, we are. OK, so welcome everybody. This afternoon session, we'll be hearing six presentations, and five of them happen to be geo-Python related. And the first presentation here will be about PyGU API. And we have these two gentlemen here, Angelos Tsotsos and Tom Kralidis. I will introduce them shortly. So Angelos, he is from Athens, Greece, remote sensing researcher and software developer at the National Technical University of Athens. But he's also OSTO president. And of course, an OTC member contributed to various OSTO projects like OSTO Live, also on this conference, PyGU API, this talk, PyCSW, and more. And he's even an open SUSE member. And of course, you may know him as a Ubuntu GIS maintainer. And we also have the honor here to have Tom Kralidis from Tom Ren, Toronto, Canada, and Thomas Senior Systems Scientist for the Meteorological Service of Canada. And Tom was active also in the OTC community. And he's committed to free and open source software, as are we all. And he's founder and lead developer of numerous open source geospatial projects. So here, like PyGU API, but he has also developed many other projects, like some he initiated, like Maps Server, GeoNotes, PyCSW, QGIS, PyWS, OWSLiv, and the list goes on. And he, that doesn't enter his charter member. Oh, did I mention that Angela was OSTO president, and Thomas also involved in OSTO organization. He currently serves in the board of directors. And that's about it. So I'll give the floor to you. Thank you so much. I already shared the screen. I will go to the back side. And folks, you can use the chat. And there is another tab for questions. And OK, floors to you. Great. Thank you. Thank you for attending this talk. So Angela and I will give an update on the project. Note that there are others in the audience who are part of the project team here. So any questions that you have that we may not be able to answer, they can certainly be addressed. We can get in touch with the other developers on the project. So we initially put forth this project and presented it in Bucharest. And today's presentation will basically provide an update on some of the new features, what we've been up to, and what the future holds for the project. So it's been a long two years. And we have a lot of updates, which is good. So project overview. The project was initiated in 2018. It's currently an OSTO community project. It started along the same time where a lot of the efforts around the OGC API evolution of API standards started to come about. So as we started to hear about things like WFS3, which is now OGC API features, the PyGee API project was sort of born out of a lot of the sprints and hackathons that were occurring at the time. And it's evolved into a project that does a number of those standards and is also a reference implementation. So core principles of the project, again, OGC API is front and center. We are a reference implementation for OGC API features. And we implement a number of the other standards, which will I'll show a matrix of what our current support is in the project. Given that we support all the OGC API principles, that means we support the restful principles, as well as JSON as a first class encoding. We also support HTML. And we provide, obviously, open API support for the service description, as well as a swagger UI. There's an international team, which is growing. So it's across time zones and it's across countries. 
So there's numerous contributors. It is quite the feat to try to get the Project Steering Committee members meetings going with all the different time zones. But we are doing our best. The project did create a Project Steering Committee as a result of RFC1. So we have that governance in the project. And we stand on the sodas of many projects that are upstream of us. Technical overview. Underneath the hood is a core abstract Python API. And we simply put a web framework layer on top of it. Our default framework continues to be Flask. We also support Starlet, more for async, although Flask is increasingly supporting async with Flask 2. Flask 2, as I recall. There's also work to implement something in JSON. We have a very simple YAML configuration, which allows you to connect all your data sets to PyGAPI, as well as make some service metadata available. We have automated open API document generation and data binding. This means that PyGAPI is able to go through all of your data sources and get all the right information around columns and data types and so on to give a very simple, simple, simple, and easy solution to have. And we also have a very simple API document that's also available in Python to give, to output, a rich open API document, which is tightly bound to the underlying data sources. And that can either be, and that is done upstream or sort of offline, and that's cached when PyGAPI is running for performance reasons. We support a robust, very robust plug-in framework. We have a concept of a data provider. So a data provider can either be features or coverages or records or stack collections or other things. And the idea there is that they all have a common Python API, and folks basically implement, or developers can implement and extend that plug-in framework to make their own plug-ins. So it's very easy to deploy, which we'll cover a little bit later. And we have minimal core dependencies. So one principle of the project is you should be able to stand this up really, really quickly for the base functionality. There's a look at the architecture again. So right in the middle is that PyGAPI common. That is the core Python API. So in principle, you can run PyGAPI from your own application and never interact with the web. I mean, that's how it's built. But we do put a web layer on top to take care of all the pure HTTP things, such as routing and so on and so forth. And we have an unlimited number of data providers that we could support. We have some on board in the project. And we initially started out with Elastic Search as a provider, as well as a native, very simple, but native CSV and GeoJSON data providers. The project has matured to support a number of data providers. And more importantly, it's helped downstream developers create their own plugins that they manage in their own projects that is very powerful. So again, here's what we have out of the box in terms of providers and hooking up your data. So we have an Elastic Search provider. There was MongoDB support added a while back. Juiced, who's the session leader here, implemented the OGR provider, which is awesome. And that gives you access, obviously, to hundreds of formats. We also support a covergis concept. So on board, we have an X-Array provider, as well as Rasterio. Francesco, who implemented the tiles provider, we have a Minio support, as well as a basic directory treat. Early up 2021, we implemented OGC API records. And with that, we provided providers for document, sort of no SQL style, backends. 
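To give a feel for that plug-in framework, here is a hedged sketch of a tiny custom feature provider. The base class and the query()/get() pattern exist in pygeoapi, but argument names have shifted between releases, so the signatures below are kept generic; check the provider documentation for the exact current interface.

```python
from pygeoapi.provider.base import BaseProvider

class InMemoryProvider(BaseProvider):
    """Serves a tiny hard-coded GeoJSON collection (illustration only)."""

    FEATURES = [
        {"type": "Feature", "id": "1",
         "geometry": {"type": "Point", "coordinates": [5.1, 52.0]},
         "properties": {"name": "example point"}},
    ]

    def __init__(self, provider_def):
        super().__init__(provider_def)

    def query(self, **kwargs):
        # A real provider would honour bbox, limit, offset, properties, etc.
        return {"type": "FeatureCollection", "features": self.FEATURES}

    def get(self, identifier, **kwargs):
        for feature in self.FEATURES:
            if feature["id"] == str(identifier):
                return feature
        # A real provider raises pygeoapi's item-not-found exception here.
        raise RuntimeError(f"item {identifier} not found")
```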
Again, you can implement your own, which I think is really the power. One of the big powers of the project is that if you're a Python developer and you're not scared to get your hands dirty, and you want to connect your data, you can do so in any way you wish, even in a very customized fashion. I should also mention that we provide a processing framework. That means we're able to expose Python workflow and pipelines as OGC API processes. And that's really powerful, because you can basically expose any kind of workflow or any functionality you want, which is made available through the OGC API processes specification. Along with that, we have support for job control. So imagine asynchronous processing, where you send the request, and maybe the process takes a while to complete. So we have support for job control. We have a simple back end and tiny DB. But again, that is a plug-in framework. So you can set up your own job control plug-ins to plug into your maybe specific machinery that you have in your project or your requirements. But the idea there is that to support asynchronous processing, we need to be able to record status and progress and figure out when a job is done, and so on and so forth. So again, we have a default capability in the project, but we provide the ability to implement your own. Implementing your own plug-in. So you can either implement it as a core plug-in and propose it to the project and maintain it, or you can develop and maintain it into your own repository. And in our configuration, you would just point to your own repository in a Python dotted path kind of way, as long as it's installed in the system. And basically, you don't need to make any changes to PyJP API. You make simple changes to configuration, and you're automatically integrated. Finally, we support schema.org and things like JSON-LD, which allow for mass market search engine optimization. And we have a number of different deployment options, such as our default is on PyPy. This package is a new bunch of GIS. We have a Docker set up in the code base that's available on Docker Hub, and we're on Konda and FreeBSD, and there may be others at this point. With that, I'm going to switch it over to Angelo's. Thanks, Tom. So let's have an overview of the core capabilities of PyJP API. You see here the landing page, where we have default HTML landing page, and this helps the output of the service to be accessible to the crawlers on the internet. And that's all about the new OTC APIs being available in a JSON and HTML format. So here is this is the landing page. This is what you see when you go to the first page of PyJP API. And there you can see the demo instance, where we have collections. We have stack assets, processes, happy definition. It's all there on the first page, and where you can see also the service metadata. Let's go to the next page, please. The open API is the core of the service, and it's core in OTC APIs in general. So the definition of the service is available through an open API document. It's there. It was named Troeger in the past, and can be used to do development and test the service directly on the browser. It's available also as a JSON object, and it's automatically generated from PyJP API from the configuration of the service. Next slide. So PyJP API started initially as an OTC API features implementation. That is the first of the OTC APIs that got finalized earlier. And PyJP API, as Tom mentioned, is a reference implementation for OTC API features core. 
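Before the page-by-page walkthrough continues, here is a hedged sketch of fetching those same documents over HTTP. The base URL is the public pygeoapi demo mentioned later in the talk (any instance will do), and f=json is the standard way to ask for the JSON representation instead of HTML.

```python
import requests

BASE = "https://demo.pygeoapi.io/master"   # substitute your own instance

landing = requests.get(f"{BASE}/", params={"f": "json"}).json()
print("landing page link relations:", [link["rel"] for link in landing.get("links", [])])

openapi = requests.get(f"{BASE}/openapi", params={"f": "json"}).json()
print("OpenAPI version:", openapi.get("openapi"))

collections = requests.get(f"{BASE}/collections", params={"f": "json"}).json()
print("collections:", [c["id"] for c in collections.get("collections", [])])
```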
And here you see the page where we see the features of a single collection here in PyJP API, where somebody can see a map. And this is just the HTML representation. Obviously, somebody can create and develop another any other representation as an output format. Here is the HTML, but obviously there is the geogation output as well in the service. Next. Next, we implemented also support for OTC API coverages. So the coverages are implemented later in the project. And there are providers, the XRA provider, and also the OGR provider, where somebody can directly access raster data from the PyJP API service. So this is already in master. It's already supported. It's not yet finalized as an OTC API yet, but it's in the process of being finalized. And we are hoping to be also an official implementation of that. Next, we support OTC API records. Me and Tom are in the OTC API records standard working group. So we are very keen to implementing the standard as fast as possible. And we will also sell PyCSW later today. So the OTC API records is directly implemented in PyJP API. It's, again, based on OTC API features, but with some catalog-oriented features that have been added to that standard. And it's already there. It's supported. And there is a meta-search support in QGIS directly for searching records in QGIS, for example. Next. We also support OTC API tiles. This has landed in the master branch of the project quite recently. It's also another OTC API in progress. And we already support it. And we can support vector tiles directly from a database right to the API. Next. And already Tom mentioned about OTC API processes. So we support also processing as part of the OTC API, APIs that we support in PyJP API. And as Tom mentioned, we have already implemented the processes API. Somebody can implement a processing algorithm directly and plug it into PyJP API. And then it will be available from the landing page and available to run a process in a remote machine running PyJP API. Yes, next one. OK, so this is the second OTC API that has been finalized recently, I think a few weeks ago. So this is the environmental data retrieval API. And this is already, this also is based on OTC API features, but with some extensions for environmental data retrieval. And it's already supported in PyJP API. Tom has implemented that recently. And there's support right out of the box. Next one. And recently, we also added support for spatial temporal asset catalog. This is the well-known stack. Stack has been around for quite some time as an extension of OTC API features. It's also a catalog oriented specification, but it's a community standard. But it's now going to be related to OTC API records. And we are trying to make things work together. So it's already implemented in PyJP API. And specifically, we implement the static catalog directly. So if you have a collection of stack items, you can just point the configuration of PyJP API directly to those stack items in your drive. And then this will be available and published through PyJP API. Next one. Also, we have HTML templating, which means that you can create your own HTML templates and make your look and feel the way you like it for PyJP API. Tom, back to you. Thanks, Angela. So we've been quite busy since Bucharest. So just a timeline of some of the new features that we implemented. 
A lot of the features were implemented and landed as part of the OTC API virtual sprints, which have been a very, very valuable exercise to have access to the folks developing the specifications, as well as other developers of client servers or parsers or serializers. So if you haven't been to an OTC API sprint event, I would highly recommend it. They are very valuable for implementing these APIs and having these discussions with some of the specification editors and doing some interoperability testing as well. Selected projects. So here's a sampling of some of the recent projects that we're aware of that implement and use PyJP API. So one is a project that I work on. This is with the Meteorological Service of Canada. We have a data dissemination platform, API platform, called MSC Geomet. And this is basically Canadian weather, climate, and water data. We have real time data, as well as archived data records, mostly around numerical weather prediction model, forecast model output. We also do have a hydrometric archive of Canada, as well as climate station data archives that go back over 100 years. We've recently started working on, well, we've always supported OTC API records. And we've done, or sorry, processes. And we've implemented raster data extraction through OTC API processes. And we're currently working on extending our raster support to deal with underlying data stores, which are ZAR or other types of data queues. Francesco and Juice worked on a COVID-19 demo server at the time. And it provided official data. It was an aggregator of official data. And actually went out to services that already existed. There was some misery integrations, if I recall, through OGR. But it was a powerful demonstration and a good example of how quickly things can be put together to provide an important resource. So we have that on our demo sites. If you go to demo.pyjoapi.io, we have a list of demos there. And the COVID-19 demo is one of them. The US Internet of Water has been an important collaborator to the project, specifically through Duke University. They have an effort in the Internet of Water to implement modern water data infrastructure. So they've implemented a reference feature catalog, a stream-gauge metadata catalog, as well as a sensor things data demonstrator, which is actually a PyJu API plug-in, which looks like an API features, but it's actually talking to a sensor things API in the back. So all that to say is really innovative and interesting ways that we can see the project being used for all sorts of different domains and workflows. As part of that, the USGS water folks have developed a number of their own plugins for water data processing and doing other OGC API processes through their high river client. They have a project called High River. The links are all there in this slide. But they've also implemented a PyJu API plug-in cookie cutter. So this is anybody who wants to implement the plug-in. You can download this cookie cutter that they've implemented, and it'll get you up and running quickly and efficiently to implementing your own plug-in, which is awesome. Back in Canada, the Natural Resources Canada is developing an open disaster risk reduction platform, so OpenDRR. It's an open source platform. There is a link there on GitHub, and all of their API provisioning for their data resources is happening through PyJu API. So it's a nice example of putting PyJu API into an entire framework of data generation and then publication and also extensive use. 
So I think they're using AWS on this one. So you can see how it can be implemented in a cloud environment. Across the pond, Euro DataCube and our friends at EOX, they've implemented PyJu API processes for headless notebook execution that can be used in workspaces. So this is a really innovative way of using the project for processes. And we've participated in that a number of sprints, which I've mentioned previously. So just closing up, I'm going to go through a couple of things so just closing up. In terms of roadmap, this is what here's the scene and what's implemented and what's in store. And we are planning on, well, we can't see in the slide, but we are planning on implementing OJC API maps and styles in the future, as those specifications become more mature. We want to do an API refactor to make things a little bit more modular, we want to support transactions. There's some work going on for an admin UI, in case somebody doesn't want to work with the configuration and everything through the command line. Continued stuff on schema.org, content negotiation. And we're now part of the OSU Live project, so that's great. Again, we're looking into implementing a Django front end and we've had discussions with the GNO community on making PyJu API a data back end for that project. So closing out, there's a number of support mechanisms and companies that you can get in touch with if you need dedicated support for future development for the project. And with that, I will leave you with these links here. I'd like to thank everybody for their time and support, and I wish you a good Phosphor G 2021. OK, thanks, Anselos and Tom. And in the meantime, we have gathered quite some questions. And I'll just bring them here in the screen, and because this one was upvoted four times, so it's at the top. And you can see the question here. So the question is, PyJu API versus GeoServer, can you compare? We had the same question yesterday, I think. Right, we had this question at the doing geospatial with Python. So there's a number of ways that it's hard to compare. It depends what your use case is and what your workflow is. I would say it depends on your data volumes, your configurations, your environment. There are so many different factors that you can use to do a comparison. I know in the old days, we used to have a WMS shootout at the Phosphor G events. I'm not sure whether one day we might have an OGC API shootout. That might be something interesting. But I will say for the Python developer who wants to use something very modular and be able to tie it into their own pipelines, that's one of the value propositions of the project. And we also have very high on interoperability, much like the other projects. So that's all I'll say for now. That's a big question, but hopefully that has. And we have more questions, but it's nice we have a voting. And this is the second one with three votes. Kenneth handle larger amounts of data. I would say yes, because of the way the OGC API specifications have been architected. They allow for things like paging or different ways of sub-setting the data. So whichever data server you're going to, whichever data server you query, if you ask it to return a million records, that's going to take a long time no matter what you do. So there's other strategies. And I would say most of it is in the hands of the way the APIs are architected, as well as how your data is set up in the back. But Kenneth handle large amounts of data, I would say yes. 
We are handling multi-gigabyte archives for one of my projects. It doesn't seem to be an issue. There's work that goes on in the back end to make that performant and to make that available, but it seems to fit well. OK. We have time for one last question in the third place. Says, what about OGC API? No, wait. Sorry. It was this question. What about OGC API maps? Do you plan to support it? That's a good question. We do plan to support OGC API maps. There was a OGC API sprint, virtual sprint, that concentrated on maps and tiles, I believe, earlier this year. We implemented a prototype that I think is it's in a branch. Somewhere in our code base. You can use any back end that you want. In our case, what we did is we used the map server, map script library, and we use map server as only a map renderer. So the idea there was that we can generate a map file on the fly, use map script to generate that PMG image, if you will, and support OGC API map. So that is in scope. And we're also looking at implementing OGC API styles. Having said that, once those specifications are ratified, we'll take a deeper look for sure. OK, thanks. And well, we're within time. I want to thank you once more, Tom and Angela, for this great presentation. And we hope to see more of PyGY API. And so we're off to the next speaker. I will remove the other speakers and the screen.
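The paging strategy from the large-data answer above looks like this on the client side: request a page of items and keep following the rel="next" link until the server stops offering one. The endpoint and collection id are placeholders.

```python
import requests

url = "https://demo.pygeoapi.io/master/collections/obs/items"   # placeholder collection
params = {"f": "json", "limit": 100}

fetched = 0
while url:
    page = requests.get(url, params=params).json()
    params = None                      # the next link carries its own query string
    fetched += len(page.get("features", []))
    url = next(
        (link["href"] for link in page.get("links", []) if link.get("rel") == "next"),
        None,
    )
print("fetched", fetched, "features in pages of 100")
```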
pygeoapi is an OGC API Reference Implementation for Features. Implemented in Python, pygeoapi supports many other OGC APIs via a core agnostic API, different web frameworks (Flask, Starlette, Django) and a fully integrated OpenAPI capability. Lightweight, easy to deploy and cloud-ready, pygeoapi's architecture facilitates publishing datasets and processes from multiple sources. This presentation will provide an update on the current status and latest developments, including the implementation of numerous new OGC APIs including gridded/coverage data (OGC API - Coverages), search (OGC API - Records), vector/map tiles (OGC API - Tiles), and Environmental Data Retrieval (EDR API).

Authors and Affiliations:
- Tom Kralidis (Open Source Geospatial Foundation, [email protected])
- Francesco Bartoli (Geobeyond Srl, [email protected])
- Angelos Tzotsos (Open Source Geospatial Foundation, [email protected])
- Just van den Broecke (Just Objects B.V., [email protected])
- Paul van Genuchten (GeoCat B.V., [email protected])
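Since the abstract mentions publishing processes as well as datasets, here is a hedged sketch of an OGC API - Processes plugin, modelled on pygeoapi's own hello-world example. The metadata block is trimmed to the minimum, and the exact metadata keys and execute() signature may differ between releases.

```python
from pygeoapi.process.base import BaseProcessor

PROCESS_METADATA = {
    "version": "0.1.0",
    "id": "echo-name",
    "title": "Echo name",
    "description": "Returns a greeting for the supplied name",
    "inputs": {},     # abbreviated; a real process describes its inputs fully
    "outputs": {},    # abbreviated; likewise for outputs
    "example": {"inputs": {"name": "FOSS4G"}},
}

class EchoNameProcessor(BaseProcessor):
    def __init__(self, processor_def):
        super().__init__(processor_def, PROCESS_METADATA)

    def execute(self, data):
        name = data.get("name", "world")
        outputs = {"id": "greeting", "value": f"Hello {name}"}
        return "application/json", outputs
```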
10.5446/57235 (DOI)
He is currently an STI specialist at ISRIQ. And he's been very involved also in phosphorgy projects like GeoNetwork and PyGEO API. So without further ado, I give you the room. I don't know who is starting. Maybe I can start. Let's see. I'm wondering if I'll share the screen. So maybe Paul, I'll drive and then you can just cue me. Is that okay? Okay. I see you're on mute, but I see that you agree. So let's go. Yes. Okay. Oops. Sorry about that. I muted. I didn't know that. Hello. Sorry, I don't know if you were trying to say something. Yeah, I was. Yeah, fine. Go ahead, Tom. Cool. Okay. Maybe I'll start off with a couple of notes. I guess this presentation is on how QGIS is increasingly supporting the OGC API efforts. And we'll talk a little bit about how they work together. I'll provide a small update on OGC API efforts. And then Paul will get into some of the details around QGIS and we're happy to answer any questions or comments. So OGC API. Hopefully you all have heard of what OGC API is. So going back in time a bit, we've started from in the mid to late 90s around doing web services with a lot of XML and basically remote procedure call over the web. And we moved on from their sort of web 2.0 type things. There are some realities associated with having those OGC, well, with specifications designed in that certain way in terms of overloading the HTTP specification as well as some difficulties in having things crawlable or searchable on mass market search engines and so on. So excuse me, in 2017, W3C came up with the spatial data on the web as practice. And a lot of efforts started to sort of culminate around the concepts of REST as well as JSON and OpenAPI. OGC had an API white paper in 2017 if I'm not mistaken, which basically called for a lot of created the opportunity to create these new OGC API standards which focused on JSON and HTML as first class and making things mass market friendly and web developer friendly. A web developer should not have to download a 400 page document to implement the specification whether it's a server or client or what have you. Mind you, I don't think anybody should have to download a 400 page document. But here we are. You can see a number of the specifications down below, whether they're maps, styles, coverages, features and so on. Most of them are successors to the first generation of OGC API standards. These standards are developed interactively. So these standards are developed now on GitHub. If you go to github.com slash open geospatial you'll see a number of the standards there. You'll see OGCAPI.org. You'll find all the information on all the APIs and all their specification documents fully available there in HTML. There's GitterChat so that you can easily interact with the community around the specification development and questions around compliance and performance and testing for that matter. And as mentioned in the previous presentation by Clemens, Peter and myself, there's really a building block model to all of these specifications that will allow you to plug and play and put those Lego bricks together to be able to come up with something that meets your needs. With that I'm going to turn it over to Paul. Thank you Tom. So I use it. So where to start? So OGCAPI features and QGIS. A lot of people contributed to QGIS over time. I'm not in a position here to mention everybody that contributed but hopefully I give everybody that contributed enough credits. Here they are. 
I want to drive in quickly since we only have 15 minutes to see some functionality that is available these days. So starting with OGCAPI features, it was also the first one that kind of got adopted within the OGC or started the movement. This was implemented by Ivan Ruo and funded by Planet and it was implemented as an extension to the existing WFS provider. So I think it's landed in somewhere 314, 316, I don't know. You will find this functionality under the add WFS layer and then instead of putting there the WFS capabilities URL, you put the OGCAPI features URL and from then on it kind of works in the same way that it fetches the layers from the service and you select the layer and you add it to the map. There's some special parameters that you can add here, for example the pagination size. So in that sense we have the same challenges as WFS that if you have a very big page size that the service may actually load quite slow, if you make a smaller page size it will be faster but you will only see a partial result. This is always something to keep in mind. So what is also good to know here is that it uses behind the scene the same architecture as WFS, it means that QGIS caches the features in a local cache. So if you zoom in and zoom out outside the area, it will still have the previous features in its cache. So it doesn't need to go back to the server to fetch them again. Next slide Tom. So from Ivan we discussed a bit before this meeting, some learning points from his side is that the fact that the items in the collection most of the times are untyped. So in theory you could add them, add the definition of the item to the open API document but the specification doesn't necessarily need you to do so. So if you have there an item say a tree, if it has a property branches and size that could be specified in the open API document but it's not always required. It means that a client like QGIS has to get the first page of results to see what is actually returned and which properties are available on the object. It's a bit awkward for a client and especially people that are used to build clients against open API specifications will be a bit surprised about this outcome. On the other hand for the server implementers this is easy because they don't need to go into each object to see what properties it has and advertise it in the open API document. Another point that we came up with discussion is that if you have hierarchical data like for example I have a soil profile in the soil profile is a horizon and in a horizon has a certain clay percentage and a certain magnesium value to query the magnesium concentration over the 12 micrograms for the second horizon in profiles within a bee box you need quite a dedicated filter expression capability and this is currently not yet fully crystallized how that will work out but it is something that we maybe we have some time this week to discuss that topic. Next one Tom. OGCI records I give the floor to you Tom for this one. You muted. Okay great thank you Paul. So for OGCI API records as Paul mentioned QGIS is a long standing project with a number of contributors and a long time solid support of the OGCI standard so we can see WMS, WFS, WCS and so on Paul just talked about OGCI features now let's talk about search. 
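Before moving on to search, here is a hedged sketch of scripting, from the QGIS Python console, the same connection Paul just described through the GUI. The URI keys, in particular the version token and the paging options, are assumptions that can differ between QGIS releases; the connection dialog remains the reliable route.

```python
from qgis.core import QgsProject, QgsVectorLayer

uri = (
    " pagingEnabled='true'"
    " pageSize='1000'"                          # the page size trade-off discussed above
    " typename='provinces'"                     # collection id on the server (placeholder)
    " url='https://example.org/ogcapi'"         # OGC API landing page URL (placeholder)
    " version='OGC_API_FEATURES'"               # assumption: token name may vary
)
layer = QgsVectorLayer(uri, "provinces (OGC API - Features)", "WFS")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)
else:
    print("layer failed to load; check the URI keys against your QGIS version")
```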
So in QGIS which Paul and I are presenting on tomorrow if anybody else is interested in seeing that presentation I won't go into too many details here but in QGIS is a long standing catalog client called Metasearch which has always worked with the OGCI catalog service for the web specification. Well in the last year alongside of working in the OGCI standards working group we also implemented extended support in the Metasearch plugin which is a core part of QGIS so when you download QGIS you will have the Metasearch plugin installed by default where it's all almost merged but as you can see in the lower left part of the screen but we implemented OGCI API records support in Metasearch and Metasearch internally uses the OWS live Python client OWS client library so that client library itself has been extended to support OGCI API features records and coverages that's the current support and we're extending it as time goes on and more OGCI API specifications are ratified but we do have a pull request in with some working functionality against OGCI API records so there you can see a screenshot there that's hitting a note that could hit an OGCI API records endpoint in the same way that it would a CSW endpoint so we've extended the client to be able to do that so when that's merged you'll be able to say okay add my OGCI API records client you do have to specify that it's an API records endpoint or a service as opposed to a CSW service and then after that all of the functionality is abstracted away so you don't really know whether you're working with a CSW or an OGCI API records client and that's the goal of the beauty and the value of the Metasearch plugin so in theory we're able to add any type of search service but here we're concentrating on QGIS on OGCI API records for that matter and in that work we actually did some digging in Metasearch and abstracted the search capability to make it very very easy to add more APIs over time if we need them or profiles of OGCI API records for that matter so like I mentioned it's currently a pull request we're just have some final issues to deal with with regards to packaging and we hope to have that merged very soon maybe even this week there's a code sprint on Saturday so let's see what can be done with regards to that some issues or some discussion plans that came up with OGCI API records link types I mean this has always been an issue in in catalogs and search APIs in discovering actionable links so how do I know that a link is a WMS link or a WCS link or WFS link well we've always had that issue and we have a way of working with that in QGIS which also requires some cooperation from the metadata providers but now we have a new specification with not only having the requirement to articulate the link types of the first generation OGCI web services but now we have we introduce link types of the new OGCI API standards and the link types in the OGCI API records case is going to be a lot easier for Metasearch because we have the ability to crawl down a certain level of an OGCI API record endpoint so we can get to just that collection because of the hierarchical and the restful approach of the way the standards are designed so current so you won't have to open Metasearch find the data set add the WMS and go back and do a get capabilities again and do that round trip and find the layer now you will be able to connect to the OGCI records endpoint find the find the matching record and whatever the link type underneath is which when it's an OGCI API link you can 
you'll be able to load that directly in QGIS so I think that's going to be a very valuable feature as as more and more of these things get get put online and server data providers make their metadata discovery metadata available through OGCI API records and the new link types so there is change management involved but that's a W you know OGCI web services and OGCI API change management thing that I think will be here for a while and this is what that some of what that change management looks like here in OGCI API records and what that means for the for in QGIS. Other other issues with regards to QGIS there are other OGCI APIs as we've mentioned we're not aware of any current current initiatives at this point in time that implement the other APIs but I would imagine over time that we'll start to see updates on clients for maps, tiles coverage, coverages and processes as those specifications mature and they you know they're formalized by OGCI and and you know clients want to have that functionality implemented. The OGCI does have an API roadmap which which is which will be valuable to I think is valuable for folks to look at if they want to see where what the the timelines of the specifications are and that'll give the QGIS community a better idea of when's a good time to dive into the standard and and implement it so I think this is a you know a great advancement in the OGCI API standards and QGIS is a you know is a very powerful GIS client and we're you know we're we've demonstrated that we're able to implement these standards with a relatively low barrier low barrier workflow. Paul? Yeah let me take that one so so while discussing with Ivan of course he introduced the aspect that the GDAL supports the full OGCI API actually a bit more than than QGIS itself by now so there's support for maps and and tiles and it's actually possible to to add those to QGIS also via this MOAW I don't know you say more file I don't know so it's kind of a processing file where you say okay load this data via GDAL so that's that's already an option in QGIS so if you really want to have your OGCI API maps in QGIS you can do it via such a file that's that's described on that endpoint. Next one yeah so so then dive a bit into QGIS server of course QGIS is mainly a desktop application but there has been some initiatives to have also QGIS as a server so you run it as a Docker instance and request it as a WMS or WFS and within that next one Tom within that area there was also initiative to add OGCI API features to the QGIS server and it works actually out of the box the only thing you need to be aware of is that once you before you upload your your QGIS project to the QGIS server you have to tick a box in the in the server properties of the layer saying that it should be exposed as a WFS layer it will then automatically be advertised also as an OGCI API features layer so that that's really useful and as far as I know no support for maps and tiles also on that side yet but yeah I hope that that work starts soon. 
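The GDAL route mentioned above can also be scripted. OGR ships an OAPIF vector driver that opens an OGC API - Features landing page directly, and anything GDAL/OGR can open is available to QGIS as well; newer GDAL releases additionally have a raster-side OGCAPI driver for maps and tiles, which is the mechanism hinted at for the description file. The endpoint below is a placeholder, and this is a hedged sketch only.

```python
from osgeo import ogr

ds = ogr.Open("OAPIF:https://example.org/ogcapi")   # landing page URL (placeholder)
if ds is None:
    raise RuntimeError("could not open the OGC API - Features endpoint")

for i in range(ds.GetLayerCount()):
    layer = ds.GetLayer(i)
    # Note: counting features may trigger extra requests on remote services.
    print(layer.GetName(), "-", layer.GetFeatureCount(), "features")
```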
Next one yeah so so that's about it do you have a QGIS there Tom or shall I maybe Joanna maybe you can add my screen to the that's a good idea but it's also maybe you can already prepare some some answers to some of the questions Tom because we entered the discussion phase of this presentation so I have here a QGIS instance with focused on the Netherlands with some of the provinces and those provinces are actually coming from the OGCI API features so if I add a layer of type in this case I have to go add WFS layer that that's the one that includes the OGCI API features here is the first you have to set up a server connection if you edit that you will notice that it you just put here the the endpoint of the in this case is a PyG API running on a sandbox environment and here you have those options like putting the page size and you can detect here and says oh it's an OGCI API features and then from that it will fetch from the collections page all the collections that are available in this service and you can then add that to the map so it works exactly like a WFS service and I wanted to add this aspect also related to the OGCI API very records is that this implementation also helps us at the server side to develop our server because it gives such an easy client experience that we always use QGIS for testing our server implementation and that's also a benefit because we make sure that it works on QGIS first before any other client unfortunately I cannot show the OGCI API records on my Mitter search because I had a problem with the OE or WSLip sorry Tom that's okay I mean we are also running out of time that's a yeah so we better go into the questions yeah thank you thank you very much okay so we have two questions here so the first one is speaking of caching caching is also needed for WMTS XYZ raster tiles QGIS QGIS is not responsible for the largest data load of the free map raster tiles according to the OSM or maintainers hope that becomes better with OGCI API it's a question yes I don't know the internals of how how this is managed in QGIS but I know that I am one of those users that that add local layers of street map almost every day so so I can imagine that it's very popular I mean having the white screen with a with a simple shape file on top just looks bad you always want a background and open street map is also my favorite one but yeah it's a I think providing these with these background servers background tile servers is really a point of attention like a geo network geo server open layers in all their examples they're using the open street map tiles which is a community initiative also so we should be careful I hope at some point OSGO board for example may set up a tile server that we can use for this type of demos and it doesn't need to go up till level skill 18 or 20 if it's for this type of demos skills 12 is just fine so that that's my idea and that I don't know what do you think yeah I don't know much very much about the internals but yeah I'm with you Paul okay so the next question is do you think we will need not only a QGIS OGC API section but maybe thinking in the future a different QGIS platform that support and display the requested information think on sparkle queries it will be possible to gather not only maps geocal localized data but also sounds measurements features papers videos etc I think that's a good question my my my initial reaction to that would be QGIS provides you know a strong mechanism for extensibility so you can develop your own for example your own 
Python plugins. So I could see an ecosystem of plugins being built to satisfy those use cases — in fact MetaSearch itself is written in Python, not C++ — so I can see an ecosystem of these plugins being built just the same to deal with SPARQL and maybe some other things. I don't know, Paul, what do you think? When I look for example at the 3D world — in a 3D world view you interact with what you see around you — and this is also a bit the paradigm of QGIS, that it is layer oriented, and it's really hard to make links between layers. Like we see here the cities of the Netherlands and then there's the OpenStreetMap in the back, but it's really hard to make an interaction between the two. And that's related to QGIS and most of the other desktop clients. But in a 3D world it's far more common to click through, to follow links and see interactions between a traffic light and the street it's connected to, and the sound that it makes. And when I see a thing like SPARQL, it typically is also object oriented: you go through a network of objects. And then I think QGIS by design is not really a tool fit for such spatial navigation. So that's a bit where I'm heading: on one hand the layer paradigm brought us a lot, but now it hinders us from advancing in that direction, and I think at some point we need new clients that engage with data in a more object-oriented way. Okay, thank you, thank you very much Paul and Tom. Let's give a virtual clap to the speakers before moving to our next presentation, where we're going to be talking about modular OGC API workflows for processing and visualization.
QGIS demo as a generic Desktop capability. Alongside the various OGC API server implementations, more and more clients are being set up to interact with the OGC API services. This talk focuses on QGIS and some new capabilities of QGIS and GDAL to interact with OGC APIs. - The WFS provider in QGIS has been extended to support OGC API - Features; the functionality builds on top of the WFS provider. - The QGIS MetaSearch plugin is in the process of being extended to support OGC API - Records as a dataset search plugin for QGIS. MetaSearch internally uses OWSLib, a Python library with extended OGC API client support. - Also GDAL, a swiss army knife for spatial data, has been extended to interact with various OGC APIs. In case there is news to share on client support for OGC API Maps, Coverages, Tiles and Styles, you’ll hear of it during the presentation.
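As a companion to the GDAL note in the abstract above, a small hedged sketch of opening OGC API endpoints through GDAL's Python bindings; the demo URL is an assumption and the raster endpoint mentioned in the comment is hypothetical:

from osgeo import ogr

# "OAPIF:" tells OGR to use its OGC API - Features driver; the demo URL is an assumption
ds = ogr.Open("OAPIF:https://demo.pygeoapi.io/master")
if ds is not None:
    for i in range(ds.GetLayerCount()):
        layer = ds.GetLayer(i)
        print(layer.GetName(), layer.GetFeatureCount())

# For raster-oriented endpoints (Maps/Tiles/Coverages) GDAL also understands an
# "OGCAPI:" prefixed connection string, opened the same way with gdal.Open().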
10.5446/57238 (DOI)
Okay, so I think it is time for our next presenter. I'm not sure that I have a Swiss watch — Marco can tell me. So it is my pleasure to bring in front of you today Marco Bernasocchi — I hope I'm saying it right. Marco, together with his team at OPENGIS.ch, is bringing important contributions to the QGIS project, so you'll be seeing him and his colleagues around in the FOSS4G conferences and social gatherings and so on. Today Marco is going to talk to us about QFieldCloud. So without any further ado, Marco. Thank you very much, Codrina, and hi everybody. Yes, as Codrina said, today I'm going to try to show you what QFieldCloud can do. I am Marco Bernasocchi, I'm the chair of the QGIS project and I am the CEO of OPENGIS.ch. If you want to follow what I do and what I say, @mbernasocchi or marco@opengis.ch is where you can find me. Now let's get back to QFieldCloud. A bit of context: why do we need QFieldCloud? First of all, well, that's pretty easy — it's because QField is basically taking over the world. We have users everywhere, we have plenty of users: with the latest version of QField, 1.9.6, we have more than 400,000 downloads and more than 110,000 monthly active users. So this is pretty massive. And when we see this, we have to think how to make the life of so many people as easy as possible. And that is why today I have the pleasure to announce that finally QField is available for Windows machines, and especially something very new, something that I'm really happy to announce today: for everybody that has been asking about QField on iOS, on your Apple devices, starting from the day before yesterday we have actually pushed QField to TestFlight. So it's out there, you can join, you can install it with TestFlight. It's in beta obviously, but it's out there for everybody to try. You can go to qfield.org/get and you'll get automatically redirected to whatever platform you have. Now, all those active users — we've been talking to many of them, and a lot of them told us, hey, look, we're working online, and a lot of them told us, well, we're looking to work offline in places where we do not have any connectivity. So we have some issues when we go offline and don't have connectivity anymore: the data is not synchronized, we need to get back with the cable and so on and so on. That is why we actually came up with something like QFieldCloud. The other reason we heard very often was: we have teams, we need roles, we need conflict management. So those are the kinds of things that we kept hearing our users telling us, that we should do something to make it easier for them to work. And this is what QFieldCloud can do for you. What I show from now on is screenshots of the version of QFieldCloud that is available on qfield.cloud. You can register; it is currently in, let's say, a semi-closed beta. We are basically opening it up slowly to the people that were on the waiting list. We have had a waiting list open for a long time and we had a lot of interest — we had more than 5,000 people on the waiting list at a certain point — and now we are basically opening up accounts from the waiting list. As soon as the waiting list is empty, we will open up the infrastructure for registration completely. So what I want to do today is just go through what you can do with QFieldCloud in its current status. I took all the screenshots yesterday.
So it is brand new. I actually have a couple of screenshots from the development version in it, with fixes and a couple of features that we did today as well. So it is really the newest you can get, and you will see it today. Obviously it all starts with your account. That is where we log in, and we get to your own personal projects, to your own place where you see what you are involved with, what kind of projects you have, which organization you are a member of, and so on and so on. This is pretty much something that people probably know from platforms like GitHub or GitLab: if you are using some of those, it is kind of your home where you see what is happening and what the status of the project is. I mentioned we do have organizations. I am Marco Bernasocchi, I am a member of two organizations, which we saw earlier on: OPENGIS.ch, which is the one we use for work, and then we have the Honey Honey Incorporated, which is the one that we use for demonstration. And here we can see that the Honey Honey Incorporated has teams. So QFieldCloud supports teams: you can create teams that have different members and different roles, which means that when you are creating a new project in the organization itself, you can add the user with a certain role. It can be a member, it can be an editor, it can be an admin, and this gets reflected there on the project. Obviously you can then go on and do it project-wise; we will see later on, when we get into the project, which kind of permission the user has on a certain project. Next step is obviously choosing a project. Here we are looking at the Supertest project by Honey Honey Incorporated, and here we see that two people are involved, there are five project files in the project and we have had three changes there. So it's a pretty easy way to see the status, what's happening and when the last changes on the project were, and so on and so on. Up on the top left, the logo is actually a little generated overview of the project — the QGIS project that we have in the background. If we go into more detail, we click on files and we'll see what kind of files are part of our cloud project. What do we have there? We have some GeoPackages in this case, we have a QGIS project, and down at the bottom we also see some images that were taken in the field. So here we can follow a kind of versioning of all the files that were pushed to the cloud, all the changes in the data, all the changes in the project and so on and so on. So here we get an overview of each file at each different version of the file. And the next step is seeing the changes that were done. So while here we can say, well, we do have a certain amount of files that have their own versioning, which I can also download at a certain timestamp, in the next menu item I can see the changes. I can see what actually happened, where the changes from the field that were applied are, or where the conflicts are — and we'll get into that later on. So here we can really see who worked when on the project and what was updated. When we have a change, we can click on it and we'll get more detail. And this is part one of the things that I like most in QFieldCloud: I actually can see what has changed in a specific change set. So let's say I take the first change set here, I click on it and I get the possibility to have a look at what attributes were changed, what geometry changes were done, and I also get the possibility to see the raw change.
For example here we see that it was an action of creating a new item: it has a certain feature ID, and there is a picture down at the bottom, which you don't see much of, and you see that a new point was created. Also at the bottom we see a JSON representation of the raw change. So if you would want to get that out of the tool and use it for your own integration in something else, there is no problem — you could get that via the API. So there are very, very flexible ways to work. Next point, as I mentioned before: collaborators. Each project can have collaborators, and collaborators can be either a real user, like here we see Ivan Ivanov, or a team — for example we see here that the demo users and the Ninjas team are both added. The interesting part is that we can set per project the level of collaboration which is allowed to those users. For example the demo users are set to the role of reporter, which means that they can only bring in new information, they cannot delete existing information. Then we have the Ninjas group, who are managers: they can do plenty of things, they have more or less full freedom. They are not as powerful as administrators — they cannot delete the project — but they can change a lot in the projects. And then we have in this case Ivan, who has a role of reader, meaning that he can use the project on a read-only basis, so he is not going to be able to push from QField to QFieldCloud. Next we can see the jobs that were performed by QFieldCloud: we see when deltas are applied, we see who made the changes, we see at what time they were done and what status they had. So here you really see the status of what's going on in the cloud itself. Last point regarding projects is the settings. There's not much there yet. There is the ownership, which you can change — you can give your project to somebody else — you can obviously delete a project, and you can make a project public. And then there is one more setting which is very important: the override conflict setting which, when checked, basically means that if you have conflicts QFieldCloud will automatically use the latest version that came in as the new data. This might be what you want, but it also might be what you do not want. So take care, remember to go and check this setting and untick it if you don't want conflicts to be managed automatically. This is an overview of what the web interface looks like. What I'd like to show you now is a complete workflow via QField and QFieldCloud. The first thing we do is obviously log in to QFieldCloud using the QFieldSync plugin — if you've been using QField before, you already have the plugin. One very important thing is that we are absolutely committed to keeping QField working the way that it used to. We are not locking you in to QFieldCloud: you don't have to have a QFieldCloud account if you do not want one. You can still use QField the very same way that it has always been, because we believe that that's the way it should be — QField should work also simply by cable or with whatever integration has been built so far. Once you log in with your account you can click on the Synchronize button and you'll get to see the same list that we saw earlier on the web. We'll see it in QField — sorry, in QGIS. From here we can click on synchronizing, we get the project that we are interested in, we can select it, click on it, and everything gets downloaded, whatever is needed. Then we see that the project is downloaded and it's opened.
Here the little checkbox will show us that the project is available locally, and being bold means that it is the project that we are currently actually using. I can go ahead, I can change my project, I can add new data in QGIS, I can modify the rendering of the points, I can modify colors and so on and so on. Then I can just synchronize again and my changes get pushed. Here we see that I had changed the project itself plus I had changed the GeoPackage. Obviously this also works if you have a PostGIS database in the background: it will push the data out to the cloud in a seamless way. Same thing if something happened in the field — if somebody changed something in the field and pushed it to the cloud, we can from here download the data from the cloud. The one thing that we are still missing currently, and we are implementing it, is a warning on the QGIS side telling you, hey, you have something new on the cloud, you should download it. Currently you need to have a look at the cloud, see if there are new things and then download them. QGIS is always seen as the master in this case. Advanced, per-layer actions: you can decide to ignore certain layers, you can decide to do offline editing on certain others and so on, or just use them directly and then just push. This was if you want to get a project that already exists. Very often when you start, you are not a member of an organization yet, you are using your own project, so you take the project that you already have and click the Create New Project button, which will start a wizard, and in this wizard you can just click next twice, select where the project should be saved locally, go further, and it gets converted and uploaded to the cloud, ready to be used in the cloud. And if we want, we can go and set the per-layer actions in the advanced settings if needed, and if not, do more changes if needed and then push everything up. Once we have pushed everything up, it's time to move to the field. With QField we can log in just as easily, here with the QFieldCloud projects button. Once we log in, we get the list of all the projects — we see here that we have the Honey Honey Incorporated Supertest project — and I'm refreshing the list. I open the project and I see the very same thing that I had earlier on the desktop, but now I'm on the mobile: same rendering, same powerful tools as we are used to with QGIS. So we can edit something in the field, we can see that there was a change tracked with the little number one, and we can go and push changes. And once we're done, we are up to date in the cloud: we'll see that now we have a new change, we see that there was a new value, something was created, we see a job, the delta was applied, and once it's finished we see that there is a new version available on the cloud. When I go back to QGIS, I can download this, which will replace my local file, and the three attributes get updated like they were in the field. I was planning to show you how to recover versioned data, but I'll skip this and I'll quickly go to the field conflicts. So if we created a conflict in the field, we can see this in the conflict layer, in the conflict mask, where we can choose either to ignore this change or to apply this change. QFieldCloud is released under an MIT license, so you can customize it the way you want, implement it in your own workflows, you can build on top of it — it's all on GitHub. Go customize it, give back if you can, help us fix it, give comments and so on.
We are hosting it for you on qfield.cloud, or you can deploy it on your own cloud, obviously, with your own infrastructure. If you are going to use qfield.cloud, it is secure and sustainable hosting in Switzerland, so your data is under very strict laws. And we do have different tier pricing: our community version, which is free; a pro version, which is for individuals that want all the features; and then we have a team version that is more for companies with multiple users. What's really interesting there is that we are actually only billing active users. If you have any questions, do not hesitate to contact us, either at opengis.ch or via Twitter on one of our handles. If you want to find out more, go to qfield.cloud. Currently you still have to subscribe to the waiting list because we haven't finished emptying it, but very soon the list will be empty and we will turn on the registration. Thank you very much and have a great FOSS4G. Thank you very much Marco for the presentation. Let's see if we have any — oh, you have a lot of hearts flying around in Venueless. Let's see if we have any questions. I see nothing here; if there was one and I missed it, please repeat it, but I think I didn't miss anything. Maybe I would have one out of curiosity: you mentioned that, if using your cloud, there is larger data hosting. Could you give us a bit of an idea what larger means? No, what I said is that you can host it on your own infrastructure or on our own hosting, and basically the hosting that we are building is going to be software as a service with limits on the — let me just get to the correct slide — on the amount of users that you have. What you've seen in the screenshots in the presentation was the team version, where you have all the functionality for organizations as well. And the pro version is mainly for a person that works by himself or that doesn't need the organization part of things. And what I mentioned about our own hosted solution is that, yeah, it's going to be in Switzerland — oh, it is in Switzerland already, not "going to be" — so a pretty strict law on data protection. Thank you. A couple of questions now. Yes, so we have one: do you have a plan to implement QGIS Server on QFieldCloud so it becomes able to distribute web maps through QFieldCloud seamlessly? Yes, that is something that we plan to do later on, not before release, so when we go to general availability it is going to be without that part. There is already a full QGIS running in QFieldCloud, so it's not going to be a big thing, but we definitely plan to allow that as well. Okay, thank you. And now one: can we host QFieldCloud on premises or in a private cloud? Yes, as you mentioned, yes. How about localization of GSS? I'm not sure. Yes, I think it's GNSS, otherwise I don't know what GSS stands for. Maybe it's related to SMASH still. QField can deal with external GNSS devices that deliver NMEA strings over Bluetooth, so I can answer that question for QField as well: the answer is yes, you can get all those things into QField as well. Okay. Okay, so I see a lot of... SMASH. Yes. No, I'm not sure about that, but QField can. Okay. So it's good. Checking to see — I see you're also here with OPENGIS.ch, it's you, right? Yes, and I have a colleague here to answer as well. Okay, perfect. We can both check so we don't lose anything from these questions.
For some reason, I cannot see the questions tab — for me it's empty. So I'll have a look myself. Thank you. Yeah, can we host — yeah, we already answered that one — any connection with QGIS or other open source desktop GIS? Well, I guess that's not for QField. Yeah, okay. So I think that is it. We have a few more minutes if you'd like to add something, or we can have a little bit of a break until our next presentation. So... I can show this. Oh yeah. Oh good. It's a nice one, but it's real. It's for real. It's here. Perfect. Perfect. I'm not converted yet — I mean, I take away the "yet" — to the other device, but I'm very, very happy that we finally can have everybody on board, and do amazing things too. Yes. So, if you haven't tried QField yet, obviously go get it at qfield.org/get. And if not, try QFieldCloud — or at least subscribe to QFieldCloud, it's a pretty cool tool. Perfect. Okay, Marco. So I won't take any more time from you. I'm going to take your pitch, since in one hour we are going to be shown all the amazing things that we have in QField in another talk. Yes, and I'm going to introduce him as well, so you are helping me a lot. Oh, good. Thank you very much for the presentations and for answering all the questions, enjoy FOSS4G, and I hope to see you soon in person when, you know, things move on to a next FOSS4G or whatever other conference. So, thank you very much. Thank you. Bye bye.
QFieldCloud's unique technology allows your team to focus on what's important, making sure you efficiently get the best field data possible. Thanks to the tight integration with the leading GIS fieldwork app QField, your team will be able to start surveying and digitising data in no time. Discover what QFieldCloud has to offer and how, thanks to seamless integration with your SDI, it can help make your teams' fieldwork sessions pleasant and efficient. And if you want to roll out your own customized version, nothing will stop you, QFieldCloud is open source! QFieldCloud is a SaaS (software as a service) solution built by OPENGIS.ch that allows your team to seamlessly integrate field data to your SDI. QFieldCloud is written in python using the Django Web framework that encourages rapid development and clean, pragmatic designs. QField is the mobile data collection app for QGIS with more than 110K active monthly users and 400K downloads. Discover how the seamless synchronisation with QFieldCloud can help make your teams' fieldwork sessions pleasant and efficient.
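As a purely illustrative sketch of the "you could get that via API" remark in the talk: the base URL, endpoint path and auth header below are assumptions, not the documented QFieldCloud API, so check the project's documentation or source code on GitHub for the real routes before using anything like this.

import requests

BASE = "https://app.qfield.cloud/api/v1"   # assumed base URL
TOKEN = "your-api-token"                   # assumed token-based auth

# hypothetical listing of the projects the authenticated user can see
resp = requests.get(f"{BASE}/projects/",
                    headers={"Authorization": f"token {TOKEN}"})
resp.raise_for_status()
for project in resp.json():
    print(project.get("name"), "-", project.get("owner"))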
10.5446/57239 (DOI)
And we are live. Hello everyone and welcome back to the second session of FOSS4G 2021 Buenos Aires. I am Codrina Ilie and I'm going to be chairing a wonderful session on a lot of things relating to mobile apps. Hello everyone and — okay, there we go, apologies. So without any further ado I am going to introduce to you Andrea Antonello. He is an old friend of mine and one of the important contributors to FOSS4G and to the OSGeo community. He has a lot of wonderful and interesting things to tell you about SMASH and Geopaparazzi. So Andrea, you have the floor. All right, thank you very much Codrina. I really wish my mother was here listening to how you presented me, because that was really, really awesome. So I'm going to give you a chat about what SMASH and Geopaparazzi look like these days. And well, this presentation will start a little bit sad, maybe mostly for me but also for many users: we are officially sending Geopaparazzi into end of life. The reason being — well, long story short, for me Geopaparazzi is really like a little kid and it was very important to me, and the problem is that SMASH, the project I will show you a bit later, has really reached maturity and is even more feature-rich at this point. Development is much, much simpler and faster, and well, the company that supports it cannot support two mobile projects, so at some point we really had to make a choice. So what will we really miss? Well, at the time being there are just two or three things. One is the 3D view, but I figured, talking to surveyors and whoever uses Geopaparazzi, that they were not really using the 3D view. It was nice and cool to see, but it's not really used on surveys. And then there's Spatialite. Spatialite is a quite complex project and we had lots of issues building it for Geopaparazzi; then GeoPackage came and started to be used by everyone. So for mobile purposes GeoPackage is perfect, and we implemented that part and left Spatialite behind on mobile. And then there are translations: Geopaparazzi was really translated into many, many languages. They take time and they take your involvement. So what do we gain? First and foremost a modern Geopaparazzi. A new review of the user interface: it's modern, slick, responsive — in my opinion really cool. So I had no big problems stepping into SMASH. We have iOS support, which is quite important also, and it has already shown how many new users it dragged in. And then there is a pile of new features, for example Kalman filters. You can enable filtering in the settings and then, the moment you start filtering, you have both the original GPS log and also the filtered one, and you can switch in the main view between the two different logs. And in case you have situations where the GPS signal is bad, as in this case for two tunnels, you see how the filtering, which is on the right, performs much, much better. We have a nice log profile where for each GPS log you can have a little look. You can tap on this longitudinal profile and it will show you on the map where you are; it shows you some statistics like duration, the duration at every point, the speed at every point. It's really quite nice to look into. And then there is log theming, which I really love. You can theme the GPS logs with a color table and it will draw them with a gradient, and you will see maybe the log colored by elevation, by speed, by slope.
And you can also see the slope, where you're going uphill or downhill. And then there is something I really love — I don't know how many will really find this extremely useful, but to surveyors it is: the on-screen log information. When you are recording logs, it will show you the duration of your log, the distance, and it will show you a little chart, a graph of the last 100 GPS points in the longitudinal profile. Lockdown brought in fences, because where we live, at a certain point you could only walk 200 meters around your flat. So we thought we would add fences, where you can just say, okay, create a fence of a certain radius, and you can enable ring tones or alarms when you enter or exit the fence. And then this is something that Geopaparazzi had, but I want to highlight it because SMASH finally brought it back: contour lines on the offline maps, the OpenAndroMaps, and this is extremely useful to certain types of surveys. So it's a big feature that we got in again. And okay, vector data: the concept changed. If you remember, in Geopaparazzi GPX files were imported inside the database. Now GPX files, shapefiles, GeoPackage, PostGIS — they are all vector layers you can load in a GIS fashion. And all these layers can be styled through SLD, which is a nice, or rather a complex, but a good OGC standard. And they can also be reprojected. Reprojection — how is that done when you drag in a new layer? If the projection isn't supported yet, it will give you an error; you tap it and it will connect to the internet and retrieve the information about the projection. You will then be able to work with it, also in editing mode — it will do reprojection on the fly. GPX layers, since they always have an elevation, can also be themed with elevation, slope, and things like that. And this is important because I, for example, often use GPX layers to prepare for a survey, to walk a path, and that gives a good idea of where I'm going to work. Shapefiles, same as GPX, are supported in read-only mode, and they can also be styled with unique value theming. As you can see here, this looks maybe like a raster file, but it's really a shapefile of many, many polygons that has been themed with unique value categories. So that's quite cool in my opinion. Shapefile is not really the way to go: best is GeoPackage and PostGIS. Those two are supported in read-and-write mode, even if PostGIS at the time being works only in online mode, so you have to be connected to do changes in geometry and table values. Also in these cases you have a simple user interface with which you can change the style, select labels and do some styling stuff. Regarding editing, GeoPackage and PostGIS allow you to select a feature and it will enter editing mode. You have the nice vertexes, you can drag them around, you can add vertexes in the middle, or new vertexes by tapping or by putting them at the GPS position. And you can also edit the alphanumeric values, which by default are presented just in a tabular mode that you can edit. But GeoPackage and PostGIS now also support the forms mode — that, if you are a Geopaparazzi or SMASH user, is what you usually take notes with. You take a note, you open your complex or structured form, and this is now possible also for GeoPackage and PostGIS. And I will show you later how you can prepare your forms for these data formats. Rasters: Geopaparazzi supported MBTiles and GeoPackage tiling. We now also support images with world file definitions and GeoTIFFs.
Both of these also support projections, but mind that the reprojection is just a bounding box wrapping and warping, so for some projections you will probably get strange results. This is for example a JPEG that has been loaded; it's EPSG:32632, it's in a region where there is not so much distortion. Even so, if you look at the border, you will see some of the roads have a slight offset, but this would be quite okay. GeoTIFFs on the other hand are a hassle already on desktop — it's a quite complex format, so you don't have to expect really a lot. But we have been able to load a lot of orthophotos and even GeoTIFF technical maps like this, which usually are compressed in a brutal way and might give problems. So try it out. Most of the time anyway, if you have bigger surveys, you are better off doing tilesets with GeoPackage or MBTiles. This was about the SMASH device-specific features; regarding centralizing surveys, well, we have the survey server. It's been around for a while now and it has been enhanced a lot with SMASH. So you have this map with all your logs and notes on it, and it gives you the possibility to synchronize your surveys from SMASH directly to the server. Team coordinators will also have the possibility to upload data and forms and also projects that the SMASH users — in the lower part, you can see it — can then download. So coordinators can provide datasets and forms for teams of surveyors. One big thing, in my opinion — well, one thing that I really love — is that the default client for the server now visualizes the notes, if you open them, exactly the same way as SMASH does. So you really find yourself at home also from a visual point of view. But what I find extremely cool is that it supports versioning, which means notes that are modified but in the same position are identified as being another version of a note. And that means, even if the same user or a different user, with the same project or a different project, uploads, saves and synchronizes a note at a different time but in the same position, you will get a versioned note. And that means when you open it, at the very bottom you will see a previous button. With the previous button you can browse back in time through your note, and that's quite cool. I was talking about the default client, and that's because recently — well, we have this client where we see the notes very nicely — and recently I talked to Francesco Frassinelli from the Norwegian Institute for Nature Research. And it has been very cool because they have been testing SMASH and the server to map alien species. They made around 2000 notes and they brought the system to its limit, because they have notes that are very large and they surveyed a lot and synchronized them, and the default client wasn't performing well enough in the visualization of these notes. And what they did was actually very cool: they decided to use a different client, and they took just the backbone of the server, without any development necessary from our side — we didn't even know about it — and they attached a different client, which is Apache Superset, a very, very advanced dashboard application and an open source project. And they got something like this, which I find is an extremely cool representation of the Geopaparazzi Survey Server. So just know that you can access the server in different ways. Supporting tools — very important. Excuse me, I'm going very fast because there's a lot of stuff to show and very, very short time.
Supporting tools are necessary when you do this kind of stuff, because you have to prepare data, you have to analyze data, to look at data. We supply a couple of tools in the HortonMachine. The HortonMachine — I leave you to that — is just something you can download and start up; it has executables for Windows, Mac and Linux. And with it you can prepare data like MBTiles. So this is a module where you just add your dataset and it will generate an MBTiles database that you can then run up onto your device. Then you can also take a folder of shapefiles and it will create a GeoPackage out of it. If you style them with SLD — QGIS, for example, also supports exporting the style to SLD — and if you use that, it will generate your GeoPackage styled, ready for SMASH. There is another tool, the DB viewer. It's a simple tool to visualize spatial databases like GeoPackage, H2GIS and PostGIS. With that, you can create a new GeoPackage — it will be empty — and you can just right click and say, well, create me a table from a shapefile. It will take the schema, it will ask you if you want to change the name, if you want to change the projection, and it will create a table with it. Once you have the table, you can just import data from the shapefile. It is visualized as you can see: you click on the table and you will see the content, you see the geometries, you see whatever you need to see. If you right click and need to style them, there will be an "open in SLD editor" option. When you click that, it will open the SLD editor, which is a standalone application, but here it is opened directly on that layer. From here you can do some simple styling, but you could also right click on an attribute of your table and look at some statistics and decide: please generate for me a themed style, where you can then go through the rules and maybe change some colors or do some fancy stuff. This is then supported in SMASH also. Tilesets — that was vectors, now about tilesets: in the same DB viewer, you right click on the GeoPackage and you can say, okay, import a raster map to tileset. So you can load a GeoTIFF, it will cut it into pieces and create the tileset layer for GeoPackage. And if you have shapefiles that are styled in a very advanced way, which is not supported by SMASH, you can still decide to import vector to tileset, which means that on the desktop it will properly interpret the whole styling and it will generate the tileset.
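If you prefer scripting over the HortonMachine GUI, a hedged alternative sketch using GDAL's Python bindings can prepare similar datasets for SMASH; file names are placeholders and this is not part of the SMASH toolchain itself:

from osgeo import gdal

gdal.UseExceptions()

# vector: convert a shapefile into a GeoPackage
gdal.VectorTranslate("survey.gpkg", "parcels.shp", format="GPKG")

# raster: GeoTIFF -> MBTiles tileset, then add overview levels for smooth zooming on the device
ds = gdal.Translate("basemap.mbtiles", "orthophoto.tif", format="MBTILES")
ds.BuildOverviews("AVERAGE", [2, 4, 8, 16])
ds = None  # close and flush to disk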
All the new features of the digital field mapping app SMASH. All you should know about the future of the Geopaparazzi project. If you are a surveyor, that's the right talk for you. For over a decade Geopaparazzi has been one of the few digital field mapping apps of the OSGeo firmament. After that many years in use a natural evolution happened and led to SMASH, a more user-friendly, modern, faster to develop and cross-platform app for the eyes of iOS, Android, but also macOS and Linux users. In a few years SMASH has covered the featureset of Geopaparazzi and is moving forward quickly. GeoPackage and PostGIS editing support, Kalman filter on GPS logs, geo-fences, native GeoTIFF and shapefile visualization support, SLD styling for vector datasets – are some of the features that were added, that Geopaparazzi doesn’t have. The Survey Server has been redesigned with the same technology used by SMASH and now has the ability to visualize data in the same look and feel as the mobile app. Notes serverside-versioning has been introduced to enhance synchronization of data by teams. A Redmine plugin is being developed by community members to create a geo-ticketing system. This presentation gives an insight into the state of the art of the SMASH and Geopaparazzi projects and their current roadmaps.
10.5446/57241 (DOI)
Good day everyone and welcome to our morning session here in Cordoba. So we have a few interesting and exciting presentations from Even and Angelos and Astrid and finally Vaclav on some popular projects. Our first presentation is from Even Rouault and he will speak to the state of GDAL; he will present the latest updates on the GDAL community as well as new features, drivers and tools of the new versions. Even is the chair of the GDAL Project Steering Committee, the manager of Spatialys, which is a consultancy specialized in free and open source geospatial software development, and he is obviously very familiar with GDAL, PROJ, MapServer, QGIS and so on. Without further ado, I will move it over to Even. Thank you Tom. So welcome everybody. My name is Even Rouault. I'm an independent free and open source software developer mostly focused on GDAL, MapServer, PROJ, libgeotiff and Q— I'm sorry, I have an issue with the sound coming back to me. You might have to mute your Venueless connection. Just mute the tab, I think. Sorry for that. Okay, it's better. Okay, so I will give a quick update of what happened in the GDAL project during the last two years, since 3.1, and I will talk a bit about the future direction. So what's GDAL in just one slide? First, GDAL stands for the Geospatial Data Abstraction Library. This is the black box you often use without realizing it if you want to read or write geospatial formats in most C++ open source or closed source GIS software. As of today it handles more than 250 different formats and as of recent years it also handles network protocols and services. It is delivered with an application programming interface for C++ and other languages such as Python, Java and C#. It comes with command line utilities to inspect file contents, perform file format conversion, image reprojection, rasterization, vectorization and many other operations. It uses an MIT/X open source license, which is super permissive, and we release a version with new features every six months and bugfix releases every two months. So given that the last state-of-GDAL talk was two years ago, I now have three releases to sum up. In 3.1 we added a new dedicated driver to generate Cloud Optimized GeoTIFF files, so typically you now just have to use a single gdal_translate invocation. There have also been a number of improvements in the internal layout of COG files, and the GeoTIFF reader has been enhanced to reduce the number of HTTP GET requests needed, in particular if you use JPEG compression with a binary transparency mask. If you're curious about the details of cloud optimized formats such as COG, you can see a later talk about that today given by Pirmin Kalberer. A major work in this version was also the addition of a new API to read and write datasets that contain multidimensional arrays. By multidimensional we mean something more than 2D, such as XYZ or XYZ plus time. This is mostly of interest for formats such as NetCDF, HDF4, HDF5 or GRIB that are naturally multidimensional. Apart from that, this new API has also been added to the in-memory and VRT drivers. So with the VRT driver you can create a virtual multidimensional array from different sources that can themselves be multidimensional or just classic 2D files such as GeoTIFF. There are two new command line utilities, gdalmdiminfo to inspect the content and gdalmdimtranslate to convert or subset between formats. And in the future GDAL 3.4 version we will also have a new driver for that — I will speak about that a bit later.
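A small sketch of driving the new multidimensional utilities from Python; the NetCDF file and array names are placeholders, and the arraySpecs keyword is my assumption of how the command-line -array switch is exposed in the bindings:

from osgeo import gdal

gdal.UseExceptions()

# equivalent of gdalmdiminfo: describe groups, dimensions and arrays of a multidim dataset
info = gdal.MultiDimInfo("ocean_temperature.nc")
print(info)

# equivalent of gdalmdimtranslate: extract a single array into another container
# ("arraySpecs" is an assumed keyword mirroring the -array option)
gdal.MultiDimTranslate("temperature_only.nc", "ocean_temperature.nc",
                       arraySpecs=["temperature"])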
Other improvements: we have new drivers like EXR for high dynamic range images, one for geoid models and another one for the proprietary format by RIEGL. I'd like to draw attention to the FlatGeobuf vector driver, which handles reading and writing a new format that is optimized in particular for cloud access. It uses a Hilbert R-tree to enable fast bounding box spatial filtering, and it is also mentioned in the talk I referred to in a previous slide. MapML is a new candidate specification at W3C for a standard inclusion of maps in hypertext documents; this driver handles the geometry part of the specification. gdalwarp has been improved to be able to directly create output in formats like PNG or JPEG that naturally don't support random writing, which was a requirement previously. And we also have gdal_viewshed, which is a new utility to compute viewsheds or intervisibility. Here you have an example of that: an observer is set on the Montmartre Sacré-Cœur Basilica in Paris and it uses the default settings of the utility. In green you can see what the observer can see and in red what is masked. You can tune the observer position and height, and you can also tune the curvature coefficient, which takes into account atmospheric refraction, so you might have to tune it depending on the wavelengths considered, whether visible or radio. It is also now possible to write read-only vector drivers in Python. This can be used mostly for quick prototyping or conversion of custom or occasional file formats, and we have a few examples, for example using the CityJSON specification. The OAPIF driver, which stands for OGC API - Features, has been updated to the 1.0 core specification. The GeoTIFF driver has been improved to fix a long-standing performance issue when creating internal overviews on large files. It has also been updated to support the OGC GeoTIFF 1.1 specification. For people who complain about shapefile being a multi-file format and the issues that can cause, the shapefile driver has been updated to be able to create and update zipped shapefiles. It works, of course; you may experience some slowness because there is compression involved. For people liking NetCDF, we now also have write support for the geometry part of the CF convention. GDAL 3.2 received a new driver to support the not yet finalized OGC API for Tiles, Maps and Coverages. It was based on the state of the specification at the end of last year, so it might require some adjustment as it advances towards finalization. We also have a driver for the cache format that is used by Esri ArcGIS. We have a driver to read HEIF and HEIC files. Dutch people will be happy to learn that we now have a driver to read the cadastral vector format that is used in the Netherlands. The new utility gdal_create has been added; it's a simple utility with which you can just initialize a blank raster file from its size, its extent and the value to burn into it. Other improvements: we now have multi-threaded overview computation. We have almost a factor-of-2 speed-up for deflate compression; this requires a quite recent libtiff built against the libdeflate open source project. The COG driver can now generate tiles that are aligned with a well-known tiling scheme, such as the Google Mercator one. One can use a COG file for example as a caching backend for a tile server, where each tile of the tiling scheme will correspond to a GeoTIFF tile.
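A minimal sketch of the aligned-tiling idea just described, producing a COG snapped to the Google Mercator tiling scheme (input and output names are placeholders):

from osgeo import gdal

gdal.UseExceptions()
gdal.Translate(
    "aligned_cog.tif",
    "source_ortho.tif",
    format="COG",
    creationOptions=[
        "COMPRESS=JPEG",                       # small lossy tiles, fine for imagery
        "TILING_SCHEME=GoogleMapsCompatible",  # snap internal tiles to the web-mercator grid
    ],
)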
The OpenFileGDB driver has received a very welcome enhancement with support for reading spatial indexes, so now it's really competitive with its proprietary equivalent for all reading operations. If you use bathymetric datasets or astronomical data, you'll benefit from improvements in the BAG and FITS drivers. The NITF driver has also been improved to decode the specific metadata segments that are used in a new profile for multispectral and hyperspectral datasets. The vector feature model has been extended to support unique constraints and alias properties, which are used in a number of formats. And people still relying on the old way of importing the Python bindings will have to upgrade to the new way, which is actually 10 years old — so you have no excuse for not having updated yet. In GDAL 3.3 we added the STACTA driver, which stands for SpatioTemporal Asset Catalog Tiled Assets. This is an extension of the STAC specification. There are several talks in the conference about STAC, so I invite you to watch them to get more details about STAC. This particular driver is a kind of WMTS in STAC formalism, and you can use quite large metatiles, not just simple PNG or small JPEG files, so it can be very convenient to have both the advantages of COG and tiling. We have a new virtual file system for the Azure Data Lake Storage Gen2 file system. You can also now store your GDAL configuration options in a configuration file, which will be loaded when GDAL initializes. We have enumerated, range and glob field domain support for the FileGeodatabase drivers and GeoPackage. And the Python utilities are now available in the gdal-utils Python package, and they can be used as Python callable functions. We have also deprecated and removed a few things. First, Python 2 support was dropped in favor of Python 3.6 and above. A few quite esoteric drivers have been completely moved away from the main repository to an auxiliary one, where they can be built as plugins if you really need them. And we have also marked a number of drivers as deprecated and for removal in GDAL 3.5, unless we hear solid reasons to keep them. All of this is done in an effort to limit the continued growth of the code base and to be able to add new code for topics of more current interest. And to finish on the feature side of things, a small preview of what will be in GDAL 3.4, which will be released this November. We will have a new driver, STACIT, which stands for STAC Items, which uses the projection extension specification, so we can build a virtual mosaic from each image that has published information regarding its projection and extent. And as I mentioned before, we will also have a new driver for Zarr, which will support the V2 specification, the one widely used, and also the experimental Zarr V3 specification. It supports both the classic 2D GDAL API and also the multidimensional one, and it will be optimized for multi-threaded decoding. Now, going on to more organizational topics, there's a big news item that you have probably heard about. To make it short, we have set up a sponsorship program to help fund maintenance activities. And maintenance here is to be understood in a broad sense; it encompasses all activities that are of general interest for the project but are typically hard or impossible to fund through usual funding sources. As you may guess, GDAL is a large code base, about 1.5 million lines of code.
We do have a regular flow of contributions by many people, but it is underpinned by a relatively small pool of people that have a role of maintainer, which causes a sustainability problem. And GDAL is really used by a number of other geospatial software packages, open source or proprietary — we have a page that lists more than 100 of them. So it's really a core foundation of geospatial technology and its well-being is quite critical. So we have approached NumFOCUS, which is a US charity, to be our fiscal host. NumFOCUS hosts a number of well-known projects such as NumPy, pandas, SciPy, conda-forge, xarray, Dask and many others. And for other purposes, GDAL will remain an OSGeo project. This initiative was really successful, as we have managed to raise about $300,000 per year, and many of the donors have made pledges for several years. So this will help secure funding for several regular maintainers and increase the bus factor of the project. Currently we have two part-time maintainers, myself and Nyall Dawson. We will be able to address activities such as bug fixing and triaging, timely review of pull requests, maintaining the continuous integration setups, making the needed changes to adapt to updates in the upstream dependencies, addressing security fixes and doing release management. One topic that we will probably start soon thanks to this program is the addition of a CMake-based build system that will ultimately replace the existing Unix and Windows build systems. And another piece of good news is that with the sponsorship program we will be able to also sponsor enhancements in upstream projects like PROJ, GEOS and libtiff that are core foundations for GDAL. So I've put at the bottom of the slide the links to the two documents that explain the governance rules of the sponsorship program. If we have a look at our sponsors, you can see that we've managed to attract interest from three major cloud providers, two major satellite imagery providers, and a number of big, small and medium enterprises in the GIS industry. So many thanks to them for their support. And I would also particularly like to thank Chris Holmes and Paul Ramsey, who have helped a lot setting up this initiative together with the support of the GDAL project steering committee. Thank you for your attention and I'm happy to answer any questions you may have. Great. Thank you, Even. If anybody has any questions feel free to put them forth in the chat and we'll be sure to relay them. Maybe I'll start off with a question. So it's exciting to hear about the sponsorship news and the sponsorship program — how can new sponsors approach the project? So on the gdal.org website we have a new page that is dedicated to sponsorship, and we have an email address where people interested in sponsoring can contact us, and we will put them in contact with the staff at NumFOCUS who will take care of the details of setting up the sponsorship. That's great. And in terms of influence, what sort of value proposition, what kind of influence would the sponsors have on the project as a result of their contributions? That's a good question. The sponsorship, when sponsors give to the project, is intended to be non-directed. That is, the GDAL project steering committee will remain responsible for deciding what will be done. However, we have put in place a structure where the sponsors will be able to explain their use of GDAL and their ideas, and we will take that as input for any decision on the project we may have.
So basically the governance of the project will remain quite similar to what it is today. We will continue to receive input and contributions from the whole community, and sponsors will also be able to give their input, and we'll try to manage that as best as we can. Cool. Just looking to see if we have any questions from the audience. One more question on my side at least: you mentioned CMake, which is encouraging — is there a timeline on when this will land in master? So first we will have to put together a request-for-comment document for approval and comment by the community. The timeline I have in mind is a first capability in GDAL 3.5, so in May next year, which will be mostly for developers so they can start testing, but probably not ready yet for production. In GDAL 3.6 it should be really close to being production ready and we will probably officially deprecate the existing build systems, and GDAL 3.7 will only keep CMake — so that's my current idea of how it will go. A few more questions now. For the COG driver: a user wants to do cubic overviews on two bands but nearest on the third band; regenerating overviews works but starts emitting a warning about the optimized layout being broken. Is there any way to do this cleanly? Probably a detailed question, but let me look at it. Not really, because if you regenerate overviews it will indeed break the optimized layout of the file, so I don't really have an easy solution for that — it's not super typical. Maybe the solution would be to create a regular GeoTIFF file using those different resampling methods and then use the option in the COG driver to use existing overviews, to avoid them being recomputed. That should work, I think. Another question: how do you decide who will benefit from the sponsorship program? For now it has been an invitation from the GDAL PSC. We will see how it works; if it's really useful, then probably at some point we will open a call for proposals to the community — people have ideas that they want to be funded — so we will probably have such a mechanism. Another question on the COG driver in relationship to GeoTIFF: what are your thoughts on when we should use plain old GeoTIFF as opposed to COG? COG is mostly plain old GeoTIFF, so there is no real disadvantage in using a COG file — it will work just the same. COG just makes sure that some things are put in the right place in the file for it to be in the most optimal layout for cloud usage, but there is no real drawback in using a COG file in a non-cloud context. Another question: when do you see Python 3 support being potentially dropped? Python 3.6? I don't know, I don't think there is any pressing reason for now to drop it; we haven't thought about that yet. Another question: does the line-of-sight prediction take into account any additional atmospheric refractivity information aside from just the wavelength of observation? I'm not sure I can answer that question — I was not the author of this utility, it was written by Tamas Szekeres. I just know there is this curvature coefficient that you can pass as input to the utility. Okay, great. I think that's all of the questions. Even, I'd like to thank you for an always interesting presentation. I'm glad to see the exciting news and developments in GDAL. And thank you very much for joining us here and have a good rest of the week. Thanks. Thanks. Great. Thank you.
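To make the overview answer above concrete, here is a hedged sketch of the suggested workaround — build overviews on a regular GeoTIFF first, then convert to COG while reusing them. It only covers the "use existing overviews" part, not the per-band resampling mix, and the OVERVIEWS creation option value is taken from the COG driver documentation as I recall it:

from osgeo import gdal

gdal.UseExceptions()

# 1) build overviews on a plain GeoTIFF (pick your resampling here)
src = gdal.Open("mosaic.tif", gdal.GA_Update)
src.BuildOverviews("CUBIC", [2, 4, 8, 16])
src = None

# 2) convert to COG and reuse the overviews instead of recomputing them
gdal.Translate("mosaic_cog.tif", "mosaic.tif", format="COG",
               creationOptions=["OVERVIEWS=FORCE_USE_EXISTING",
                                "COMPRESS=DEFLATE"])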
We will focus on recent developments and achievements in recent GDAL versions. In particular, new drivers such as FlatGeoBuf, Cloud Optimized GeoTIFF, EXR, HEIF, OGC API Tiles/Maps/Coverage, STAC Tiled Assets or the infrastructure to write vector drivers in Python. We will also present the multidimensional raster API and its tools. New utilities like gdal_viewshed will be introduced. The state and health of the community and its challenges will also be covered.
10.5446/57242 (DOI)
I'll go to the background and I'll catch up later. So the floor is yours. Thank you. Okay, thank you. Hi to Tom and Angelos. Good afternoon to everyone, good morning — it's night here, it's 9pm in Italy. So I'm going to speak about the state of GeoNode, the latest release and the ongoing development. As was just said, I'm Alessio Fabiani from GeoSolutions, and we prepared this presentation along with Giovanni Allegri, who is also part of the Project Steering Committee. And I also have to thank a lot the other three members, which are Francesco Bartoli, Florianne Gallenbaum — it's always difficult to pronounce — and Tony Sospringer, who are also part of the PSC. Also, of course, I want to thank a lot the pycsw contributors, because as you might know GeoNode relies a lot, for the metadata part and the catalog services, on pycsw. So we are waiting for more updates, more releases, and seeing if we can somehow bind the two frameworks a bit more together. Okay, so GeoNode — a very quick introduction. It's basically a web framework written in Python, currently based on Django, which aims to make it easier for non-GIS-expert users to publish spatial datasets. From behind the web interface you can upload simple spatial datasets — shapefiles, GeoTIFFs, stuff like that — but under the hood it relies on a very powerful geospatial server, which is GeoServer, so it potentially supports every geospatial dataset that is supported by GeoServer. The thing is, GeoNode would like to provide a user-friendly interface and also add a sort of simple — I would say, with respect to pycsw and GeoNetwork, a very simple — layer of metadata. Mostly it wants to make it easy to share the data between the different users, create maps, do styling; there are several editors that you can access directly through the web interface that do not require, I would say, a deep knowledge of geospatial standards and OGC standards. Okay, let's move on with the presentation. I'm going to present you the latest updates on GeoNode; I'm not focusing on the, let's say, standard things that GeoNode does. Okay, just a few words about us: our company is actually located in Italy and also in the USA, and we provide support for these main open source projects — GeoServer, MapStore, GeoNode and GeoNetwork — but also, of course, we strongly support open source in general, we actively participate in the OGC working groups and we support standards critical to GEOINT. Okay, so let's move on with GeoNode. This is just a quick overview of the current release. The latest stable release of GeoNode is version 3.2.1, which was released in July, but we are almost ready to ship — hopefully during mid-October — the other two releases, 3.2.2 and 3.3.0. Not yet 4.0.0, because as you will see during the presentation it's still under development, with huge changes especially at the architectural level, so it is not yet ready to be released and shipped as a stable product. Yes, here there's a list of the releases of GeoNode since August 2010. Okay, so which is the difference between the two branches, at least the 3.x branches? As I said before, the current stable release is 3.2.1; the stable branch is 3.2.x, which will very soon become the 3.2.2 release.
Here we try of course to keep the product as stable as possible, so we backport only the major fixes, regressions, blockers or translations, so things that have been tested and that do not impact or easily change the development and the code of GeoNode. In the maintenance branch, the 3.3.x, which will become the 3.3.0 release, we are also backporting a few of the most stable, well tested features that have been introduced in the development branch, which is the 4.x branch, the master branch actually. The architecture of version 3.3 is backward compatible with the 3.2.x, of course. So here you will have not only the fixes that have been backported to the 3.2.x but also some new interesting features and improvements that I'm going to show in the next slides. Okay, so let's start with some warnings, some breaking changes introduced since release 3.2.0, which of course are still present in the next releases. The first one is the bump to PostgreSQL version 13. So whenever you developed or deployed your GeoNode instance by using Docker on an older version of PostgreSQL, you might need to make a dump and restore of the database in order to switch to the newest version. You can still use the old version, nothing prevents you from keeping the old versions; there's no issue as far as I know. Just be careful, if you want to upgrade your node, about this breaking change, an important breaking change. The MapStore client version has been aligned to the versions of GeoNode. By the way, the MapStore client is now the default GIS client of GeoNode. So for the branch 3.2.x you will have a branch of the MapStore client 3.2.x and so on; each branch has its corresponding branch on the MapStore project. This is because the MapStore clients are not backward compatible with the other versions, so you will need to keep using the same version associated to the version of GeoNode. The base model of the 3.x train and above has been changed. The base model now introduces the bounding box as a geometry, as a spatial geometry. There are migrations in Django that automatically convert the old resource bases into the new geometry, but you will need to be careful anyway, because there might be some issues, hopefully not, so if you plan to upgrade, be careful about this change to the model. There was also a general cleanup: sadly, we had to remove some components because they had not been supported for a long time. Basically no one could say what their status was and most probably they don't work with the newest version of GeoNode. So as a PSC we decided to remove that stuff from GeoNode just because there was no support at all; it could create confusion in the community and potentially also issues that we could not fix in any case. So we had to remove the support for the GeoNetwork catalog, for the QGIS Server backend and for SPCgeonode. There were some adjustments to the advanced workflow of GeoNode. It's basically a set of settings that allow you to take more control on how the resources are published into the web framework. Basically you put some constraints on the registered users, so whatever they publish must go through an approval workflow: you will need a manager or some superuser to validate the resource before actually making it public to the other users. So there were some adjustments to the logic in particular.
The most important one is that whenever a user publishes a new resource and the advanced workflow is activated, the resource of course is not public, but GeoNode will automatically assign some editing rights, some editing permissions, especially on the metadata, also to the managers of the groups the user belongs to. So GeoNode will automatically give those managers the permissions to edit the resource and so on. The resource will then be approved by a manager, and only a superuser can publish the resource. At that point, in this stage, so from approved to published, it won't be possible to edit the resource anymore; you will need to ask for permission to edit the resource. Basically GeoNode will send a message to the administrator, and the administrator, in order to allow the user or the manager to edit the resource again, must unpublish or unapprove the resource back to the previous stage. So this is the main change that has been made to this logic. From version 3.2.1 there's also a major change that could somehow break current deployments: the introduction of Celery beat workers. Basically it's not a big deal, it's just a matter of changing the configuration of the Celery worker, in particular the scheduler: instead of using the scheduler based on the file system, you will need to use the database one. So it's just a matter of changing a parameter when you start Celery. Okay, so what are the main features of the new release? There was a general cleanup and speedup of the code of the core: we focused on trying to remove as much as possible the hard-coded stuff, tried to reuse as much as possible an object-oriented style of coding instead of huge scripts or big functions all developed inside the Django views. We also added some hardening procedures in order to prevent errors, or at least tell the user where an error is and why the error happened, instead of just throwing an exception here and there. In this new release the generation of thumbnails has been completely refactored. Before, it relied on GeoServer, so it was consuming a lot of resources on the geospatial service. We removed this custom module on GeoServer, so now GeoNode uses in practice only official modules from the GeoServer packages, and the thumbnail generation is done completely at the Django level, at the GeoNode level. There are a lot of improvements for the thesauri and the controlled vocabularies: now you can import more RDF-based thesauri and you can enable more than one thesaurus in order to allow the users to insert and search keywords directly from the thesauri and the vocabularies instead of just adding free-text keywords. There is also the possibility to create some themes, so as an instance you can enable an INSPIRE-based thesaurus or a custom thesaurus of some other kind and you will see on the interface the different thesauri divided and grouped by category. There were a lot of improvements on the documentation side, thanks to Tony for this big work. And there are also a lot of improvements on the permission assignments. Now it is also possible to assign permissions to remote service resources; before, whenever you added a remote service resource, it was not possible to change the permissions of that resource, and now it is. The libraries have been updated. Moving a bit more quickly because we are running out of time: this is the new improved MapStore JS client.
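As a rough, hedged illustration of the two configuration changes mentioned above (enabling the advanced approval workflow and switching Celery beat to the database scheduler), a GeoNode settings sketch might look like the following. The setting names are assumptions based on the talk and should be verified against the GeoNode documentation for your release.

```python
# Hypothetical excerpt from a GeoNode local_settings.py
# (setting names are assumptions; check the GeoNode docs for your version).

# Advanced workflow: uploads by registered users must be approved by a
# group manager and published by a superuser before becoming public.
RESOURCE_PUBLISHING = True
ADMIN_MODERATE_UPLOADS = True

# Celery beat: use the database-backed scheduler instead of the
# file-system one, as described for GeoNode 3.2.1 and later.
CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"

# Roughly equivalent command-line form when starting the beat process
# (the app module path is a placeholder):
#   celery -A geonode.celery_app:app beat \
#       --scheduler django_celery_beat.schedulers:DatabaseScheduler
```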
There are a lot of improvements, especially on the visual style editor, but there are also a lot of options for the map, a lot of tools, and there are annotations that you can save on the map. There is now, also within GeoNode, the possibility to create what we call GeoStories, which are basically a bit more than a map. It's a way to create storytelling by using a mix of static media contents and geospatial data. It is possible to modify, directly from the GeoNode interface, the output of the GetFeatureInfo templates, so basically you can turn some properties of the geospatial dataset into images, audio streams, videos, or URL links. There are a lot of improvements for what we call the REST API version 2. Basically we abandoned a bit the old-fashioned API of GeoNode based on TastyPie, and we introduced this new REST API based on the dynamic REST framework, which also allows us to perform complex filtering directly through the REST endpoints. A lot of improvements have been made to the upload part, especially on the asynchronous management of the uploads, which is much more stable now, and it is also possible to resume some uploads. So if you leave the page, you can resume an upload later on and you won't lose the dataset that you just uploaded. There is the possibility to append data to a vector dataset directly from the interface: of course you need to use a schema compatible with the original one, but rather than completely replacing the vector dataset, you can also append new vector features by uploading a shapefile. There are some improvements to the representation of the legends, especially on the maps, so you will see, here on the detail page and also on the map, the legends of the layers, of the overlays present on the map. What's new on the development branch? The architecture has been completely revised, as I said before. We introduced fully asynchronous API interfaces to what we call resource managers: basically we centralized the way you configure the resources in GeoNode, so instead of relying completely on the Django signals like before, there is now a module, which is pluggable of course, that basically allows you to perform any operation against the resource. So you can use this module not only to create, update or delete a resource, but also to ingest a new resource, to replace a resource, or to append to a resource. It's a sort of centralized component that includes all the logic to manage and handle the resources. The client, the interface of GeoNode, has been completely revised; I'm speaking now of the development branch, so GeoNode version 4.x. The interface is basically a single page application, so you won't need to change page; the new client uses the REST APIs in order to perform operations and ask GeoNode to filter the datasets, show the datasets and so on. Also, the operations are almost all asynchronous, so whenever you, as an instance, try to, let's say, update or delete a resource, you don't need to wait for GeoNode to finish the operation. The client will just lock the resource and show you that the resource is currently being deleted or updated or whatever, and when it finishes, the resource will be ready again. So you can continue working in the meantime. And the dashboards have also been introduced.
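To give a concrete feel for the new REST API v2 and its dynamic-rest style filtering mentioned above, here is a minimal sketch of a query from Python. The base URL is a placeholder, and the exact endpoint path, filter syntax and response keys are assumptions drawn from the talk rather than an authoritative reference.

```python
# Minimal sketch of querying GeoNode's REST API v2 from Python.
# Base URL, endpoint and filter names are illustrative assumptions.
import requests

BASE_URL = "https://my-geonode.example.org"  # hypothetical instance

# dynamic-rest style filtering: resources whose title contains "rivers",
# paginated ten items per page.
params = {
    "filter{title.icontains}": "rivers",
    "page_size": 10,
}

resp = requests.get(f"{BASE_URL}/api/v2/resources", params=params, timeout=30)
resp.raise_for_status()

# Print primary key and title of each matching resource, if any.
for res in resp.json().get("resources", []):
    print(res.get("pk"), res.get("title"))
```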
The dashboards are another feature. It's something a bit different from a GeoStory. It is, as the term says, a dashboard, so you can basically have on it elements like maps, widgets, counters, charts of several types, media contents and so on. And that's it, I would say. Thank you for the time. Thanks very much Alessio for bringing us up to date. There are very interesting developments here. There were no questions in the chat, but I have a question. You talked about a new feature called GeoStories, what is it exactly? Yeah, yeah, basically let me show you quickly. As an instance, if you go to the stable GeoNode demo, there are a few examples of those GeoStories, so if you click on apps here, you can see a few examples of what a GeoStory is. It's basically a sort of, let's say, storytelling that goes from the top to the bottom and allows you to mix together media content, static content and dynamic content, using not only the resources that have been uploaded and created in GeoNode but also external resources. So basically you can do something like this: by scrolling down you can tell about your project, you can show videos of what you have done, describe the things, zoom into the maps in order to show people what you have currently done and so on. Very powerful. It's also a kind of data journalism you can do, and maybe if I have a couple of hikes with GPS tracks and photos, can I make a GeoStory with that? Yeah, sure, sure. As I said before, you have choices in how to add contents to the GeoStories. You can either upload the resources from your machine through GeoNode (and GeoNode by the way now supports different types of media, not only static images or PDFs or documents, but also videos, audio files and stuff like that), and in that case the GeoStory will bring the content automatically from the GeoNode catalog; otherwise you can link to the GeoStory some external resources taken from the web. Okay, that sounds very good. Sorry to interrupt you, in the meantime two questions came on the board. I'll show the first question, you can read it. Can we manage raster time series data within GeoNode as a collection? We have one minute; if not, is this planned for the future? Okay, so there was a functionality developed a long time ago for raster time series which was abandoned because there was no support for it. Currently, for sure, it is possible by creating the time series on GeoServer and then configuring the time attribute on GeoNode. So if you do that, let's say, semi-automatic procedure, GeoNode will be able to support that kind of data. It would be nice to resume this old functionality that, back in the days, allowed you to choose the time series during the upload phase; you could choose the time series that you wanted to update, so basically you could directly and automatically add granules to an already existing image mosaic. Okay, thank you. I'm sorry to interrupt you. Oh, yes, sorry. We were one minute over time and there was a question asked two times: when will 4.0 launch? Everyone's waiting for it. Yeah, I can't. Give us a date. Yeah, no, I cannot. Three times. I cannot answer this question now because it is still really unstable. Hopefully at the beginning of next year, but I cannot promise anything actually. But it's usable.
I mean, if you want, you can use it. Okay. Well, thanks again, Alessio, and of course all the other developers of GeoNode, and I'm eager to try it out. So we'll bring in the next speaker. Thanks and we'll talk again. Bye.
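To make the "semi-automatic procedure" from the raster time series question above a little more concrete, a hedged sketch of enabling the TIME dimension on an existing GeoServer image mosaic through the GeoServer REST API could look like this. The workspace, store and layer names and the credentials are hypothetical placeholders, and the GeoNode-side time attribute still has to be configured separately afterwards.

```python
# Sketch: enable the TIME dimension on an image mosaic coverage via the
# GeoServer REST API. Names and credentials below are placeholders.
import requests

GEOSERVER = "https://my-geonode.example.org/geoserver"
AUTH = ("admin", "geoserver")  # hypothetical credentials

coverage_url = (
    f"{GEOSERVER}/rest/workspaces/geonode/coveragestores/"
    "my_mosaic/coverages/my_mosaic.xml"
)

# Standard GeoServer dimensionInfo payload for a LIST-style time dimension.
payload = """
<coverage>
  <enabled>true</enabled>
  <metadata>
    <entry key="time">
      <dimensionInfo>
        <enabled>true</enabled>
        <presentation>LIST</presentation>
        <units>ISO8601</units>
        <defaultValue>
          <strategy>MAXIMUM</strategy>
        </defaultValue>
      </dimensionInfo>
    </entry>
  </metadata>
</coverage>
"""

resp = requests.put(
    coverage_url,
    data=payload,
    headers={"Content-Type": "text/xml"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
print("Time dimension enabled, HTTP status:", resp.status_code)
```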
This presentation provides a summary of the new features added in the latest releases of GeoNode, together with a glimpse of what we have planned for next year and beyond, straight from the core developers. GeoNode is an open source framework designed to build geospatial content management systems (GeoCMS) and spatial data infrastructures (SDI). Its development was initiated by the Global Facility for Disaster Reduction and Recovery (GFDRR) in 2009 and adopted by a large number of organizations in the following years. Supported by a vast, diverse and global open source community, GeoNode is an official project of the Open Source Geospatial Foundation (OSGeo). Using an open source stack based on mature and robust frameworks and software like Django, MapStore, PostGIS, GeoServer and pycsw, an organization can build its own SDI or geospatial portal on top of GeoNode. GeoNode provides a large number of user-friendly capabilities, broad interoperability using Open Geospatial Consortium (OGC) standards, and a powerful authentication/authorization mechanism.
10.5446/57245 (DOI)
How long can you stare making the impression you're just an image? Okay, seems like I'm here now. Where am I? Why am I not in Buenos Aires? This is such a pity. So we'll have to do it this way. So sorry. But we'll have our beers; we'll have them on our own, because of, you know. But what to do? This is just what we can do. And I brought you a little song so it's not so sad. Here's a little song I wrote. You might want to sing it note for note. Don't worry. Be happy. So hopefully we can be happy and we don't have to worry too much. But you know how it is these days, it's not easy. Let's have a look at the cloud and how it devoured open source and then choked on free software. These are thoughts on the sustainability of open source software projects in general. A talk by Arnulf Christl, also known as Seven. What are my roots? Well, my roots are actually the Borg. We will assimilate you. This is how I started doing open source. In OSGeo I'm a co-founder, ex-director and ex-president, and then in FOSSGIS Germany I'm a co-founder and a regular speaker. And I'm an entrepreneur, so I actually make money doing open source and helping other people do open source. The roots: this is where I come from. It's a Borg cube, and this is about assimilation. And it worked. And I'm doing everything; everything is now open source, basically. So, the Open Source Geospatial Foundation. I think there are enough talks about this, I don't have to go into much more detail. I was here with incubating GRASS and PostGIS and Marble and the GeoTools incubation and some other things. And I was with OSGeo from 2006 to 2012, so six years of my life I spent building this thing. And now I'm a little less here, but because, look at this, isn't it nice? It's not as nice as being in Buenos Aires, but it looks a little like Disneyland or Fantasialand, I don't know. This is the location of Marburg, where I started my studies many, many years ago. And this is going to be the location for the next German language FOSSGIS conference. So what are my affiliations? Let's have a look at terrestris, which is one of the companies that I work for, as a consultant and project manager. And if you look at it, it's one of those small, medium sized enterprises putting maps on the web, having their own open source projects, a third dimension, mobile maps. And if you look at the company, it seems like they put people at the top, which I think is a great, great thing. And if you look at the staff, you have the CEOs and project managers, application developers, data analysts, and also the senior consultants. And you know this guy; this guy is one of the ones who are on the board of officers right now. And if you look at, oh, where is he? Oh, he's here; he's an advocate for good food. Okay, my own company, we're working with Mectar Maps, and you can already guess maybe this is one of the reasons why I'm talking about sustainability of open source projects. What we do is we provide services, software development and implementation. We do Scrum, as everybody seems to be doing these days. We do it also for the enterprise. That looks a little interesting, with lots of product managers, architects, engineers, and, you know, those guys, that's us, that's the software developers. So it seems like they're not so important anymore in the big picture, but I think they are. And if you look at the right mindset, then this is the Manifesto for Agile Software Development. And we put individuals and interactions over processes and tools.
And customer collaboration over contract negotiation. And we respond to change more than we follow a plan. This doesn't mean that we don't do any of these things, but we think that comprehensive documentation is good, while working software is just more important; we value this a lot more. We'll come back to this, especially about the people. Another company in this room is mundialis, with raster data, and they have this fantastic satellite image of the month where you can read, obviously, today about Buenos Aires and Argentina, and it's also available in English. And if you look at the company, then again, there is a team. And in the team you have all the nice people who work on, look at this, on the GRASS stuff, and we even have dogs working for us, which is fantastic. Here is Markus Neteler, who is like the GRASS guru of the second world. And he, again, seems to be a multiple entrepreneur, if you ask me. So what about open source and SaaS? What is SaaS? Well, this is SaaS: you enter your postal code here, and if you enter it correctly you also get a result, and then it will tell you whether at your place you can get fast internet. And yes, for this one here, if you're the owner of this place, then you can actually get a thousand megabits per second. And this is what we're working on for Deutsche Telekom, with cloud and open source and GRASS and PostgreSQL and SHOGun and all the other tools that we need to dig trenches where we can put glass fiber. Another project that I'm working on is the German base map, which is being put on a phone. So that's all of Germany's data, and Germany's data is quite a lot, and it compresses down to about 10 gig. Then you can put this on a phone, which is pretty cool. And I'm going here to Frankfurt for a reason, because Frankfurt is the city of money in Germany. And here you see all the high-rise buildings where the money sits. And what we see here is the Mapbox GL JS library, which is fantastic when it comes to using vector maps. And also this works in the cloud, obviously. So, how the cloud will devour open source: this was Matt Asay, August 14, 2015. This is six years ago, more than six years ago. And it's showing its age, there's even an image missing here. But what I got stuck on was this quote from Mike Olson. He said it's pretty hard to build a successful standalone open source company, and Matt Asay underlines this by saying: having spent 15 years trying to do exactly that, I would go one step further, it's impossible. So this may be, but I actually don't think so. Now, Matt Asay is really an evangelist; he was at MongoDB, and previously he was at Amazon Web Services, head of developer ecosystem at Adobe and so on. So he's coming from the big money side, which is okay. But then you also think that open source is selfish, which it is not. And he says, yes, companies don't support open source for purely altruistic reasons. Companies cannot be altruistic because they're companies. But people can. And this guy here obviously is not altruistic. So please don't mix up things: people and companies are different. So yeah, this happens all the time now, another multi-billion IPO for open source. But in the end, I don't really care if another greedy business discovers that open source is the better tool to get things done. That's like blatantly obvious these days.
Open source risks being devoured by the very cloud to which it gave birth, in the case that it gets eaten up. So what does that mean? As Facebook's engineering team noted in a blog, Facebook is built on open source from top to bottom and could not exist without it. And Matt Asay underlines this again: could not exist. This isn't a matter of personal preference, but existential reality. So this is open source in the cloud. So the cloud obviously did eat open source; the question is whether it really devoured it. So if you are in open source, why should you renounce open source? Let's have a look, there are a few examples. And this one is Elastic. And they said: not OK, why we had to change Elastic licensing. And there's a long explanation. One of them says: we think that Amazon's behavior is inconsistent with the norms and values that are especially important in the open source ecosystem. Maybe that's right. Our hope is to take our presence in the market and use it to stand up to this now so no others face the same issues in the future. They stand up for open source by turning their product into non-open source. Let's have a look at what Elasticsearch is. It is a distributed, free and open search and analytics engine. Again, free and open search; there is no open source here. But Elasticsearch is built on Apache Lucene. Apache Lucene, let me think, what is this again? I've seen that one before. This is actually a cool thing, which is the basis for Solr, and it's open source. So why did Elastic take something open source, make it better, and then turn it into something else, which it calls the SSPL? And the SSPL, what is the SSPL and how does it work? It's a source-available license originally created by MongoDB, who set out to craft a license that embodied the ideals of open source, allowing free and unrestricted use and modification. This is really open source? Continue to read, and you'll see down here that the SSPL has not been approved by the OSI. So, to avoid confusion, we do not refer to it as an open source license, and you better should not, because it's not an open source license. Now why did all of this happen? Because of the stock quote, obviously, there's lots of money. There's 162 million of funding going in, and Elastic has acquired 10 other organizations. If you need to do that, you need an ESO fund, which provides funding for employees of venture-backed companies to absorb the financial risk of exercising stock options. So now we're in the big business of money. And if that's the basis for your business, then you can renounce open source, because then you're not the developers anymore anyway. So let's go to the next topic: selling software. How does that work? The dark side of commercial open source: it's an acquisition. This is again a while ago, five or six years. Apple acquired FoundationDB as a warning to all. What they did is they took away all the... okay, I lost that. Yes, they took away the community download area and took it off GitHub. It's back now, but it's just an example of how difficult it is if you're a company and you're selling something: you get bought and then you're out of it. So going back to licenses, the Cloudera model, this is again Mike Olson. And he reports that in '93 he dropped out of the PhD program at Berkeley to join Illustra, which was a database company, and open source at that time was firmly entrenched at Berkeley.
This is where we got the BSD license, which he used for PostgreSQL. Then he followed a path that had succeeded before, with the Ingres project and company in the 1980s: he created a closed source, proprietary variant of PostgreSQL and built Illustra as a traditional closed source software company to sell it, which they did, walking out of the deal with around 40 million in the pocket, which is a cool thing. So great on that. But let's look again at licenses. BSD licenses are a family of permissive free software licenses imposing minimal restrictions on the user. So you can do anything you want with it, basically. But there is another one, which is the copyleft one: a series of widely used software licenses that guarantee end users the freedom to run, study, share and modify the software. Similar to what we just heard Elastic trying to do, but then not doing. And this comes from Richard Stallman. Some people don't like this guy; I think he's great. And let's say a little thank you, because without him I don't think we would actually be here right now. That's all I'm going to bore you with about licenses. So, death of an open source business model, which is the next one. And this is where Mapbox comes in: the whole idea is insane, no one believes it could possibly work when they first learn about it, but all have managed to achieve valuations in the billions of dollars by pursuing this batch of crazy, let-it-all-hang-out strategy. At least they thought that's how it would go. But in the end, you will see that the cloud killed open core. There was a long discussion, a really long discussion, when this happened nine months ago. So the baby should be out by now. And much of the discussion tried to say that it's probably not nice if Mapbox just goes away because some other big company bought it. But the more important thing about the changes here is that we need to use a token: developers need a Mapbox account and access token, so that with every request, because it's a JavaScript library, you have to ask somewhere else whether you can actually use it or not. And there's communication going through the web. It's like a big cookie, a total surveillance, where you can see what is going on. Otherwise, how are you going to protect a piece of software that is JavaScript and runs in the browser from being used by anybody? So, difficult, difficult topic. And Vladimir Agafonkin, he's the Leaflet guy and did a lot at Mapbox. And he's a great guy, I think; I met him once or twice. He gave a great talk on how to give awesome public talks. I should learn how to do this. And he's a nice guy, but he's a developer, he's not a money guy. So he was torn in this big discussion about what Mapbox was doing there. Let's have a look at open core. What is open core? Again, Wikipedia has an article with the exact name, interesting. So it's called the open-core model. And the article "has multiple issues", because open core is dead. It's not working anymore, if it ever did. It did some IPOs and made some money for big companies, but I don't think it's a sustainable model for doing an open core business. So one of the outcomes is MapLibre. And I'm not going much more into this, because we'll still have to see how this baby is going to live. As I said, it's nine months ago and I haven't heard the first cry yet. But yeah, for what it's worth, you can use it, and we do. So it's dead. Let's look at how open core actually can also work.
From Google to the world: the Kubernetes origin story. This is an interesting one, a nice read. It was summer 2013 when Craig McLuckie was in a room with Urs Hölzle, the head of technical infrastructure, and actually tried to, let me get this straight: you want to build an external version of the Borg task scheduler, one of the most important competitive advantages Google had at that time, the one we don't even talk about externally, and on top of that, you want to open source it. And yes, they did. And it's explained here. And this is one of the most important bits: it's 230 plus years of effort by the community that they gained by actually opening this thing up. So it does work. But it works differently. And now it gets totally twisted, because Red Hat, which is like the big, big company doing open source, one big company doing open source, what does it do? It takes this Kubernetes and turns it into a product. Into a product that is commercialized software derived from an open source project. Wait a minute. These are the open source guys. And they take open source and make it proprietary and then sell it again. What is Red Hat OpenShift again? Yes, it's exactly that. They put something around it, they make an extra effort to package it, to add security, to add additional tools, and give it as a product to a customer who doesn't want to implement all the stuff on their own. It works. It's okay. But let's look at PostgreSQL. And again, here you can look at the developers and you find the people behind PostgreSQL. And if you have a look at them, you can actually count them: it's one, two, three, four, five, six, seven. That's the core team of PostgreSQL. How cool is that? But do you need 3 billion dollars to fund these people? And even if you look at the major contributors, they all have their own income. They work for EnterpriseDB, for credativ, for Crunchy Data. And if you look, you will also find our friend Paul. He's from Crunchy Data, the Clever Elephant. He does the PostgreSQL FDW and PostGIS, the spatial extension. And you find Regina, Regina Obe, at Paragon Corporation. And she also does this. So: trusted open source, PostgreSQL for the enterprise. Who's that? That's Crunchy Data, and this is what they offer for all of these companies, which I think are actually quite well known, IBM and SAS and Rival IQ and so on: they provide enterprise PostgreSQL support, Crunchy PostgreSQL for Kubernetes and Crunchy High Availability PostgreSQL. So they're right at the top of the cloud and they're providing the services, and they're doing it with PostgreSQL and PostGIS and giving this to everybody who needs it, for money. This is commercializing open source. And Regina, you may know her for restarting services at OSGeo that haven't been restarted in 10 years. Thanks a lot for that, Regina. And she runs this Paragon Corporation, which we just consulted for a big, big government organization in Germany to tune our PostgreSQL database. So yeah, there are quite a lot of people actually making money out of open source, but it's different from venture capital funding. So open source works. Now the last slide that I'm going to show: remember when open source was fun? Going back to Matt Asay, and if you go right to the bottom, I think this is one of the takeaways from this talk, Matt says: which makes me wonder, are we too concerned with trying to turn open source into money, into work? Maybe, just maybe, we need to rediscover the fun side of open source as these developers clearly do.
And I think with that, he means you at the conference and behind all those screens watching these talks. So please enjoy FOSS4G, have a nice one. And later on, maybe we can have a virtual beer. Thanks a lot. I'm Paul Bassingen.
The Cloud Devoured Open Source... but then it choked on Free Software. A freestyle intro on how to help Free and Open Source Software manage to avoid getting obsoleted by shareholder value. A short note on how business functions and why Free and Open Source makes a good combo when creating sustainable software architectures.
10.5446/57246 (DOI)
Hello, anybody there? Can you hear me? So I see us on the Venueless platform now with a backup. Okay, good work. Alright, I'm ready to go when you are. Okay, I shared the link on... Okay, guys, they can obviously hear us. There's a time delay between Venueless and StreamYard, it's about 20 to 25 seconds. They can hear me now. They can hear us now. And I would say we just start, because we are already two minutes late. I would remove you, Gonzalo, from the stream and just introduce now Michael Terner, who is going to talk about the intersection of geospatial open source and commerce. Thank you, Michael, for your support in these hectic minutes. And yeah, it's an honor to introduce Michael to you, because Michael was the chair of FOSS4G 2017 in Boston. And yeah, we are really happy to hear you now. And yeah, it's your stage, Michael. Thanks, Till. I appreciate it. It's good to see that, you know, in the virtual world, I think we just had the equivalent situation of your laptop isn't talking to the projector, or you can't share your screen slides or something. There's always something that goes wrong with the technology. And we had a little hiccup with the StreamYard platform, but I'm really pleased to be here and really proud of the whole community pulling this virtual conference together. It's been great so far. So, those of you who know me know my passion, I've been in business my whole career, is understanding and talking about the commerce that surrounds open source software, geospatial open source software: essentially, how do you make money, how do you support FOSS4G communities and give them the money they need in this ecosystem? So the story begins with: what is free and open source software? If we understand what the software is and where it comes from, it helps present some of the opportunities. And it's much easier to explain open source software, because if you look at these brands here, which are all open source projects, Firefox, Linux, WordPress, QGIS, PostgreSQL, Android, these are all open source software tools. So they're well known brands. It's not just little niche stuff; things people use every day are part of this ecosystem. And free and open source software is just developed differently. The software is created and maintained by a group of people. Generally, there's one person who's in charge of the project, everyone knows Linus Torvalds for Linux, and they make the decisions and build the team who makes the decisions about what comes into the software and out of it. And super importantly, source code is freely available to use and change, so people can see what the software looks like at a code level. Generally, the software is governed by an open license. It's open and freely available, but there are some rules that you shouldn't abuse. And generally, the software is available free of charge. It's not always the case, but most open source software can be obtained free of charge. So what kind of free are we talking about? And, you know, I think these phrases have been tossed around by a lot of people over the years.
You know, one of them is: free and open source software is not about free of charge, it's really about free speech, it's really about the freedom to see the source code. But the other thing is, even if you are expecting it to be free, like free of charge, it's not really free, because any kind of software has some cost in maintaining it, in downloading it and in getting help if you need it. Just like free puppies: someone may give you a free puppy, but then you're buying food and you're buying the veterinarian services and a leash and a dog bed and all of those kinds of things. And what's happened recently is there are many examples of open source software being big business. Who is our diamond sponsor? Microsoft. And who's one of our gold sponsors? Google. And many, many large brands, IBM, Trimble, Esri, Hexagon, have all been important and consistent supporters of FOSS4G conferences. And again, back in 2019, kind of the mother of all open source deals: IBM pays 34 billion dollars for a company that's built around an open source model, about presenting versions of Linux to enterprise businesses and providing the support to do that. And one of the other indicators of the importance of open source software has been the effort that large companies make to associate themselves with open source. I was at a conference a few years back that General Electric held. And the keynote speaker, the first speaker, gets up and he's talking about open source software inside of GE's products, particularly for energy grid management. And why is a company like GE trying to associate itself with open source? Well, A, they actually do use it. And B, it's what customers want to hear. People understand that open source software can be very secure, that open source software adheres to standards in a way that commercial software doesn't, and on and on. And even in our own geospatial world, I saw this and I kind of dropped my jaw when I saw this slide: this is Jack Dangermond at a United States GIS conference, putting up a slide saying that ArcGIS is an open platform and highlighting open source software and openness. And, you know, I'm not saying I believe him, but the market conditions and people's interest in open source software are making a guy like Jack Dangermond believe that it's important to understand and talk about open source software. So, open source software is here and it's not going anywhere except up. So now I would like to pivot to the core of my talk, which is: how do you make money in this kind of ecosystem, understanding where open source software came from? And there are three main models, and I'll go into each of these in a little more detail in a minute or two. The first is providing value added services and support to projects that use open source. The second is building a new product and maybe incorporating open source components into that product, sort of using the analogy over here on the right side, where you're making dinner, and maybe some of the ingredients are organic, but one of the ingredients is open source or you grew it in your own backyard; it's just one of the ingredients in your product. And then the third model is open sourcing your own technology for your own business's benefit.
So let's dive into those for a second. You know, Red Hat, this 34 billion dollar company that IBM bought, and Crunchy Data, a big business that supports the Postgres database: their main business is providing support, helping enterprises adopt those kinds of technologies. The software remains free, but people spend a lot of money to implement the software right, and potentially, when needed, to develop new features that aren't in the current version that they need for their particular implementation; you can hire coders who will extend these open source projects. I think it is worth noting that some of you may remember a company called Boundless, which tried that model for several open source products, GeoServer, PostGIS and things like that. And, you know, unfortunately they weren't able to make a go of it. They hung around for a good long time and, my recollection is, at the end of the company some of the talent was purchased by Planet, who brought some of that talent over into their image processing stack, which is powered by open source. So the idea in this model is that there's an open source foundation, the foundation of the building that everyone can use, but there's work to be done and people pay for that work to put new things on top of that foundation. The second example is leveraging and incorporating FOSS4G technology to deliver products; FOSS4G is one of your ingredients. And so what you do is you implement FOSS4G in a way to create new products that other people will buy from you. And really it's pretty hard to find cloud based products that aren't using open source in some way, whether it's Linux or Postgres or WordPress, or, in the geo niche, PostGIS and GDAL. And again, two of our sponsors: Google has a ton of open source stuff I'll show in a second, and GeoCat, which has created a very nice business building national and regional data clearing houses on a stack that's entirely open. And they get money for those services, and they support those projects. And Google is kind of an extreme case, but we shouldn't forget that they take open source very seriously: Android is open, Kubernetes is open, the Go programming language is open, Angular is open, etc., etc. And they put these things out there and they do manage and moderate and decide who are committers and all of those things. And I think they really believe in open source. And, you know, don't get me wrong, there are plenty of things that Google does that I don't agree with, but in this area I think they really are good citizens and leaders in how you can leverage open source in a smart way, and very much to the benefit of the community. Google has been a consistent sponsor, I believe for every single conference since Barcelona 2010, including this virtual conference in 2021. And then the last model is open sourcing your commercial technology. MongoDB is a good example of this: they developed a big data and NoSQL platform, very intentionally, and they want to make money on it. And, you know, sometimes you just do it for love and passion and no one uses it; there's all kinds of open source software that's come and gone. But having it be free and freely available, with the source code there to look at, lowers barriers to adoption.
And over time it potentially attracts contributors who may be very talented and become committers and help improve your product. But really how they make their money is through the freemium model, where you can get the basic product for free: if you want to muscle your way into learning MongoDB, you can go out there and get the free version. But if you like it and want to start doing more complicated stuff, there are advanced features that you can pay for, and it's cheaper to buy them than to develop them yourself. So that's the last example. The one other thing that's a little different about the FOSS4G, the open source community, the FOSS community, is that it's important to be a good citizen. There are some basic rules of the road, and giving back and sharing is one of them. It's part of the openness. If you can see the source code, you should be grateful that you can see the source code and support the people who create it, and things of that nature. It is really a community. And in these kinds of communities, nobody likes people who only take. There's give and there's take, and we all try and do our best on that. And there are many, many ways of giving back. You don't have to be a committer and contribute code. You can contribute documentation, and Angelo described this very well this morning in the keynote. You can contribute your time to volunteering for conferences; our moderator, Till, is volunteering his time, and volunteering some more time, as well as attending this, to support this conference. And you can also contribute money. Hit that support QGIS button next time you download the new version. Give them 10 bucks. It makes a difference. So, I think I have a few minutes left and just want to finish with two examples of how this works. One is a very small company, my friend Randy Hale. I hope he's watching, or watches this later. I was speaking to him yesterday; he was giving a QGIS workshop. And he's one of these guys who's in the value add commerce model. He trains people to use QGIS. He gets money for that. He finds customers who are sick and tired of expensive commercial software and he helps them find alternatives and then implement those kinds of alternatives. And he's doing this in Tennessee, often in rural communities. And when we talk about his customer, Henry County, you know, it's a small place with about 30,000 people in it. And his job was this: the state has a requirement for Next Generation 911, for each county to give the state all the addresses that might need emergency response. And they were using Esri and just weren't feeling they were getting value out of it. And Randy worked with them to implement a different stack with Linux, QGIS, Postgres and GeoServer. He found another product, Fulcrum, another one of the sponsors of this conference, who power their solution with a lot of open technology. And they were able to do the field work to get the new addresses with Fulcrum and then manage all of those data in an open fashion. And importantly, the state had open standards and said, we don't care where the data comes from, just give it to us in this format, which was a perfect opening to do this. They didn't say give us an Esri shapefile or a geodatabase. They said, give us data that looks like this. And in the end, it's more of a free puppies, not free beer, model. The lower cost was a very big driver in this county's decision.
And, you know, essentially what they had was desktop ArcGIS, they had a server, they had some LiDAR software, and a total of close to 34,000 US dollars. And the cost for Randy's services: there was no cost for the actual software, they downloaded QGIS, they downloaded PostGIS. They did buy one copy of Fulcrum, which costs 360 dollars. And then it was about 6,000 dollars of Randy's labor to help them set it up and do the testing to make sure it worked in association with the state. And he now gets an annual maintenance fee to help keep their versions current; if anything goes wrong, he helps them out, etc. And he has a great summary. These were his slides, he gave me permission to use them: free and open isn't a black box; it's supported commercially; it can help your organization do important things, but it's not free, it does cost some money; standards are your friend; and help give back by supporting companies like his, and maybe even giving QGIS 10 bucks the next time you download it. And then the next example is my big giant company, and I won't say that open source moves the needle, but I have found, as I've moved around in Hexagon, there are people who are very aware of open source software and who take it seriously. And there are a couple of initiatives being done. Our Hexagon US Federal division, located in Washington, DC, manages a product called Google Earth Enterprise. Google used to sell that; they then open sourced it. And Hexagon US Federal is now the moderator of that project and they do most of the commits and make the decisions about other people committing. Another division of ours, the Hexagon content program, serves streaming imagery, and they power that streaming through GeoServer and MapServer for imagery tiles. And again, they contracted with certain companies: they needed certain kinds of performance enhancements or optimizations for their kind of streaming and were able to find people who could add that into the code base. And then there's also the more informal side: some of our customers have open source solutions, and if we're selling commercial software, we need the commercial software to play nice with the open solution. So it's important to have people who understand both sides of the handshake. And so, how do we give back? We actively manage an open source project. We actively contract with companies that support open source tools that we use. And Hexagon has been good about supporting my interest and other people's interest in attending these conferences and volunteering time to the conference ecosystem. Could we do more? Heck yes, always. But it's a good start, and it's been an important part of my job to have the freedom to do some of this kind of work, even if it's not directly related to my primary mission. So with that, I'd like to wrap up. Free and open source software commerce is alive and well and growing. There are a variety of business models that fit different needs. But don't forget to give back to the community and help grow this community. So thank you very much. Have a great time at this event, and hats off to the Buenos Aires LOC and all the sponsors for working so very hard. It's really exciting in one way and also a little bit sad that we can't all get together, and hopefully that'll happen next year in Italy. Thank you Michael. Great talk. Thank you very much for this bright overview of all the business model stuff. I have had that experience too, for nearly 20 years now.
To the audience, I'm a little sorry: I'm sitting in the south of Greece and it's raining for the first time in six months. So if there's some background noise, I'm really sorry, but I can't change it because I'm still sitting outside. But that shouldn't stop us from asking questions to Michael. I have the first one. Michael, you have talked about the business models of the open source companies, but what about the business models of organizations that use open source? Maybe you can give us some ideas about that. Yeah, it's a good, nuanced question. I mean, I'll try to give my sense of it through the lens of Hexagon, where there are different initiatives with different people that have found open source. And part of it is the business model: doing these open source initiatives is going to help the company make money. When Hexagon is managing the open GEE product, they also have a GEE professional version that they sell and make money off of with various government customers. When it's government, it's a harder business model. Government's just trying to do its mission, solve its problem, whether it's dispatching police vehicles or making sure the sewer pipes have good maintenance or whatever the case may be. And, unfortunately, in my home country, the United States, government is not very popular. People hate paying taxes. And so there's a lot of stress on budgets in many governmental agencies, and governments are trying to be as creative and cost effective as possible. And that's kind of their business model: to solve their problems in a good, effective way with the limited resources that are at hand. Okay, thank you. I have another question that popped up: should OSGeo make it easier for users to contribute to the projects? What are your ideas about that? That's a very good question. Thankfully, I haven't coded in a really long time, and I know that no one should let me, even in my best days, be a committer to these projects. Yes, I think it should be made as easy and accessible as possible. It shouldn't be a question of do you know the right person or something. At the same time, these are very important decisions. You can't have a free-for-all. People need to be good, strong coders. People need to understand the project, and people need to collaborate well. If you have lots of people trying to commit code that's not ready to be code because of their skill level, it wastes time. So I don't know what the best solution is, that's over my head, but I'd love to see that as a panel discussion amongst people who moderate projects. There should be a better way, an open way, and the criteria for what you need to bring to the table to get the invitation should be clear. Yeah, that's a huge field. Very good question. Thank you for that. Thank you for your answer on that. I have one last question received. For the next speaker, please put your questions in the question tab. But I got this one, which is: what are the key limitations for companies like Hexagon to use FOSS more extensively as a business model? This is my personal opinion; I can't speak on behalf of my company on this. But I think people haven't spent the time to learn fully about the open source ecosystem. And there are parts of our company, the division that I'm in used to be called Intergraph Corporation and was a giant geospatial innovator in the 1980s and 1990s, where there are people who have been working at this company for 30 or 40 years selling commercial software. And that's what they know.
And they have, you know, we're a publicly traded company, they have pressure to create revenue, legitimate pressure to create revenue for the shareholders. And this is what they know. They know how to make commercial software. And they haven't spent the time, they're very busy, legitimately too, to understand some of the new opportunities that are out there. And that's why I think these conferences are so important. It's why people like me, every chance I get, try to tell people about the possibilities within Hexagon. And you find some people who are interested in listening. You find some people who sort of say, yeah, in Europe it's really important: more and more governments are saying we have an open-first policy, and it's harder to sell commercial software if there's an open alternative. So how do we present ourselves as being friendly to open solutions, and then sell the things that we have that may not be available in open solutions yet? So it's just a long battle, and many of us have been doing our best to continue the education process. It's good to know people like you, Michael, are in that position, I think. So thank you very much, Michael, for your talk. And look at the kudos in the chat.
FOSS4G conferences have helped generate interest in, and adoption of, free and open source geospatial tools. Whether it is the business-to-business conference events, or the support of commercial organizations sponsoring FOSS4G conferences, it is clear that commercial interests and open source communities intersect in a variety of ways. This talk aims to describe several of the different paths that commercial organizations take to leverage free and open technologies for business success. The following three real world examples will illustrate these paths: very small organizations providing FOSS4G consulting and training services; product companies including FOSS4G tools in powering niche products; and platform companies that have built their platforms upon open source frameworks. The case study examples will include further details, including how my current employer utilizes open source technology. Finally, the talk will speculate on why large, commercial companies such as Google routinely open source their own technologies such as Kubernetes and other geospatial examples. This presentation looks plainly at the business aspects of the FOSS4G ecosystem. In short, how does free and open source software for geospatial help cultivate business success and sustain livelihoods?
10.5446/57247 (DOI)
Hello. I guess we can start. Welcome to Fosforge. Welcome to the event. It's great to see you. It's great to see so many people that are joining us online. Welcome to today's session. When is I this Wednesday morning session? I'm John Unen from Istanbul, Turkey, and I'm going to be the chair of today's early session. So I'm going to start, give way to first presentation right now to Andrew, Andrew, I'm interested, sorry, to this stream. So Andrea is active in open source development for the past 20 years and has a long time contributor and steering committee member for geo tools and geo service projects. He's interested in GIS at large, that's on data access referencing and reading processing and OTC protocols. And Ian works for Aston University as a geo server and mapping consultant during the day and supports geo tools and geo server users by night. Welcome Andrea and Ian. Thank you. The stage is yours and you can start the presentation. I'm going to share any questions or questions from the audience. Okay, I hope you can all hear me. Good. As long as Andrea can hear me, that's probably the important bit. Maybe not, but let's go. Let's go. So the secret life of open source developers, this is a somewhat updated version of the talk we gave in Romania, the last time we all got together in person. Quick state just to say, I didn't tell my boss I was doing this and he doesn't know what I'm about to say, so you can't hold him responsible. So I think we all can agree, all those of us that are open source developers can agree that there's a image problem. Everybody thinks that open source is this big happy room full of people all getting together discussing things. And mostly actually it's just somebody sat in their office on their own thinking, I could just get one more ticket done before I go to bed. And as we said, Geotools, Geoserver, GDAL certainly have a problem in terms of our commits to the interpreter. If you see that there's up here on the right hand side, that is presumably Andrea. This will be 2018 data, I guess, because I couldn't find a way of getting this data out of GitHub this morning when I was trying to refresh it. Geeta has a slightly bigger problem, that will be Evan. So they've actually taken some steps to overcome this now. So it's still Evan doing most of the work, but they're paying him to do it now, so that makes it much more like it will keep going. Generally speaking, all these projects have most of the activity concentrated in a small bunch of people, either four or five or one or two, and that can be a problem because it means that we all have a small bus factor. That's it. One of the blog posts I was reading this week while I was thinking about what to say during this conference was saying that somebody was saying 75% of open source developers have contemplated quitting and just walking away from their project in the last two years. I'm quite sure where they got those statistics from, but feels through. Yeah, that's what I thought. Sadly plausible. So what do we do? We work, we have proper jobs, we have to pay the mortgage, buy food, all of those important things. We write code, we look after our family, interact with our family occasionally, and we sleep. Yeah, this is a comment that I found on the internet and I recaptured it towards open source maintenance. The speaker says, who wants well-maintained open source and everybody wants it, of course. 
Then he asks, who wants to contribute fixes and everybody's looking at the ground, sad because contributing fixes is already one step up and already too much for many. The real hard question is, who wants to be a maintainer and poof, everybody's gone, because being a maintainer is the really hard part of participating in open source. There you go. This is a question we had on Twitter. People have jobs, that's eight hours work, time to move to and from the office, sleep eight hours a day, that's 16 hours. They want to do other stuff like college or kids or exercise or shower or eat. So is it a one tissue or a can't tissue? Yeah, at that point I was like, well, I have a job that involves open source and I also have a family with two kids. I work, cook, do the grocery, get out the family, and yet I put extra weekend hours on those same projects, reviewing for equestrics, in bugs, future improvements on my own. So I'm kind of wondering who these people that he was talking about are because apparently I'm not in that set of people. That's it. So this was Andrea's timetable pre-wok, pre-pandemic. Right. And as you can see, most of my time is actually spent either sleeping, working, or being with my family. And pre-pandemic, I could afford to spend like three hours with my head concentrated on doing bug fixing and reviewing for equestrian and the like. And then four more hours, but looking after my kids while I was doing that. So not very concentrated. However, the pandemic changed things for the worse. I think it's common experience that, especially for those having a family, that we spend much more time looking after the family. And as a result, I ended up with the two hours of my spare time dedicated to open source. And the rest is basically being busy all the time. Besides the Sunday morning. Sunday morning, I take my bike, go out five hours, I'm gone. I need to vent out for a bit before taking on another week like that. Mine is slightly different. On a really good week, I might get in eight whole hours of working on open source code. On a less good week when I'm not feeling so well or I'm just more tired, I might only get an hour in. I might actually sleep through those mornings, Friday, Saturday, Sunday morning. My parents rang me up on Saturday this week at 11. And Leslie said, I'll go and wake him up. And they were horrified that I was asleep at 11. I said, no, no, it's fine. I had to wake up. The phone was ringing. But I've built you a lot more relaxing. And, you know, I try not to go back into the office and do some coding in the evenings any longer the way I used to before pandemic. I'm much more aware of the fact that I ought to try and get out of the house occasionally. Right. And so this comes back to some discussion that we had on Twitter about open source having to be maintained by paid developers rather than volunteers. And there was a bit of a back and forth. Okay, but you know, what are the allegiances of the people just doing it for work rather than, you know, being attached to the project and putting their own time in. And the parallel with a doctor and so on. But well, we all know that being paid is not necessarily the guaranteeing that the project will get good with attention. As you said, the UK, we've decided that we're going to make our doctors work for next to nothing anyway. It's good for them. They want it to be a doctor. They don't need to be paid. There you go. So yes, I'd like open source. 
It's no longer toy product should be maintained by paid programmers rather than volunteers. But also why should I pay for your free software, which is the majority of our users? Right. So at the same time, we have this contrast between people saying, yeah, it's big enough. It's important enough that it should be paid. But if at the same time, most of the users don't want to pay for services around it, then we have a problem because where is that money coming from? This is one of my favorite oatmeal comics that I stole from the oatmeal. I didn't actually pay them for it. I confess. It's here. Hello, creative person. Thank you for making that thing. You're welcome. Here's an invoice. I know you're doing it for exposure. And turns out you can't actually spend exposure. It's no good to you. If all of your customers want you to do it for free, then you do end up with no money. Here are things that we do on a pretty much daily basis. I'm guessing even on days where I don't do any coding, I will do at least the first three or four of these. So I'll answer questions on the mailing list. I will answer questions on Stack Exchange. If you're asking an interesting question on Stack Exchange, I will spend a substantial amount of time often working out an answer for you. Just because I'm good like that and because I've got 70,000 internet points now. One day I'm going to work out what I can spend those on. We review the pool requests that other people have made. Hopefully other people have made. Sometimes it's just, you know, I'll review Andreas and Andreas reviews mine. We both review JODs. And there we go. That's the GSN and the review system working at hot pool speed. And then we go and answer questions that are on the bug list. Another one I get to do quite often is to look at the questions on the security list and spam on the security list. The number of people who feel that trying to sell me a new drone is recent email. The security list is just amazing. Five or six a week. We answer questions on Twitter because often people will ask a question on Twitter and copy me into the question. Or they'll ask Andrea about the question. And then you've got like 140 characters to write an answer in. It's not very easy. Unless you can just point them straight to the manual, which often you can. And then finally, we get to write some code. Which is what we wanted to do in the first place. That was why we got into open source development in the first place. Then we get people like this. So apparently we're idiots because the plugin page takes you to the GeoServer total download page, not the actual plugin. This apparently means that this guy could actually scroll down to the possible page where the plugins were. I didn't hear any more from him off. We pointed that out. Right. This is the key. People looking at the project as if it was somebody else's. As if it's all the responsibility of the developers and maintainers. Why didn't test it? Why didn't document it well enough and so on? Why don't they build an OSX installer for me? It's like, dude, there is no day in a community. It's a shared good. It's always us. Let's be honest. GeoServer works just the way we want it to. Because if it didn't, we'd have fixed it by now. Well, actually, there are some annoying bits that I keep meaning to fix. But mostly, if you find something that doesn't work the way you want it to do, it probably works the way I want it to. 
Yes, this is a rather annoying discussion I had on Twitter a few years ago about the fact that the QGIS community didn't serve Mac OS users. We tried to point out that he was part of the QGIS community and that QGIS was not serving Mac OS users correctly. How I jumped in at that point and said, you don't really understand this, do you? You haven't paid any money for this. And it got abusive after that. Yeah. So basically, people keep on having this kind of relationship that they learned from the commercial proprietary world or from going to the supermarket or going to buy a car. It's a product. So they spend money on it and then they got rights about it. And then they got a warranty. They sometimes get embossed with a customer service or stuff like that works. But in our case, it doesn't. And it doesn't for a very specific reason, because open source is a duocracy and our licenses say otherwise. So Ian. Yes. So if you want to test it or something, then by all means test it. Particularly if you've got a hardware or an operating system that I haven't got, then you need to test it. And you need to be polite. We're not just a chatbot somewhere. We're not corporate AI chatbot. If you're rude to us, we will just block you and never talk to you ever again, no matter how much problems you get into. And then, yes, the GS tools and GS server released under the GPL, general GNU public license. It's clear. It specifically says there is no warranty. We prescribe it as is without a warranty of any kind. Should the program prove defective, you assume the cost of all necessary servicing repair or connect correction. We didn't promise you that it would work even right. You should be grateful. It does work. So it's not like we are going to give you the cold shoulder and say it's your problem because we typically open source developers are people that tend to share. I mean, they did it the first time they opened the code, right? So they have this tendency of trying to be helpful, but you have to be respectful and understand that we cannot just solve your problem because you have it because we have our own life, work, family, and so on. But we will bend over backwards to help somebody who is willing to help us, who's prepared to work with us to provide the information we need to debug their problem. Just emailing us to say it doesn't work. You should fix this is no use at all. And sadly, that's quite a lot of the emails we get. I wrote a blog post earlier this week. I've been thinking about why I was. I did open source and why I contributed. And basically, though, I was saying there are three things that will get me interested. One is you offer to pay me to do it. And it's still going to be relatively interesting before I'll do it. Even then, if you want me to be ordered to do it, you have to pay my boss to tell me to do it. And he charges a lot more than I do on Friday. It's got to be an interesting problem that I've always wondered how it worked. So I did some stuff with contouring and topologically correct simplification. And then the center line labeling. I was just, you know, interesting things I wondered if I could get work out how to do them. And then the other one is if I could get work out how to do it. And then the other one is if it's embarrassing, you know, if it's a bug that embarrasses me. So I did some basic technical debt reduction where I finished off something that boundless and started before they went bust. 
And it was, you know, it was bugging me every time I went to that page when I was doing a training course that the fact the modules list wasn't completely bugged me. So I spent some time fixing it. But those are the sorts of reasons that people do. We all have our own reasons for being involved in open source, but it tends to be things like that. It's because it's interesting. We want to poke it. Or it's because we, you know, we want, you know, we're embarrassed that there's a bug there or we want something to do something that it doesn't do at the moment. Yep. And then this final slide here. You can support open source. Obviously, you can donate directly to the project via OSGO, or you can go and support one of the companies that that employees people like me and Andrea or Jody and listed on our support contract. Go and buy a support contract. You think nothing of paying, you know, thousands of pounds, dollars, euros per person. And for some of your your GIS software from commercial vendors, there's for about the price of a couple of seats on licenses, licenses, you can buy a year's worth of support from somebody who will actually fix your bugs for you and his supports the software. Right. And that's a. Oh, we have a conclusion slide as well. Yes. So we've already haven't changed anybody's mind today, but might make you think a little bit more about the developers that make and support your software. I think, yeah, it's true. We don't we really do think of you as part of our community rather than customers. And we need you to think the same. Right. Yeah. People to people not consumer to producer. Yes, producer to consumer. There we are. Yeah. Okay. So we have any questions. Where would I send questions if we had any? Yeah, thank you. Johnny is going to tell us. Yeah, I just heard that there was some microphone problems. That's how are you able to give them well now? Apparently worse than before. Yes. But still somewhat possible. Yeah, look at what you're saying. Okay. I'm sorry about that. I will try to work on that. We have some questions. And two questions are very similar to each other. So how could someone begin at importing can start on contributing without messing up everything? You have to get involved slowly and little by little. Big, big and old projects like like Geo server, Geo tools, map server, QJS and so on. They built over time a large set of checks, procedures, habits and the like. So the worst thing that you can do is actually stumble in and say, yeah, I have this poor question that changes 200 files. Please review it for me. That's not the way to do it. But it's beginning by little things like sit on the mailing lists of the developers of the users, try to get a feeler of how the community works, try to find a bug that you're interested in too that seems easy, maybe ask pointers on the developer list like, hey, I would like to contribute something little to get involved. And how do I go about it? And you probably will get some pointers on where in the code that bug is likely happening or where you can start debugging. And instead, most of the people just sitting in a corner, they are doing apparently nothing and then boom, all of a sudden, they show up with a large and urgent and very important code change that they absolutely need to contribute before the end of the week. And that's just incompatible with our ways and with the availability of time because reviewing poor question the like is expensive time wise. That's it. We have a whole bunch of issues. 
And I think there's a filter for that. Yes, we have a filter somewhere for sprint. And if you ask on the list, somebody will point you to it. Right. Right. And then only take the ones that are open and not in progress. But then there are, you know, there is also a lot of documentation and it needs improvements of various kinds of completing documentation that is missing, making that existing documentation better, writing tutorials, because most of the documentation tends to be reference guide instead. So step by step instructions on how to do something are welcomed, translations, and also just sitting on the user list and try to answer each other questions that helps a lot. But yes, I had a conversation with somebody yesterday on GIS stack exchange who was trying to do something and appeared to be basically just picking random things from his autocomplete to try and combine the method he wanted. And I was gently trying to suggest him that maybe he'd like to go in and add to the Java docs for that file to make it clear as to what the different functions did after I'd explained it to him. I don't know if you'll take me up on that offer, but that's another good place to start. But it would be nice. Find a class that's poorly documented and work through, work out what's going on and then write some documentation about it. We'd always be grateful for that. Yeah, jump into another question. Someone is asking about the parallels between volunteer in open source and unpaid housework. Yeah, well, okay. Sure. I mean, cooking, cleaning, and washing. It's stuff that we do for our family, for sure. But I think it's not quite the same because unpaid housework is something that we do for our family. So for our close relatives, the relationship is much tighter. It's more close to volunteering in some other activity where you are actually giving something to strangers. And that's also the distance that we have with our user base. We are a tight knit between developers, but somewhat distant to users in terms of human relationship. Yeah, it's most similar, not so much to me cleaning the house or cooking meals, but more one morning a week, I go to a repair cafe and fix broken electrical equipment for people. And we're quite open about the fact we need some money for doing that. So we have a donation tub on the table next to where we're repairing things as a hint to people that they might like to put some money in when we fix their expensive piece of electronics. The funny thing is that probably in that case, you see the people working for you and you see the tub for donating and it's almost like automatic to donate something. And instead with open source, you say, yeah, okay, thank you. Bye bye. That's it. You can download it from SourceForge without ever seeing us. Right. Okay, thank you. I'm just going to combine the last question. So I apologize. I have another presentation and I really have to go, but Ian will probably field it. Bye bye. Thank you. And I'm going to remind the audience you can always get in touch with the presenters afterwards after the sessions if you have more in-depth questions. But I'm just going to go to the last question. Okay. So one point is how much a person can delegate tasks to somebody else in open source development and combining another question. How many requests per month do you receive requesting to do free work, practice work for as an S&P in open source development? Well, obviously we can we can delegate tasks. You know, we love delegating tasks. 
It's that's ideal if somebody else wants to do something. If somebody turns up on the mailing list and volunteers to do something, we've always got open issues we can point them to. I forget how many we have at the moment, but we've always got, you know, yeah, several hundred. Let me see how many open tickets do we have currently? We have 924 open tickets at the moment. So there's always something we can delegate to people. As people asking me to do things free, I don't know, probably three or four a week in a good week. In a bad week, could be several tens of people. It's, you know, it's probably, it certainly would be an odd week if WD hadn't asked me to do something for them in that week. Okay, thank you. Thanks for your comments and thanks for the thanks to the audience for the great comments and questions. If you want to follow up with Andrea or Ian, just feel free to just ping them. We have a message through Benuelus. Thanks a lot, Ian. Have a nice rest of the conference. Thank you. Bye.
A common question seen on many open source mailing lists is "When will you guys fix my bug? It is critical to my company." This is often followed by one of the developers replying to say "When you write a fix or pay someone to do it". This leads to the user complaining to everyone that this snarkiness is not a welcoming response, or how unreasonable it is to expect them to learn to program, or to pay. The discussion often descends into a rambling maze of twisty insults and justifications. When the fuss dies down, all the developers go back to doing what they were doing — something useful — and the user becomes either a dissatisfied user or an ex-user. This talk by two veteran open source developers will help users see that play out from the developer's point of view. We'll look at the reasons that drive developers to share their code, the licensing conditions covering it, the real life of developers and associated constraints, and what is actually reasonable to expect from both sides. This is a reprise of a very successful talk that was given at FOSS4G 2019, which has been viewed more than 4000 times and led to an interesting discussion on HackerNews, amongst other places.
10.5446/57250 (DOI)
Can you turn your cam on or don't you have any? I'm sorry I didn't have a camera that was available. Okay, no problem. Alright, so then let's get started with this. Alright, hello everyone. Welcome to Swartz 4G. This is Wednesday afternoon. Following this round, we will be having your graph presenting transportation engineering with free card. So, it's a yes, I will leave them the same like this. Okay. Alright, just a second. Alright, well, and my name is Joel Graf. I'm a licensed professional engineer in the United States, and I work in transportation engineering, highway design, traffic signal maintenance. I've done a number of things in the course of my career. And a couple of years ago, we started getting into where I work, we started getting into 3D design for highway systems. And 3D design is something that's been a long time in coming to transportation engineering. It's something that's been very common in mechanical engineering and various other engineering fields, but in transportation, we haven't needed it so much. And so one of the things that I've been involved with most recently is looking at an open source transportation engineering 3D CAD solution. And kind of the reason for that is because traditionally, in transportation engineering, we've relied on generally 2D processes. In the early 90s, when we started moving into CAD based drawing, the translation from drawing on paper to drawing on CAD was one to one. There was really very, the skills were very transferable. It was very direct. But one of the drawbacks that happened with that, and we'll get to that in a second, is that you went from drawing on paper or vellum with pencils and ink to drawing on a computer. And not only did you have the capital cost of the computer, but you also had the licensing cost of the software. And so that represented a large shift then in the way we do design, which has since become more of a problem, especially with the advent of 3D design. Now 3D design has given us a great deal of additional value in that we can do 3D scanning and grab data points for very large regions all at once and bring that in and use that for our terrain mapping. And then we can use that in our geomatics or surveys components. Then from there, we can continue to do 3D facility design, like you see in the picture below, by simply drawing a cross section of the road and sweeping it down a predefined alignment to build our 3D road and then merging it with the terrain. And one of the long term benefits to this is the ability to use BIM-like tools for life cycle maintenance. So not only are we doing the preliminary engineering and the highway design, but that model once it's complete then goes to construction. And all of the quantities are built into the model itself. There's no estimations. It's directly from the model. So contractors can build directly from the model. And then after that, in theory, as we maintain the road and prepare it for the next cycle of maintenance, we can update that model with changes that have been made to the roadway and use that as the starting point for the next construction project in the area. It's all fine and dandy, but it's really a bit of a pie in the sky type of plan. But it's pretty neat. The problem is, is that when we rely so heavily on these sorts of tools, we're relying on proprietary software. The market for 3D engineering and transportation engineering is controlled by principally two vendors. That's microstation and Autodesk. Their licensing for it is recurring and expensive. 
And it's very prohibitive. Prohibitive in that it's not available everywhere. It's not easily available to smaller agencies that might benefit from the ability to do 3D design, but don't have the budgets to support the licensing. And because it locks you into their proprietary data formats. Now, that's not as so bad of a problem as it was, say, 20 years ago, but it's still an issue. So in the course of going through this, and I can tell you from my own personal experience with with one of the products, I found it very lacking in terms of being able to do genuine highway design in a 3D environment. It was very poorly suited for our purposes. I looked at this and I thought, surely there has to be something better. And so I started looking around. And I discovered FreeCAD. And I started to explore FreeCAD because I realized that not only was the tools poor, but we were locked into these expensive licensing structures and these proprietary data formats. And it all just kind of sunk into me that, you know, as professional engineers, you went to school, you learned your trade, you bought your materials and you started doing design, right? And you could do design by your own, by your own ability, by your own hand. Nowadays, we have to pay a corporate interest for the privilege to exercise a profession. And that should not be the case. I should not have to pay substantial sums of money to any particular company or vendor, just so I can do professional engineering. So that's really my impetus for doing this. But also I want to develop a more intuitive system of doing highway design. So as I explored things and I came across FreeCAD, one of the things that really impressed me is that FreeCAD has all of the elements to build a comprehensive free and open source transportation engineering solution. And I began developing this in 2017. The module is called FreeCAD Trails. And it's built right now entirely in Python. And there's been contributions by industry professionals from all over the world. So it's really been kind of a neat environment. Now, there haven't been a lot of contributors, but there have been a wide variety of contributors for sure, in terms of where we're located and diversity of skills. Now, I was initially focused on highway alignment design, horizontal alignment design, excuse me, and that sort of thing. And I was leaving surveys to another time because frankly, surveys is not my area of expertise. And in that time, a fellow from Turkey, his name is Hakkan. He goes by Hakkan 7, I think, on the forums, who's a geomatics or surveys engineer, stepped up and he started building a geomatics component for FreeCAD Trails. And let me tell you right now, Geomatics is where it's at right now. He's doing some really incredible stuff and he's integrating GIS and we'll get to that in just a second. So FreeCAD Trails right now consists of four key elements, alignment design, SWAP path analysis or turning template analysis. Those two elements I've developed and then Hakkan has worked largely on Geomatics surveys and then more recently, we've had a contribution for GIS and importing GIS, tiling and mapping. So just to go over these components real quick, SWAP path analysis is just the ability to track a vehicle down an alignment and just see where the outer edges of the wheels and outer edges of the vehicle body track. 
So as to determine whether or not you're overrunning, you know, curves or bumping into things or making a curve too tight that the vehicle has to turn into an opposing lane, that sort of thing. So the idea would be you'd bring in a completed 2D map or something of 2D plan design of a highway and then you would simply draw out your alignment and let the vehicle trace that alignment as you see here in the animation. So AutoTurn I think is kind of the industry standard, it's Autodesk's tool. So just in a few, in a few months between myself and a German contributor on FreeCAD, on the FreeCAD forums, we developed this sort of proof of concept, SWAP path analysis tool. Right now it's kind of broken, but it does work and it was really kind of impressive to see how quickly I was able to develop that using FreeCAD. Now I think I have to switch my screen share here. Give me just a second. So the other tool that I developed, I hope my screen is visible, it should be, the other tool that I was working on was Highway Alignment Design. So this is an example of a highway alignment that I was able to import using LandXML format. This is actually a section of roadway that's not too far from where I live. My goal here was to be able to develop an intuitive way of doing highway alignment design. And one of the things I discovered when I got into FreeCAD is that there are great at drawing lines on the screen, but every line exists as a separate object. And when it comes to editing highway alignments, you're doing a lot of changing and a lot of pushing and pulling and changing things. And the last thing you want to do is be, is constantly creating new separate individual objects for every little section of roadway. So what I did was I developed a sort of intuitive tool that allows you to edit highway alignments. Oh, here I got to bring up the trails workbench. So I developed an intuitive tool that lets you help develop highway alignments. And you basically go into the tool. Oops, something. There we go. This may iPad issues making this work in streamcast. So this may fail on me here. But basically you go into this tool and it highlights all of the points of intersection along the alignment. So these straight lines are the tangents of the alignment and then this and the curves of the alignment. And by grabbing a single node, you can drag and you can, as you can see there, you can adjust the alignment. And I see if I make an adjustment and my curves overlap, they turn red indicating that this is not a valid movement. So here I can very quickly, easily and intuitively readjust my highway alignment by grabbing basically any element along the alignment, whether it's the curve itself or points on the curve or maybe an entire tangent. And if I hold down a key, I can sit there and rotate the tangent. And that sort of thing. And so this was the beginning of my work with free CAD trails and developing an intuitive, easy to use highway design tool. Now, this is only 2D. We're not even into 3D yet, but I really had to start from ground zero in order to do that. And I'm just really impressed with how free cat has stepped up and really been able to make it easy for me to develop these sorts of tools. The next component is GM Addix. And this is what Hakan has been working on. And as you can see here, this is an example file that he's developed where he has created the ability to bring in 3D data set and skin it with a surface and be able to cut cross sections along it. 
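The swept path analysis described here comes down to tracking how the rear axle trails the front axle along an alignment. As a rough illustration of that geometry — not the actual FreeCAD Trails implementation, and with a made-up wheelbase value — a simple "bicycle model" can be integrated in plain Python:

```python
import math

def track_rear_axle(front_path, wheelbase=6.0):
    """Approximate rear-axle positions for a vehicle whose front axle
    follows `front_path` (list of (x, y) tuples). The rear axle always
    points towards the front axle and stays one wheelbase behind it
    (a simple 'bicycle model' / tractrix approximation).
    `wheelbase` is a made-up value in metres for illustration."""
    fx0, fy0 = front_path[0]
    fx1, fy1 = front_path[1]
    seg = math.hypot(fx1 - fx0, fy1 - fy0)
    # Start with the rear axle directly behind the first segment.
    rear = (fx0 - (fx1 - fx0) / seg * wheelbase,
            fy0 - (fy1 - fy0) / seg * wheelbase)
    rear_path = [rear]
    for fx, fy in front_path[1:]:
        dx, dy = fx - rear[0], fy - rear[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            rear_path.append(rear)
            continue
        # Pull the rear axle along the line towards the front axle so
        # it ends up exactly one wheelbase behind the new front position.
        rear = (fx - dx / dist * wheelbase, fy - dy / dist * wheelbase)
        rear_path.append(rear)
    return rear_path

if __name__ == "__main__":
    # Front axle driving a quarter circle of radius 20 m.
    path = [(20 * math.cos(math.radians(a)), 20 * math.sin(math.radians(a)))
            for a in range(0, 91, 2)]
    for front, rear in zip(path, track_rear_axle(path)):
        offtrack = 20 - math.hypot(*rear)   # how far the rear cuts inside
        print(f"front={front[0]:6.2f},{front[1]:6.2f}  offtrack={offtrack:5.2f} m")
```

The real tool also tracks body overhangs and wheel edges, but the off-tracking distance printed above is essentially the number you check when asking whether a truck stays in its lane through a tight curve.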
So if I zoom in here on this 3D data set here and rotate it around a little bit, there we go. Our rotation skills aren't so great, but here you can see there's the actual roadway, the roadway surface and then underneath these are the cross section lines that he has automatically laid out. So if you switch to a top view, you can see how those cross section lines interact with the roadway. And then of course, for every cross section line, there is a corresponding cross section alongside. So, and these are, this is a very traditional design approach for highway development, especially in surveys. So he's really creating in free cat the way that we've always gone about doing surveys and highway design. And it's really, really fantastic to see this is exactly what I would have had to have created if he didn't come along and do it for me. And I'm very grateful to have Hakan involved because really, like I say right now, GeoMatics is where it's at. It's pretty impressive. Let's cancel out of this. He has some other, the other thing he has, for example, these are also existing terrains that he's brought in. And right now he's working on methods to merge the proposed terrain with the existing, which is an important thing to be able to do. Create pads or cut some fills and that sort of thing. He's also just recently enabled the ability to import completed surfaces from 3D CAD packages. Before it was you had to bring in the point set and build the surface in free CAD. But now if you've got the surface already and say civil 3D or something like that, you can export it as a land XML file and bring it into free CAD as a complete surface, which that's really neat to see as well. As long as the land XML format is supported by a third party supported, but according to the specification by a third party software, we can bring that data from that third party into free CAD because everything really lives and dies in the land XML right now. Let's see. Oh yes. And then the other thing that the other thing that's happened more recently and give me just a second here and you just switch back. That wasn't it. Here we go. I need to reshare a different screen here. Just bear with me. Window. This one. The other thing that we've been doing lately is a is we've been able to incorporate GIS into free CAD. And that has been the work of a user by the name of Dutch sailor. He goes by Dutch sailor on GitHub. And give me just a sec here. I'm trying to bring up the, there it is. Here on GitHub, you under his under his FOSBIM experiments folder, you've got examples of him using free CAD to import tiles in GIS. So here's a WMS, for example, or web feature service WFS tile map service, web map service. And then, you know, setting geographic locations, things like that. And there's some there's some animated GIFs. He's even he's also been working with blender GIS to make it. He's also been working with blender to make it work in the end is called blender GIS there. So it's really impressive. Now I've tried to actually use this myself and it doesn't work for me. And that might be because I don't have access to the servers that he's using. He says he says right at the get go this is produce prototypical. He's he only developed this just in the last couple of weeks. So it's brand new. I mean, very brand new right now, but it's really exciting to see this level of GIS integration in a free CAD where we've never seen it before. So that's something I'm really excited about. 
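Since, as mentioned, everything currently lives and dies in LandXML, a minimal reader is a useful thing to have around. This sketch assumes a LandXML 1.2 TIN surface export (element names and the namespace URI as I remember them from that schema — verify against what your CAD package actually writes):

```python
import xml.etree.ElementTree as ET

# Namespace as used by LandXML 1.2 exports; adjust the URI if your
# exporter writes a different schema version.
NS = {"lx": "http://www.landxml.org/schema/LandXML-1.2"}

def read_tin(path):
    """Return (points, faces) from the first <Surface> in a LandXML file.
    Points come back as {id: (northing, easting, elevation)} and faces as
    triples of point ids, enough to rebuild the TIN elsewhere."""
    root = ET.parse(path).getroot()
    surface = root.find(".//lx:Surfaces/lx:Surface", NS)
    if surface is None:
        raise ValueError("no <Surface> element found")
    points, faces = {}, []
    for p in surface.findall(".//lx:Pnts/lx:P", NS):
        n, e, z = (float(v) for v in p.text.split())
        points[p.get("id")] = (n, e, z)
    for f in surface.findall(".//lx:Faces/lx:F", NS):
        faces.append(tuple(f.text.split()))
    return points, faces

if __name__ == "__main__":
    pts, tris = read_tin("surface.xml")   # hypothetical export file name
    print(f"{len(pts)} points, {len(tris)} triangles")
```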
And I'd love to be able to get that in to be able to use GIS file mapping and stuff to lay out alignments and do things like that too. So free CAD trails, it's been it's been a long time and coming, but we're really starting to see some some active development going on and really what we're living on and dying on right now are contributors, you know, we need people who show up and stay at it. My own time has been kind of tight lately and I haven't been able to do the development I need to, which makes me a little sad, but I'm very glad to hook on is doing there and that, and that doing his thing and that we're starting to see GIS integration as well. So with that, I really don't have much else to share I realized I came up, I came up a little bit short of my time. I would be happy to take questions and answers or maybe try to demo something if anybody's interested. Thank you for the talk, Joel. I think, yeah, we have a question. We can proceed right away. I think that is no problem. So the question is, when are you seeing, when are you seeing the civil with grass is for possession. Maybe you'll copy it. I copied it for you in the private chat because I'm not sure how to pronounce. Okay. Yeah. Can you see it. Okay, so I know what grass is the civil I'm not familiar with. I'll have to look into it. Really, Hakan is just kind of taking this is all Hakan's baby. And he's taking his own approach but he's using. Now, I'll address this just a little bit here. What Hakan is doing is he's using free cats built in objects and techniques and stuff. The neat thing about free cat is it's a C++ infrastructure with a Python or a C++ application with a Python back end which is pretty common in open source projects. And Hakan has done a fantastic job of exposing the C++ API almost entirely and literally in Python, which is so useful. And Hakan has been doing a great job just using free cats built in objects to build this entire system. Now, why not use the civil with grass GIS. Again, we, I'm not, I haven't been involved in the GIS integration so I can't speak to that. I'm not familiar with what the civil is but I'm going to look at it because now I'm curious. But what I can say is that number one we've been reluctant to bring in any more third party dependencies and absolutely necessary free cat itself already has enough three third party dependencies as it is. And as a developer one of the things I've learned is the less you have to depend upon a third party for your code the better the better off so if we can get the job done well enough using free cats built in systems. So that's my preferable way to go. But again, I can't speak to the civil because I have no experience with it. Good question. Thank you. Next question is, does integration with QG assist. Does integration with QG assist. Yes. I'm assuming the question is, is it helpful if that if that is what the intent is. As far as, as far as that sort of integration goes. It's, we've we're already working in. We're already working in Latin longs, you know, and those sorts of things so being able to incorporate QG is tiles into our existing system is, is a no brainer, very simple. So, you know, I think it's, first off, I think the integration for this is going to go well, it's going to go very well I think I think probably the integration of GIS is going to be one of the fastest easiest ones that we're going to fastest and easiest wins that we're going to be able to have in this project. Because it's, it's already a mature platform. 
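For the WMS side of that GIS integration, the request such a plugin has to issue under the hood is just a standard GetMap call. A hedged sketch — the endpoint URL and layer name below are placeholders; only the WMS 1.3.0 parameters themselves are standard:

```python
import requests

# Placeholder endpoint and layer name -- substitute a WMS server you
# actually have access to. The GetMap parameters are part of WMS 1.3.0.
WMS_URL = "https://example.org/ows"

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "orthophoto",
    "styles": "",
    "crs": "EPSG:3857",
    "bbox": "950000,6000000,951000,6001000",   # minx,miny,maxx,maxy
    "width": 1024,
    "height": 1024,
    "format": "image/png",
}

resp = requests.get(WMS_URL, params=params, timeout=30)
resp.raise_for_status()
with open("basemap.png", "wb") as fh:
    fh.write(resp.content)
print("saved", len(resp.content), "bytes")
```

The returned image can then be draped over a terrain mesh or used as a plan-view backdrop behind an alignment.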
And I think it's going to help us as we do development in other areas as well, partly because it's going to give a lot of aesthetic appeal of visual visual appeal to the work that we're doing. And it's also just going to be very useful for us as we develop as we develop our code base and develop new objects and features and stuff, because GIS will give us a way to sort of check ourselves as we go and make sure that what we're developing actually matches the real world. All right. Thank you. Next one is it's possible to open 16 card projects with free card. Is it open to, oh, is it open to, is it possible to open what kind of objects. CAD card projects with free card. Can you type that into the chat please, because I can't quite sure. I might not know what these objects are. Maybe in the chat project. Okay, I thought you were like giving an extension or something. No, free cat has its own proprietary data format and it's actually just a zip file with, you know, with text files inside. We can import like STL files if you free. Okay, so one of the neat things about free cat is it will import a wide variety of data formats just a very wide variety it's really fantastic just in converting between data formats. So a lot of people have used it just for that. But as far as opening CAD projects like, you know, a civil 3D CAD project or microstation, you know, one of microstation's CAD projects. When it comes to proprietary formats, no, absolutely not if that CAD data is converted to a land XML file, what like what we're doing with transportation engineering then yes we're able to at least at this point pull some of that data. Thank you very much. And last question. What about that storage and sharing between free cat and yes, yes. I missed the first half of the question. I don't know, I sound like I don't know, I can't hear myself. I'm copying it for you. I'm sorry. Right now, as far as if you're talking about real time. If you're talking about real time interactivity between free CAD and GIS as long as as long as there's a Python back end on the third party thing a bridge can be built. I've seen people use inkscape to to use inkscape to draw vectors. In real time, import them into free cat and you can see free CAD be manipulated as you do the work in inkscape so you can share data directly and literally if you take the time to build a Python bridge between the two applications. Otherwise, right now when you're talking about bringing data in from GIS you're talking about importing static files. So if the file updates in GIS you're not going to see that reflected in free cat. As far as data storage goes I'm not exactly sure what that refers to. Okay, thank you very much for answering the questions. We don't have any anymore. Okay, so I guess that we can finish here. Thank you for the talk Joel. I guess that we will see you around in first for you. Thank you. Bye bye.
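On the point about building a Python bridge between applications: inside FreeCAD that bridge can be only a few lines, because the C++ API is exposed to Python. A minimal sketch, assuming it is run in FreeCAD's own Python console and that the coordinates come from a hypothetical JSON export of another tool:

```python
# Run inside FreeCAD's Python console (or with FreeCAD's bundled Python),
# where the FreeCAD and Part modules are importable.
import json
import FreeCAD
import Part

doc = FreeCAD.newDocument("AlignmentBridge")

# Coordinates exported from some external tool (GIS, survey, ...);
# the file name and structure here are just an example.
with open("alignment_points.json") as fh:
    coords = json.load(fh)          # e.g. [[0, 0], [120.5, 30.2], ...]

points = [FreeCAD.Vector(x, y, 0) for x, y in coords]
wire = Part.makePolygon(points)     # simple polyline through the points
Part.show(wire)                     # adds the shape to the active document
doc.recompute()
```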
Parametric CAD has made inroads in transportation engineering in recent years. FreeCAD provides an excellent framework for the development of a free / open source CAD package for 3D parametric cad modelling of highways and related infrastructure. A broad view of the development of the FreeCAD Trails workbench for horizontal and vertical alignment design, 3D proof-of-concept and geomatics / surveys will be presented. The Trails workbench is being designed as an all-in-one workbench to provide tools for 3D highway design and modelling, from surveys / geomatics through alignment design and 3D models, including volumetric calculations. Integration with GIS is in its nascent stages as development efforts have been focused largely on stability and prototyping key tool sets and user interface elements.
10.5446/57251 (DOI)
Well, we are continuing. The next one is the presentation of Pirmin. Pirmin has been a geospatial software developer for more than fifteen years. He has contributed to GDAL, QGIS and several other projects. Pirmin works at Sourcepole, a Swiss company providing GIS services and solutions. So we welcome Pirmin and we are going to start your presentation. Just one second and here we go. Welcome to my talk about game engines and using them for 3D geospatial development. My name is Pirmin Kalberer, I work for Sourcepole. We are located in Zurich and we are doing software development for QGIS, web GIS and other OSGeo projects. Let's look first at the typical 3D viewer on the web. So that's a typical city view in the web browser. Let's compare that to modern games running on a PC. The first game is also playing in a city. It's doing real-time rendering of a city with moving cars. You can freely move within the city and it is streamed and rendered on the fly. The next game is about rendering landscapes. The story is set in Viking times. Here you see nice trees, wood rendering, nice landscapes, water and so on. Then a classical game, a flight simulator. That's the new edition of the Microsoft Flight Simulator. Also a city display, city rendering from high above and looking into the city. So this was a comparison with recent games, which is not fair in many senses. But the question of this talk is: how could we use this game engine technology? A few years ago game engines started to pop up, as game studios published their engines. Here is a collection of important game engines. The number one is the Unreal Engine from Epic Games. It's free to use, but you pay a royalty after 1 million gross revenue. The second one is Unity, which is free for personal use and then has a license per seat that you pay per year for commercial games. The third one is an open source engine, Godot. It's community based but also has funded developers, currently about 10. It has about 20,000 daily active users, 1-2 million installations, more than 5,000 games on itch.io, 1,300 contributors, 30 core developers. So it's quite a big project. Godot has a graphical user interface for 2D and 3D but also text editors for programming. It runs on Linux, macOS and Windows. It's a tiny binary. It's based on nodes and scenes, supports 2D and 3D, has an animation system and is programmable either in the built-in scripting language, which is similar to Python, or with GDNative you can program in C#, C++ and others. And it also has a visual scripting interface. It has logging, debugging, profiling, it has XR support, and it exports to several platforms, so again Linux, Mac, Windows but also Android and iPhone, and also WASM exports for the web browser. I prepared a demo to show how Godot looks and how working with Godot is. I start with a basic cube scene with a camera. Here is the preview of the camera, and when I play this scene, not much happens, so it's a cube with the background. What I want to do next is add physics to this cube, so I change the node type from the regular node to a rigid body, and then it warns me that I need a collision shape, so I add one. I have to create a box shape and then set the size of the box. Now I have collision and physics for the cube, so what happens now? It falls down quite slowly, so I want to make it a little bit more dynamic. I increase the mass or weight of the cube, increase the gravity and set the initial speed. So it's falling quicker.
So that's my cube, and now I want to have a terrain. There is an interesting website for that, the Elevation API, where you can extract parts of a DEM and add OpenStreetMap data. So here I set the imagery, the OSM settings, buildings and roads, and extract it as glTF. That's a scene format, so I have this scene already here in this folder. I open it and the scene gets imported. It consists of multiple layers: one is this road layer, another one the building layer, and the terrain layer with the aerial imagery on it. It's a mesh layer, and what I have to do first is reduce the size, and here I have to adjust the view and set the direction. Now I can start adapting the style: I add a material for the roads, a simple color setting, that's for the roads, and the same for the building layer, I add a material and set the base color. So I save the scene, and the next step is adding the scene to the main scene. So let's see the scene with the cube and the camera, and here it's combined, and I also have to adapt the view settings. Now let's see what happens. The cube is falling and it's falling through the terrain; I show it again. So what we need now is to add a collision shape for the terrain, and in the mesh options I have the possibility to add a collision shape. I'm creating one, you see the lines in the viewer. I collect this collision shape into a node so I have a better organization, I use a static body, and then I can collapse and hide it. So now we run the scene again and we see a physical reaction of the cube. So that's all made without any code, that's just the physics engine of Godot. As the last step I want to add some code. I attach a script, create a new script — that's the scripting language of Godot, which looks similar to Python. I start with a constant which is the initial scene which is loaded here, that's this cube scene, so essentially the cube. Then I write a function for keyboard input: I have to catch user input events, and if it's a key press, the space key press, I call this function throwCube, which I write right now. So that's the main function: I create a new box variable which is an instance of this cube scene, so I create a new instance and add this instance to the current scene. That's the whole script, and now let's see how it works. So it's falling down, now I press space and new cubes appear and roll down independently. So this was the Godot demo. You saw there is different terminology in this game engine: when we speak about raster data in GIS, that's usually sprites or textures; vector data are meshes in different formats; point clouds are also meshes or 3D tiles. Styling is usually done as a material with textures or other parameters, and we have containers, scenes — one of the scene formats is glTF, which we used here. The next slide is about OGC standards in the area of 3D. There's CityGML, and the latest one, CityJSON, which are OGC standards, and then we have two community standards: the first one, from Esri, is the Indexed 3D Scene Layer (I3S), and the other one is 3D Tiles from Cesium. They have similar capabilities; 3D Tiles is more based on glTF, but it's about models, buildings, points and so on.
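The scene that was exported and imported in this demo is glTF, and the text ".gltf" variant is plain JSON, so it is easy to inspect before dragging it into an engine. A small sketch, assuming the JSON flavour rather than the binary ".glb" packaging, and a placeholder file name:

```python
import json

# Works on the JSON ".gltf" flavour; a binary ".glb" would first need
# its JSON chunk extracted.
with open("scene.gltf") as fh:
    gltf = json.load(fh)

print("glTF version:", gltf["asset"]["version"])
for i, node in enumerate(gltf.get("nodes", [])):
    label = node.get("name", f"node {i}")
    print(f"- {label}: {'mesh' if 'mesh' in node else 'group/empty'}")
print(len(gltf.get("meshes", [])), "meshes,",
      len(gltf.get("materials", [])), "materials")
```

That node/mesh/material structure is roughly what Godot's importer walks through when it turns the road, building and terrain layers into separate nodes you can style.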
This glTF scene format is maintained by the Khronos Group. The question now is how do you build such a detailed 3D model, and the main software in this area is Blender, which is one of the best tools for creating 3D art. In comparison to Godot, Blender is made for scene creation: it has sculpting and other creation tools, it has high quality rendering — so it's not uncommon to render for days for one scene to get the best available quality — and there are animation tools, and usually the final product is an image or a video animation. Godot is made to be programmable for interactive scenes, interactive applications; it's well suited for XR applications, has runtimes for multiple platforms — desktop, mobile and web — and the final product is an interactive application. Most of these things can also be done with Blender, but Godot is made for that. And here are some activities on the GIS side: Esri published the ArcGIS Maps SDK for Unity and for Unreal last autumn, and Cesium published Cesium for Unreal this year. So there are two major companies, or two major products, bringing GIS data into game engines. And on the OGC side there is currently a revision of the I3S community standard and an RFC for CityGML 3.0, and there are also other activities like sprints in this area. What is available for using GIS data in Godot? There is this Elevation API I just used in the demo, which has a web application but is also available standalone, and there is OSM data, as we saw. The second one is the heightmap terrain plugin, which optimizes meshes from DEMs and has advanced shaders for surfaces — grass shaders, rock shaders and so on — and that's available as an open source plugin for Godot. Then there is another plugin called Geodot, which is based on GDAL and imports geodata into Godot, also an open source plugin, and it is used for instance in the LandscapeLab application, where you can place objects in the landscape and see them rendered in 3D. What could be done in this area in the future? A few ideas for Godot: there are many interesting formats which are not supported yet — FlatGeobuf is interesting, Cloud Optimized GeoTIFF, GeoPackage — but also 3D Tiles support I would wish for Godot, and CAD formats, BIM formats would be interesting; there is a BIM plugin for Blender, but it would be interesting to have a direct import in Godot as well. Then another idea would be a good CityGML converter which directly creates scenes usable in Godot. A dream of mine would also be a 3D preparation pipeline using OSM data; there are many scattered projects, but a common 3D OSM pipeline would be very interesting. And there is a different area, procedural city modeling, which could be interesting, and also exchanging textures, models and so on, so a platform for sharing models in this area would be interesting. So what are the applications? We saw a few of them: landscape planning, city planning, but also indoor or outdoor navigation, especially with augmented reality; you could display historical data in 3D; and you could do GIS with virtual reality user interaction — GIS functionality like measurement, shadow analysis, visibility analysis with augmented reality or virtual reality — or simply GPU based GIS calculations, which could improve performance a lot. So this was my short introduction to game engines and their use with GIS data, thank you.
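Because Geodot builds on GDAL and the heightmap terrain plugin wants a greyscale heightmap, a common preparation step is converting a DEM GeoTIFF into a 16-bit PNG. A rough sketch using the GDAL Python bindings — file names and the target resolution are placeholders, and the scaling choice depends on your terrain:

```python
from osgeo import gdal

gdal.UseExceptions()

src = gdal.Open("dem.tif")                     # placeholder input DEM
band = src.GetRasterBand(1)
zmin, zmax = band.ComputeRasterMinMax()

# Stretch the elevation range into the full 16-bit range so the game
# engine's heightmap shader gets maximum precision.
gdal.Translate(
    "heightmap.png",
    src,
    format="PNG",
    outputType=gdal.GDT_UInt16,
    scaleParams=[[zmin, zmax, 0, 65535]],
    width=2048,                                # resample to a power of two
    height=2048,
)
print(f"elevation range {zmin:.1f}-{zmax:.1f} m written to heightmap.png")
```

Inside the engine you then only need to remember the real-world minimum and maximum to scale the heightmap back to metres.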
When it comes to 3D graphics, computer games have been the technical leaders for decades. The game engines behind it have only recently been discovered by GIS manufacturers. This talk introduces game engines in general and the leading open source game engine Godot in detail. It also shows the status of integrating GIS data, which will play an increasingly important role in the age of AR and VR.
10.5446/57253 (DOI)
Right on time, so you have four minutes or so to prepare your setup. Yes, to go live. We have some delay, about 15 to 20 seconds between the stage and the broadcast. Yeah, I saw that, yeah, when I turned on the public broadcast. Suddenly two people were talking. I put your presentation here. I don't know if you have small letters, but I hope this is okay. Yeah, can you already see it? I shared it. Yes, yes, I think. Yeah, I can see it too. We are seeing it. Right. I can see some blue elephants. Yes, we had an artist at our company who was drawing some very cool images of our services. We will see more of them throughout the presentation. This is really, really cool. Yeah. So I think this switch between speakers is quite fast. Maybe we can have more questions or try to answer more questions. Yeah, I think so. Yeah. I thought about it: people could raise their voice in the public chat and ask questions there, and then we listen and answer in this session. But if you are asking all the questions, then it's fine. Yes, but we have a lot of people participating. It's amazing, really amazing. Okay. I think I saw 1600 people before this session started. So they are moving around. I think Michael is calling you. Oh really? He said George, question mark. It is indeed much faster on this stage to change the presenters. Good. So Felix, I will leave the stage now, and you will be the only speaker now, in three, two, one. All right. Hi, FOSS4G. Good to be back. It's been a while — yeah, 2018 was the last time I joined. So this talk is called Watch After Your PostGIS Herd. And what is it about? I want to explain to you how you can manage thousands of highly available PostGIS clusters and still get a good sleep at night. And I also want to speak a bit about my experience in developing database-as-a-service tools, because maybe most of you are only seeing this from the user perspective, using some cloud provider and using the database as a service there. But when you are on the other side, you have to think about how we can provide as much automation and convenience as possible for the users. How can we do this? Yeah, the target audience for this talk would be DBAs and software engineers, but everybody else, I hope you still get some inspiration from this talk. Hopefully it won't be too techy for you. Okay, so yeah, I'm Felix. I have been working at Zalando as a database engineer for two and a half years. And in case you have not seen me at previous conferences and have not seen my previous talk, let me tell you that I like FOSS4G and the whole idea of open source in general. Before I joined, I was working as a geospatial scientist, but my interest in Postgres and PostGIS got me into this DBA field. So in case you don't know Zalando, we aim to be Europe's starting point for fashion. It started in 2008 with selling shoes online, but now we sell all different kinds of fashion and also beauty products in nearly every market in Europe. We are a very dynamic, diverse and also very big company, very fast growing — I have to say, very different from where I've worked before. And at Zalando I joined team ACID. By this acronym you can already tell that we love relational databases, and Postgres in particular. We manage over 2000 Postgres clusters in a distributed cloud environment at Zalando, and in our case it's AWS where we run stuff on.
And we also have our own team dedicated to running a Kubernetes environment on top of AWS, and we provide database-as-a-service tooling for over 100 teams. Of course, it was not always like this. We also started with on-premise data centers where everything was static, and whenever teams needed something from the database team — create a new database cluster, change something — they had to file a ticket and the DBA team had to pick it up. A very classical DBA job, so quite boring, and maybe not that pleasant for either side, developers or DBAs. But then at some point the company was growing so fast that we had to think: okay, we need to find better ways to scale faster and easier. One requirement was to put everything into Docker images. Maybe that was a bit controversial back then in 2015, but we started doing it anyway and ran it on EC2 instances on AWS, which proved to be quite a nice experience. In case you have not heard about this Docker image that we use: we call it Spilo, which is Georgian for elephant. The Spilo image usually includes the latest Postgres releases — right now, in some branches there's even 14 in there, and it goes back to 9.5, I think, at the moment. We usually do this to onboard other customers more easily when they come from another environment, maybe RDS and so on, and want to switch to our tooling while still running an old version: we just migrate them and then we can do the major version upgrade. It also comes with some useful preload libraries — pg_stat_statements, bg_mon; if you know Postgres, you probably know what I'm talking about. And it has a bunch of extensions, and PostGIS is one of them; there's also TimescaleDB, pg_partman and so on — useful extensions that you might need. But probably the most important and essential asset of the Spilo image is the Patroni high availability daemon. We came up with this because, after moving to the cloud — in the cloud you have lots of moving parts, nodes can go down at any time; of course they run for some time, but you have to expect more failovers than if you run stuff on bare metal in data centers. So we came up with Patroni, which provides automatic high availability for Postgres: it does the leader election, it checks all the instances, and so on. A lot of the magic comes from Patroni, and it has become one of our most successful open source projects — a very popular one used by lots of different companies; I think even IBM uses it for their cloud offering, and banks and broadcasting companies use it. So it's quite a huge success story for us, for our team. Yeah, of course, we also ship backups to Amazon S3 — to an S3 bucket — with WAL-E, and we also use it for restoring backups. And in case you want to configure this image for your environment, if you're not on AWS but on Google Cloud Platform or Azure, there are lots of environment variables that allow you to configure it to your own custom setting. This is how it goes nowadays with Docker images: just about everything can be configured with these variables. Okay, but still we had our clusters running on AWS. That was nice. But after you pass the mark of 100 databases, with more and more teams joining and wanting to use Postgres, you're drowning in all these requests. We thought, okay, we need more automation to automate all the basic DBA tasks of setting up a new cluster, changing stuff and so on.
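Since Patroni comes up throughout the rest of the talk, here is a purely conceptual sketch of the leader-lock idea it is built around. This is not Patroni's real code or API; the FakeDCS class only stands in for the distributed store (etcd, Kubernetes objects) where Patroni actually keeps its leader key.

```python
# Conceptual sketch only (NOT Patroni's actual implementation): high availability
# via a leader key with a TTL in a shared store. Whoever holds the key runs
# Postgres as the primary; everyone else follows it as a replica.
import time

class FakeDCS:
    """In-memory stand-in for a distributed key-value store with TTL support."""
    def __init__(self):
        self.leader = None
        self.expires_at = 0.0

    def try_acquire(self, node: str, ttl: float) -> bool:
        # only succeeds if there is no live leader, or the node already leads
        if self.leader is None or time.time() > self.expires_at:
            self.leader, self.expires_at = node, time.time() + ttl
        return self.leader == node

    def refresh(self, node: str, ttl: float) -> None:
        if self.leader == node:
            self.expires_at = time.time() + ttl

def ha_loop_once(dcs: FakeDCS, me: str, ttl: float = 30.0) -> None:
    if dcs.try_acquire(me, ttl):
        print(f"{me}: holding the leader lock -> run Postgres as primary")
        dcs.refresh(me, ttl)
    else:
        print(f"{me}: {dcs.leader} leads -> run Postgres as a replica of it")

dcs = FakeDCS()
for node in ("pg-0", "pg-1", "pg-2"):
    ha_loop_once(dcs, node)
```

The real daemon does far more (health checks, promotion and demotion, configuration management), but the TTL'd leader key is the core of how a failover gets decided.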
Back then there was already an offering on AWS called RDS, but we thought, okay, maybe we can come up with our own solution around this Spilo image, because RDS was lacking some features, like the freshest Postgres versions, and maybe also automatic failover — I'm not sure if that was available back then. So we thought: let's create something; let's turn our whole team of DBAs with operational workloads into a software engineering team and create tools that form an abstraction layer between our developers and the whole database infrastructure. We were looking for a framework to do this, and back then, in 2016, we also had our first production deployments running on Kubernetes. So we thought: let's try Kubernetes with its extendable API — we might be able to create a database-as-a-service offering on top of it. Our goal was to stay as cloud native as possible: leave as much of the automation that Kubernetes already provides for deploying software to Kubernetes and don't reinvent it, and whenever our components have to talk with each other, talk via the service layer of Kubernetes. That way, if we ever change the cloud provider, it's easily doable — we just switch to another cloud's Kubernetes offering. Okay, speaking about Kubernetes, in case you don't know it — which might be the case, although nowadays it's quite popular — a very brief explanation; I cannot really go in depth here. It provides building blocks to deploy and scale microservices. That was the original idea, but it became so popular that people wanted to run just about everything on Kubernetes. After some time it also got a very good abstraction layer for persistent storage — descriptions for storage classes, volumes and so on — which made it possible to run stateful applications there, like Postgres. It also has a very vibrant community with its own conferences and releases every few months. So it was a good decision for us back then to jump on this train. By the way, everything I show you can also be used on top of OpenShift, but it was originally developed for Kubernetes. Speaking about the building blocks, you can see a couple of them here, and that's of course quite a lot to digest: pods, services, stateful sets and so on. You cannot really give this to your engineers and say, this is everything you have to create to run Postgres on Kubernetes. What developers want is just one resource type, and fortunately Kubernetes allows you to create your own custom types — in our case it's just a postgresql type which you can create. Then you can create custom controllers in Kubernetes, which watch for these resources, pick them up and create everything that's necessary in the background. So developers only have to think about writing one manifest file for the database cluster. And what can this database cluster manifest look like? For example, like this — just a few lines. Those who are familiar with Kubernetes will recognize it instantly, I would say: you have there a kind of type postgresql — this is the custom resource we created — and there you can specify: okay, I want to create a new cluster with three instances, that's one master and two replicas (we always have only one master, and then asynchronous replication, or synchronous if you want), and you can specify the major version, volume size and team ID. And that's mostly it.
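The talk shows this as a YAML file; as a hedged illustration, the same kind of manifest can also be submitted programmatically with the official Kubernetes Python client. The API group, plural and spec field names below follow the Zalando operator's conventions as I remember them — treat them as assumptions and verify against the operator documentation before relying on them.

```python
# Hypothetical sketch: creating a "postgresql" custom resource with the
# official Kubernetes Python client instead of `kubectl apply`-ing a YAML file.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

manifest = {
    "apiVersion": "acid.zalan.do/v1",       # assumption: the operator's API group/version
    "kind": "postgresql",
    "metadata": {"name": "acid-demo-cluster", "namespace": "default"},
    "spec": {
        "teamId": "acid",
        "numberOfInstances": 3,              # one leader, two replicas
        "postgresql": {"version": "13"},
        "volume": {"size": "10Gi"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="acid.zalan.do",
    version="v1",
    namespace="default",
    plural="postgresqls",                    # assumption: plural name of the CRD
    body=manifest,
)
```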
I mean, there are a few more fields you can specify, but this is the minimal version that gets you going. You submit it to Kubernetes, and after one minute you have a running cluster with high availability, backups and so on. That's pretty nice. One thing I see in some talks, when we get asked about the Postgres Operator, is that people usually think it does all the magic with high availability, Postgres management and so on. But the Postgres Operator is really just there for high-level tasks on top of Postgres and Patroni — the magic happens in the Spilo image, in Patroni and Postgres. The operator in Kubernetes is only there to watch for newly created manifests, handle updates, and compare them with the existing state. For example, you can update the major version in the manifest — maybe in some weeks from 13 to 14 — and the operator will go to the database pods and trigger the major version upgrade for you. So that's the abstraction and the kind of job it does. It also has some nice features. It does rolling updates of pods a bit more smartly than the actual stateful set: it watches for node changes. For example, every month we have to rotate all the nodes with new software, which also rotates the pods and causes a failover; our operator recognizes when nodes are marked for decommissioning and moves pods around so that you have fewer failovers. One nice asset is also cloning: you can create a new cluster and say, okay, it's a clone of my existing cluster, maybe a clone from two hours ago. So in case you had data loss or whatever, you can just clone your cluster and do a recovery from any point in time you want. You can also create standby clusters — another useful feature if you want to do migrations. What we usually see during Cyber Week — one of the big events at Zalando, of course — is that services and apps scale out like crazy to lots of pods, which then all connect to a database and easily hit the connection maximum, even though they are using an application-side connection pooler. So we thought, okay, let's also provide a database-side connection pooler. You can just say "enable connection pooling" in the manifest, and the operator will spin up a PgBouncer deployment for you. You have seen in the manifest that there is also the ability to provision users and databases — that's one part of the schema migration; the rest should come from tools like Flyway. The good thing in Kubernetes is that when you create new users or app users, it creates secrets for you that store the credentials, so you don't have to remember passwords anymore, or exchange them via email and so on — it's all there in the Kubernetes secret. You can just grab the credentials from there and log into your database. And if you're really lazy about writing YAML, you can even use a browser-based UI where you just fill out some fields and a new cluster spins up — maybe at the end I'll have time to show it to you. Okay, speaking a bit about user experience: you should have understood by now that all developers have to do to interact with the database infrastructure is this single manifest. Whenever they need to increase the volume, add more users, maybe add some Postgres configuration, they do it all via this manifest. So whenever we create a new feature, we always have to think about how to make it as simple as possible.
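To make the credentials part concrete, here is a small hedged sketch of reading such an auto-generated Secret with the Kubernetes Python client and logging in with psycopg2. The secret name pattern and the service host are assumptions based on the operator's conventions; use whatever names actually appear in your namespace.

```python
# Hedged sketch: pull username/password from the Secret the operator created
# for an application user, then connect to the cluster's master service.
import base64
import psycopg2
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# assumed naming pattern: <user>.<cluster>.credentials.postgresql.acid.zalan.do
secret_name = "myuser.acid-demo-cluster.credentials.postgresql.acid.zalan.do"
secret = v1.read_namespaced_secret(secret_name, "default")

username = base64.b64decode(secret.data["username"]).decode()
password = base64.b64decode(secret.data["password"]).decode()

conn = psycopg2.connect(
    host="acid-demo-cluster",   # assumption: the Service pointing at the master pod
    dbname="mydb",
    user=username,
    password=password,
)
print(conn.status)
```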
So for example, we also have documentation where we explain some best practices to our devs. One idea there is a default user setup where you always have a reader role, a writer role and an owner role, and in Postgres you can also configure default access privileges, so that when you create new tables the privileges are already in place. But that is something the user has to set up on her own. So we thought, okay, maybe we can make this easier and avoid people having to specify all these roles in the manifest: you just specify, under a new key, a certain database that is then prepared with some default features — default roles, maybe even extensions, and also schemas. This is how it now looks if people want to create a database with, for example, the PostGIS extension: sometimes when they run a Flyway migration that creates the PostGIS extension, only superusers can do it, and then they have to fiddle around with extension whitelisting so that non-superusers can create the PostGIS extension. We thought it would be nice if the operator just did that for them, so that when they create new tables, all the privileges are in place. That's also pretty nice. So that's just one example of what it can look like when you have to create a new feature — how simple can it be? You see it's already a couple of fields, but at least it also does a lot of stuff for you. Okay, speaking about monitoring, what do we use here? Whenever you create a new cluster, we have a browser-based UI where people can see the currently running queries — so if there's some blocking query and so on. We also have a library called bg_mon, which exposes background worker metrics of Postgres via a REST API that we can grab to show users their CPU consumption, memory consumption and so on. Query statistics are also quite useful, to show them what the slowest-running queries in their cluster are and how the data is distributed. We can even show users the history of how a query behaved over time — maybe at some point the execution plan suddenly flipped and the query started taking much more time. And sometimes it's also interesting to show them who has logged into the cluster, so we created an extension there called PGLmon where we track all the logins of different users and application users — even unexpected, undesirable logins. And of course we also have alerting. We have one central monitoring framework at Zalando which we call ZMON — that is also open source — and it allows you pretty easily to send single SQL queries to all your clusters. Whenever you create a new cluster, ZMON already hooks in, and we have a dedicated role there, which makes it easy to create a new check in ZMON and say: okay, I want to send this query to all my clusters, to check, for example, whether there was an updated base backup last night — it happened at some point that there wasn't. So it's good to know that you can have a quick disaster analysis when something goes wrong. Okay, so slowly coming to an end now: what are some best practices I've learned in the last two and a half years of working with this setup? I can tell you that when you work with Kubernetes, you really adopt this microservice perspective, because creating database clusters is so easy. I only want to see one database in each Postgres cluster.
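This is not ZMON itself, but the fleet-wide check idea described above boils down to something like the following sketch: run one SQL probe against every cluster and flag the ones that fail. The cluster DSNs, the monitoring role and the probe query are all illustrative.

```python
# Hedged sketch of a fleet-wide SQL check (not ZMON's implementation): send the
# same probe to every cluster and collect the results, treating an unreachable
# cluster as an alert too.
import psycopg2

CLUSTERS = {
    "orders-db":   "host=orders-db dbname=postgres user=monitoring password=***",
    "customer-db": "host=customer-db dbname=postgres user=monitoring password=***",
}

# trivial placeholder probe; a real check would look at backups, lag, bloat, ...
PROBE = "SELECT count(*) FROM pg_stat_activity"

def run_fleet_check(clusters: dict, sql: str) -> dict:
    results = {}
    for name, dsn in clusters.items():
        try:
            with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
                cur.execute(sql)
                results[name] = cur.fetchone()[0]
        except Exception as exc:
            results[name] = f"ERROR: {exc}"
    return results

print(run_fleet_check(CLUSTERS, PROBE))
```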
So really, each application gets its own dedicated Postgres environment — one database per cluster — because then, if something goes wrong, if your app crashes the cluster or the pod just goes down for whatever reason, only one service, one application, is affected, and not a whole fleet of services. This is usually what I recommend to teams. They also have to make sure their databases don't grow enormously: they should think about table partitioning early on when they see a table getting too big, and when a whole cluster gets too big we usually do sharding — we create separate Postgres clusters and put the sharding logic in the application. Because when your cluster gets bigger and bigger and something crashes — which you have to expect in the cloud — rebuilding a new instance, rebuilding the crashed replica, takes more time. That's one example. Something else I see in the community is that people run our Postgres Operator, update to a new version, and suddenly all their production databases have problems. You should always do this first in a dedicated test environment and not in production, of course — and even with cloning, it's easy to test things like major version upgrades against a copy of production, for example to see whether your application is compatible. Okay, final slide: some lessons we learned about developing database-as-a-service tools. When you create new features and you provide great flexibility for configuring your Postgres cluster, it will be abused. We have one section where you can override single Postgres configuration parameters like shared_buffers, work_mem and so on. If you can do it, some teams will do it, and then they wonder why all of a sudden their pods die because of out-of-memory. So there's always this question: should we really give users the whole flexibility or not? Then there's also the question of whether we should use more auto scaling or less. When we see the disk space getting tight, should the operator automatically increase the volume? It might be fine if data grows steadily, but sometimes you have a query that just writes lots of temporary files and fills up the disk — would you really want to increase it then? Obviously not. So maybe your developers would like it if you auto-scaled everything, but your boss might not — we look at the bill at the end of the month. And one danger I also see: we have this solution now, it's open source, everybody can use it. People come along and see, oh cool, it automates all the things — nice. They start using it, and then they come to us when something crashes. You should still know where to look, where the logs are, and how to repair things. So there is the danger of this autopilot effect: you should still know how to land the plane. There should still be someone in your company who knows Postgres well enough to fix the setup. And last but not least: everything I've told you sounds quite fancy, quite cool, but do you really need it? Do you really manage hundreds or thousands of Postgres clusters? Do you really need high availability? If not, then you might not need Patroni or the operator, or you can go with another Postgres image. Yeah, that's just what I wanted to tell you.
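One aside on the application-side sharding mentioned in the best practices above — a hedged sketch of what that routing logic can look like. The DSNs and the modulo scheme are invented for illustration, not Zalando's actual setup.

```python
# Hedged sketch of application-side sharding: hash a shard key and route the
# connection to one of several independent Postgres clusters.
import hashlib

SHARDS = [
    "host=orders-shard-0 dbname=orders",
    "host=orders-shard-1 dbname=orders",
    "host=orders-shard-2 dbname=orders",
]

def dsn_for(customer_id: str) -> str:
    # stable hash so a given customer always lands on the same shard
    digest = hashlib.sha256(customer_id.encode()).digest()
    shard_index = int.from_bytes(digest[:4], "big") % len(SHARDS)
    return SHARDS[shard_index]

print(dsn_for("customer-42"))
```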
It makes management super easy, and sometimes when I work on private projects on my own laptop, I think about how pleasant it is to create a cluster in just one minute — but if you just have to manage one database, one cluster, you can do that without all of this, too. All right, so that's about it. On the last slide I've gathered some links to the projects I've talked about, the extensions we created, the tools we created — and yeah, happy to answer your questions. Thank you very much, Felix, it was a quite clear presentation, and we have some questions already. I've copied and pasted them into the chat, and the first one was about the availability of the slides, but I think you already showed the link to your slides — maybe you can show it again. Yeah, I will make it available here, you can copy and paste it. Another question was about how you manage the installation of the extensions. I saw in the YAML that you have a list of extensions, but what about, for example, the extension dependencies like GDAL, Proj and so on — do you install everything from packages? Yeah, we install from packages. We don't build from source on our own. Okay. And the third question you have there: if new databases are added, is the pooling dynamic? You mean if a database is added within the same container? Yes, I think so. Patroni is there for the whole environment, for all databases — it's there for the whole cluster. It doesn't matter where the database is created; it's in the same pod as all the others, and Patroni only takes care of things at the pod level, on each instance. Okay. Let me check — we have another question here from the audience about the threshold for big tables, when to partition, for example. We usually tell our teams: when you hit the billion mark and you know your data grows quite steadily, then you should really think about the right partitioning strategy. We saw it for one team a couple of weeks ago where we applied partitioning — it's never that easy to turn an existing table into a partitioned one, but all of a sudden they could take 10 to 15 more requests per second, so it was definitely worth it. Okay. And another question is quite interesting: do you have any war stories about the automation going wrong? We do, actually, yeah, there are some. There are some talks available online from somebody from our Kubernetes team about what can go wrong in Kubernetes if you run stuff there, and we also have some talks about our experience from the last two or three years of running Postgres in production on Kubernetes and what can go wrong — whole talks, or at least a big part of a talk, on that topic, so check those out. I think they should be linked in our Postgres Operator repository — search for Kubernetes, Zalando, and you will find it. Okay. We have another question: how is the connection pooling done? It's with PgBouncer. The operator creates a deployment which spins up PgBouncer pods, and you connect via those pods to the database — so not directly to the database pods, but via PgBouncer. Do you always use PgBouncer in front of the database? Not always — you can switch it on or off. Usually our teams have application-side connection pooling and that works fine, but sometimes, if you scale out pretty heavily, it makes sense to enable database-side connection pooling.
It's not enabled by default. Okay. Felix, thank you very much — thank you for the presentation and for answering the questions. I think it was quite interesting. It's a use case for just a few of our users who work with such large environments, but it's good to know how this can be automated, as you showed. So we are moving on to our next presenter.
In this talk I will explain how you can set up PostGIS as a service with the container orchestration framework Kubernetes. At Zalando we are managing thousands of PostgreSQL clusters and had to find a way to make the database experience for developers as easy as possible. Today they can create new clusters or run major version upgrades themselves with a click of a button. High availability, point-in-time recovery, role provisioning and monitoring come out of the box. Engineering teams are more independent and can move faster while not boring the database administrators with repetitive operational tasks. The Zalando DBAs, on the other hand, aim to improve the cloud native Postgres experience and develop open source tools such as Patroni, Spilo or the Postgres Operator, which will be presented. I joined Zalando in 2019 as a PostGIS user and want to share some of what I learned on the way to becoming a database engineer.
10.5446/57254 (DOI)
Again, sorry, there are some technical issues. We're actually in the wrong room now. I sent you the new link here. Yes, we have to use the afternoon session room, so we had to switch over. Now it's in the private chat. Yes. Okay. Can you hear me okay? Yes. Good. So, sorry for all the technical inconvenience and issues; now we're ready to continue with the third talk today. Ken Golding is presenting about Leaflet — sorry, I'm a bit mixed up — Zaru. Just a second, I have to get back in here; it's a bit further back, because we're now using the afternoon session room, and that caused quite a bit of trouble and delay. So, okay: the Zaru platform for real-time spatial dashboards — and that spatial dashboard topic has been of quite some interest, especially during the pandemic. So I'm very keen to hear what you have to present here, Ken, and the floor is yours — if you have a presentation to share. Great, I'll go ahead and share that. We'll add it to the stream. Okay, just one second. All right, can everyone see the presentation? Yes. Just to double check: if I switch to the examples, can you see that too? Yes, I can see that too. Well, now I see that we actually had it scheduled — because the presentation was relatively short. Yeah, I believe we were scheduled to start at 10 Eastern; I can certainly start now or we can start then. I think it's better if we wait, because there are people switching from room to room. Yeah, we can wait for them. No worries. And just to be clear, it's a 20-minute presentation and 10 minutes of Q&A, right? Yes. Excellent. So, maybe just to fill the void, you can tell us a little about your background of working with this topic — what's your motivation behind it? Sure — a little bit of it is in the presentation, but yeah, just while folks are filtering in. We've developed this at Sasaki; we're a multidisciplinary design firm headquartered in Boston, Massachusetts. We use geospatial for a number of different things, but it's a mix of professional designers who need to be discovering this information, as well as a few GIS experts. What we found is that we often need answers to be available very quickly, and so I've started to develop some of our own tools to help with that. One example, which I'm actually not going to show in this presentation, is a tool that lets people use another really nice open source tool developed by Conveyal, called R5, which lets you understand who can reach what. We're looking to combine that with datasets — whether it's the U.S. LODES dataset, which tells you where all the workers are, or the census data, which tells you where all the people are — and understand how a design can influence and impact different people. So hopefully folks will enjoy the presentation. We'll have a lot of examples of some of the challenges we've found in current geospatial, and also some fairly interesting solutions, we think. Coming at this from a design firm perspective, we don't have huge resources to throw at this on the software front, so we're really hoping that the open source geospatial community can pick up some of these ideas and run with them, if they're good ideas.
I mean, that's one of the tests, right — to put them out there and see if there's any traction. So that's where we're coming at this from. Yes, good. Thank you. Then I would just say it's still a bit early. Yeah, I'm happy to wait until 10. Okay. Unfortunately, it's not possible to see how many people are in the audience waiting for your presentation to start, so nothing to see there. But it's now almost 10, and I think we'll just start — you mentioned 20 minutes of presentation and then Q&A. So the floor is yours. Perfect. Well, thanks Stefan, and thanks everyone who's joined this session, and also to FOSS4G for providing this platform — we really appreciate all that FOSS4G does to promote sharing of ideas and solutions in open source geospatial. So I'm going to be presenting on Zaru, and Zaru is an approach that grew out of the needs of a design firm. Sasaki is a multidisciplinary design firm with global reach, and we really use geospatial to understand our design context in terms of natural and man-made systems; access and reach — really, who can get to what and how; and development forces, like market forces. And then how that relates to design strategy — making sure that the strategy is aligned with those forces, or is strategically guiding some of them. We also spend a lot of time communicating our ideas as well as exploring and uncovering patterns in data, so we do a lot of data exploration and storytelling. But in coming up with some of these solutions, we felt they could have a reach beyond what a design firm might need, and that some of them may be generalizable to the broader geospatial community. So thanks again for providing this platform to talk about this. From our perspective, in spite of so much great work in open source, we feel that geospatial is still dominated by major players, many of whom actually repackage public data to make it easier to access — and that's because it's currently really hard to access across a wide variety of tools and platforms. We also think we're not keeping up with the quantity of data that's coming in through big data. We've definitely found that source file sizes can be huge and difficult to work with, and there are few practical solutions at all for actual big data — we're talking about terabytes or petabytes. And we often find we're dealing with a site that spans multiple scenes, with a lot of different sources, so that disjointed data can be hard to work with. One thing we see a lot is that a lot of work is repeated: practitioners are working through the same interim stages — imagine you've got highways, and everyone's doing the same buffers on the highways — but there's no way to share those interim products, which could actually be very valuable. And then we feel there's this big gap between simple tools that can give you very basic answers and super sophisticated tools that require a lot of time, dedicated effort and expertise. So we find it's hard for non-experts to really do anything sophisticated or to discover new insights.
So why are we excited about this approach? The scale is practically unlimited — we'll show how that plays out; it's hard to believe in some ways, but we're able to leverage the magic of slippy maps for that. It's very cheap to run: there are no fancy servers, all the GPU work happens on the client side, not the server side, which means the infrastructure is very easy to set up. We're really leveraging gaming techniques and WebGL graphics for real-time exploration — we find it incredible to be able to work with these things very smoothly to test ideas. And then there's the seamless scale transition from the global level to the detail level without needing to switch to a new level up the hierarchy. Often you'll see maps which show your city, then you zoom out to the county, then the state, then the whole country; in this case, everything can just be presented at the most detailed level. A solution that helps with that is what we're calling dynamic density, together with pixel-perfect mixing. The other cool innovation is a solution that lets us query millions of records — record-level data — in real time, and filter them. So, as I was saying, Zaru really takes advantage of slippy maps. I'm sure everyone's familiar with slippy maps; they're very widely used on the web. But the main thing I want folks to be aware of is that each time you zoom into a slippy map, it's basically taking each tile and dividing it into four tiles, and each time you zoom again it divides each of those into four. That's happening at the tile level, and it's also happening for each individual pixel in the slippy map. The first examples I'm going to show look at that kind of unlimited scale. You can be working with a dataset — this one was generated by Mapzen, and it's an actual big dataset: there are 70 trillion data points here that they've generated. At the global scale we're able to tease out what the world would look like if there were really serious sea level rise. I'm calling this Earth Bath; it's just a very simple example of being able to tease out some things. One cool thing I noticed there is that off the coast of Lima you have one of the deepest points on Earth, and if you change the scale, some of the highest points on Earth — an incredible elevation change. But this tool itself is just a test to make sure we understand how we could be applying that scale. We can then take that a little further. This is the same exact dataset, and I can start looking at a bit of hillshading. Again, this is just loading data tiles and being able to play with some of the metrics. For example, here I'm adding a colour ramp — well, I'm varying that colour ramp, like a little animation feature. This is all just testing the real-time nature of this dashboard and really seeing how we could be doing this rendering.
What's happening under the hood is that we are just grabbing tiles, and these are coming directly from an S3 server where the tiles are freely served. In one tile we have elevation data, and in the other tile we have aspect, which is essentially the same as a normal in gaming terminology. When we take those two things, we're able to combine them through Zaru with some settings — as you saw us playing with for colours and so on — and that gives us the final tile. This is happening for each tile: each one is essentially, in HTML terms, a canvas that's being rendered using regl, which is a great little library for low-level shader-based rendering. That's how we get the speed — the tiles are loaded and then rendered all in real time based on those changes. Obviously the source data doesn't change, but as I change the settings I get different outputs. Another problem that we're trying to solve is with vector graphics. I'm a big fan of Mapbox — here's an example of a Mapbox map — and there they're doing a lot of amazing stuff on the GPU. But when you start with vectors as the source information, what happens is that as some of those parcels start to get really small, it's very difficult to accurately and truthfully render them to the screen. As these vectors are zoomed out, you start to see their fidelity deteriorate as things get optimized for the web and rendered at scales they weren't meant for. This is where, with a vector-based rendering, you'd typically want to shift to a different representation — going from the block to the tract, or whatever else, up that hierarchy. We're big believers in vectors — we think they're amazing, particularly for cartography — but for data vis we're starting to see cases where raster data can be a lot more effective. The solution on the raster side is that even if you're starting with vector data, you can render it at a much more detailed level where you can still see the parcels accurately, and then we use the slippy map structure to sum things very accurately. Similar to what I was saying about each tile being divided by four, here we're thinking about it in terms of pixels. If I imagine these four pixels here: when I go up one zoom level, the five, three and two become 10; when I go up one more zoom level, these numbers add to 29, and so on. We're just truthfully representing the values under each pixel. The results are something where you don't get the same gaps — this looks very crowded, there's a lot of detail here, but we're able to truthfully render each pixel in terms of what's under it. And we can do the same thing for record-level data. If you imagine that each of these points is a record at the point level, those records fall on a pixel; as the zoom level changes, they fall on different pixels and get aggregated, and all we need is a solution that will faithfully represent what is under each of these pixels at this high level. I have all of those dots, some brown, some blue, and so that pixel, when it's rendered, needs to be brown or blue.
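As an aside, the pixel-summing rule just described — four child pixels adding up to one parent pixel, so 5 + 3 + 2 + 0 becomes 10 — can be written as a tiny numpy sketch. This is illustrative only; Zaru does this on data tiles in the browser.

```python
# Minimal sketch of the slippy-map aggregation rule: every 2x2 block of child
# pixels is summed into one parent pixel at the next zoom level out, so each
# parent pixel truthfully represents the total underneath it.
import numpy as np

def aggregate_one_zoom_level(tile: np.ndarray) -> np.ndarray:
    h, w = tile.shape
    return tile.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

child = np.array([[5, 3],
                  [2, 0]])
print(aggregate_one_zoom_level(child))   # [[10]]  -> 5 + 3 + 2 + 0
```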
We'll talk a little bit later about how we make that brown-or-blue determination. But first I wanted to show this example of a great dataset that we have for Massachusetts, which is every single parcel in the whole of Massachusetts, combined from all the different cities — about 2.5 million records. It was interesting to look at the vector-based representation on the website: this is the furthest you could zoom out, so this is a very detailed view of Boston and Cambridge. Each parcel — each record in this database — is either a single-family home or a retail parcel or something like that, so there are a number of different record types in there. But there's currently no solution we know of that could render all 2.5 million parcels faithfully at a state level, which is what the data actually represents. Just to give you a sense of what that data looks like at the detail level: that is 2.5 million unique records represented as pixels. I'll get more into how these data tiles work, if anyone's really interested in getting under the hood — I'm giving a presentation on that on Friday, if you're interested in tuning in. But basically, what we do is represent every single parcel value as a pixel: we're just encoding the number in the pixel — the colours are really just a number stored in RGB — and we use that as a representation of the actual dollar value of each parcel. With that, we can represent all that data in about 5.7 megabytes. The land use layer actually compresses a lot better, because there are a lot fewer unique values, through the magic of PNG. What that gives us is a solution where we can actually look at the entire database — 2.5 million records — and apply filters dynamically. Right now I'm just playing with the value: these are the lower-value parcels in Massachusetts, these are the higher-value parcels, and you can see some pretty distinct trends there. I can also apply record-level filtering — turning on single families, apartments and condos — and you get a very different impression; expanding that range, and so on. So it's pretty seamless. And then I can zoom in pretty close — you'll see that this is where, in a real tool, you might want to start moving into a vector-based representation to actually show the full parcels. But you can see the trends, and a fair amount of detail, across the entire state. Another interesting area we found — and this is a really great tool; if you're not familiar, I encourage you to check it out — is Global Forest Watch, a great open source tool built with Mapbox, Earth Engine and Carto. But one thing I've been noticing with a lot of these tools is that when you get really dense, interesting datasets like this — here blue is tree cover loss — it's very important that you're able to read that colour mix accurately. And one thing I've noticed is that if you look over here, for example — this is a call-out of this area — just through the layering that's happening, it looks like everything's fine in this area, with very little tree cover loss; but when you actually zoom in, you'll see it's pretty much 50-50. That's definitely not being faithfully represented, which I think can certainly be a problem, because it can lead to misunderstanding and incorrect conclusions from the data.
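Going back to the value-in-RGB idea mentioned above: the basic principle is just packing a number into the three colour channels of a PNG pixel, roughly as in this sketch. The exact encoding the real GeoPngDB tiles use (offsets, scaling, use of the alpha channel) may differ.

```python
# Hedged sketch of encoding a plain number in a pixel's RGB channels, as used
# conceptually by the data tiles described above. 24 bits -> values up to ~16.7M.
def encode_rgb(value: int):
    assert 0 <= value < 256 ** 3, "value must fit in 24 bits"
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

def decode_rgb(r: int, g: int, b: int) -> int:
    return (r << 16) | (g << 8) | b

r, g, b = encode_rgb(450_000)          # e.g. a parcel value in dollars
assert decode_rgb(r, g, b) == 450_000
print(r, g, b)
```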
Something I've noticed Esri trying to do to deal with that colour-mixing problem is some colour blending, but it should be pretty clear that this can pose issues, because obviously if you have blue and you have yellow, you get green — and they already have green. So except in very niche circumstances, I don't think that solution is going to be very helpful. The way we solve this in Zaru — and I'll go through this quickly — is: imagine that these are all values competing for these four pixels here. If I had four red and two yellow on that pixel, I could represent it as a little stack; if we imagine a slightly larger area, there's a different mix of values competing for each pixel. We have a very simple solution to resolve that, and that's just that we randomize. We randomize, and if we re-randomize, it's literally just saying: I'm going to pick whatever value is in that slot. Obviously there could be more than six values, but with the dice example I just have six. And it's literally just picking, including the gaps here — lower density is represented by fewer pixels in the stack. So you can see, as we move between those, the micro scale changes radically, but the wider scale is not really affected. I'll quickly give an example of that in a real tool, and also show how we can adjust the density representation. This is the dynamic density with that pixel mixing, and you can start to appreciate, as we play with some of these colours, how those pixel mixes happen. We can adjust colour dynamically, mouse over and actually see what that mix is, through some queries that happen under the mask within that circle. And the seed is what I was showing with the randomization: you can see that even though we're changing the randomization, impressionistically this map is not changing very much. That's very important — we don't want to see false patterns that aren't actually there. I'm now going to quickly show how this tool also combines additional datasets. This is all happening within Zaru. In this case — and this is part of a research project — we're trying to understand the relationship between park access and equity within the US; my colleague Kai will be giving a presentation on that on Friday, if you're interested. This data is all coming out of Conveyal's R5 tool, so we're understanding those walk isochrones; we can also look at transit isochrones — for example, there's a huge amount of transit in LA County — or switch to bike, and you can see that people can easily bike to all these parks, even just looking at a five-minute bike ride. So we're looking at coverage, and then combining that with who has access and who doesn't, for different needs. And the last example I'll show is a multi-criteria analysis — a great use case for pulling together multiple different considerations. We're all familiar with multi-criteria analysis; in this case we're looking at development desirability and where development should go.
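Before moving on, the seeded "dice" mixing just described can be sketched roughly like this — a simplified, hypothetical version of the idea (the real thing runs per pixel on the GPU): each pixel has a stack of candidate values plus empty slots for low density, and one slot is picked with a seed that is stable per pixel, so re-rendering does not shuffle the picture.

```python
# Hedged sketch of seeded pixel mixing, not Zaru's shader code. Each pixel's
# stack holds its candidate category values plus empty slots (None) when the
# density is low; one slot is drawn with a per-pixel deterministic seed.
import random

def mix_pixel(candidates, stack_size, seed, pixel_id):
    rng = random.Random(seed * 2_000_003 + pixel_id)   # stable per pixel and seed
    slots = list(candidates) + [None] * (stack_size - len(candidates))
    return rng.choice(slots)

# a pixel covered mostly by "brown" parcels, a bit of "blue", and some empty ground
print(mix_pixel(["brown", "brown", "brown", "blue"], stack_size=6, seed=1, pixel_id=42))
```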
The factors we're pulling into that multi-criteria analysis are proximity to built-up areas, slopes, agriculture, and a number of other factors. What we're looking to do with Zaru is, rather than making those determinations up front — before we necessarily know how they're going to influence the decision — to delay those decisions on what are somewhat arbitrary metrics. So this is the last example I'll show, but basically: this is the weighting for agriculture, for example, and if I tweak it, we can see how it influences the colour ramp. In this example, blue shows the areas that are desirable for development, red the areas that are undesirable. As I make the agricultural areas less desirable, the overall impression changes. We can also play, for example, with the weighting for the roads and the buffer distance from the roads. All of that is happening in real time — it's all been pre-calculated as interim values, and we're able to tweak it in real time in this dashboard. What that leads to is being able to understand future growth. I'll move through this quickly, but basically this is the existing city limits, this is the next five years of growth, and then the five years after that; and as I play with these different metrics, you can see how that changes those growth boundaries. It ends up actually exceeding what our predictions are, but we have a way of calculating the breaks and pulling that back. The impression keeps changing based on these different settings, so we're delaying all those decisions and then using them to really inform and understand the dynamics of how growth might work in this area. So, to sum up what Zaru is: it's a proof of concept, at this stage, for a different way of using GIS data. It's a demonstration of the power of WebGL for these kinds of real-time geospatial visualizations. We're big fans of raster data tiles — again, I'll be talking about that on Friday, hopefully selling you on those. It's a promising solution for handling vast quantities of spatial data seamlessly, for dealing with and understanding uncertainty, and for delaying decisions; for viewing and browsing data from disparate sources — you can pull in data from anywhere into this platform and combine it on the fly on the front end; and we think it's a really interesting way to combine those data sources dynamically, with some tools to provide new insights. And we feel it's ripe for open source collaboration. Just to be clear about what Zaru is not: it's not a tool you can use out of the box currently. It's definitely not a replacement for GIS tools like QGIS or cartographic tools like Mapbox. Unfortunately, it's not being super actively developed at the moment — we're pushing it on select projects — but we wanted to put it out there because we feel there could be a number of different use cases for some of these solutions. And we're not planning to take this down the VC route; we don't think it would be a very profitable tool — it's very cheap, very open. So, yeah, that's why we wanted to talk about it here. If you're interested in getting involved, check out the repo and the demos, and star the repo if you're so inclined.
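Going back to the multi-criteria example for a moment: what the sliders are doing amounts to a weighted sum of pre-computed, normalised factor rasters, with only the weights changing at interaction time — roughly like this numpy sketch. The factor names and numbers are invented for illustration.

```python
# Hedged sketch of real-time multi-criteria weighting: each factor is a
# pre-computed raster normalised to 0..1 (0 = undesirable, 1 = desirable);
# only the user-controlled weights change when a slider moves.
import numpy as np

def suitability(factors: dict, weights: dict) -> np.ndarray:
    total = sum(weights.values())
    return sum(weights[name] * factors[name] for name in factors) / total

factors = {
    "near_roads":   np.array([[0.9, 0.4], [0.2, 0.1]]),
    "gentle_slope": np.array([[0.8, 0.7], [0.3, 0.6]]),
    "not_farmland": np.array([[0.1, 0.9], [1.0, 0.5]]),
}
print(suitability(factors, {"near_roads": 2.0, "gentle_slope": 1.0, "not_farmland": 1.0}))
```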
Spread the word about this on social media, and please reach out on GitHub — the link is right there, but if you just look up Sasaki and Zaru you should find it. A quick shout-out to the Sasaki team — Eric Engberg, Kajganal and Alikhan Mohamed — who have already helped develop this and push it on projects, and also to some of the open source projects — Leaflet, regl, JMP and Mapbox — that we used to develop it. And that's that. Can we jump over to any questions? Yes, okay. Thank you, Ken. Are there questions from the audience? Please type them in Venueless. There is one first question already: are all the attributes shown in the property maps stored in separate images as pixel values? That's an excellent question — yes, that's exactly how it works. Each field gets an image, and — that's something I'll be talking more about on Friday, so I apologize that I had to gloss over that aspect — but yes, all of the data is encoded in those images. There are actually two different types of image encoding: one is for record-level lookups, which is the one I showed, and the other is a more straightforward geospatial representation. Okay, more questions. I can imagine there are people curious about the more technical aspects as well. So, for data preparation that goes into the Zaru application: what has to be done? Very often when you work with geospatial data, preparing the data is at least as time-consuming as running the analysis, so the question is what has to be done to prepare data to get it into this application. Yeah, that's an excellent question. On the data side, ideally I'd love to see more people producing data tiles that are ready for this kind of thing. I've noticed that Esri are starting to use data tiles a little more in their tools — there's an open source encoding called LERC — and for their own tools they're starting to serve those up as data tiles, so I think there is some movement happening; and obviously there are Mapzen's terrarium tiles, as they call them, which are all pre-prepared. But in terms of taking data and prepping it, there's definitely still a lot of work to be done on the tooling front. We've done a little bit of work — we have a QGIS plugin that we can use for exporting data in this format, as tiles with these encodings. The idea behind Zaru is that you can take data tiles in any format and remix them into meaningful visualizations, but for the data tiles themselves, I think there's huge potential for them to become a primary way of sharing a lot of geospatial information. Yes, another question: do you need to know about WebGL shaders, etc., to visualize data, or is it abstracted away into something similar to Carto's CSS? That is such a good question — not at this point. Zaru is still at the proof-of-concept level, so there's very little of that happening. There are a few things that are abstracted into settings and that kind of thing, but it's still at the level where there's a fair amount of shader code being written and customized for each use case.
So yeah, it's such a good question, though — how you would take that and abstract it to something like CSS would be a fairly large lift, but I absolutely think that's the way you'd probably want to go. Yes, and then a similar question to what we discussed earlier: what data format is used in Zaru? Yeah, a couple of different formats. The Mapzen tiles — the first ones I showed, that massive global dataset they produced — use what they call the terrarium tiles; well, that actually just refers to the elevation tiles they have. So that's one format, also encoding numbers within the PNG space. We've built on that a little bit, because their format is really only suited to elevation data, so we have a format called GeoPngDB — it doesn't really roll off the tongue — and we'll be presenting more on that on Friday. And then there's also LERC — I only have a very basic proof of concept using it, but that format is also something you can consume. So really, the idea is that you can represent the data in whatever format and then combine it within Zaru. Okay, and then one last question: if I understand correctly, Zaru loads some raster tiles and accumulates them in one layer, right? Yeah — well, each tile remains an individual slippy map tile. I know there are some other solutions which essentially take all those tiles, render them to another image which then goes to the GPU, so you're rendering the whole screen at once. In this solution, each individual tile is rendered using regl. Okay, thank you so much, Ken. And with that we switch over to the next speaker. Thank you so much for the opportunity. Thanks everyone. Bye-bye.
Zaru is a new system for creating real-time spatial dashboards. Zaru uses video-gaming techniques and a novel method of encoding data in images to enable real-time compositing and visualization of potentially giant data sets. This solution was initially developed by design firm Sasaki as a better way to understand the relationship between urban amenities and the people who can access them. The platform is powerful and scalable, but requires no back-end infrastructure to run. As a solution for data sharing and visualization it has broad potential and room for creativity - making it ideal for the FOSS community. Zaru grew out of a need for easier access to large datasets and the desire to find a better way to query and visualize them. We believe interactive models and visualizations can help provide deeper understanding of complex relationships and lead to better-informed decisions. Smooth, real-time feedback lets us keep all assumptions fluid and allow users to understand causal relationships intuitively. By using numeric datasets that can account for probabilities, but also allowing arbitrary inputs to behave as “sliders”, we can play out these scenarios and quickly explore a dense set of possible outcomes. Zaru can be used to support decisions around urban growth scenarios, environmental threat analysis, site selection for development and many other geospatial analyses. Zaru can also be an effective web-based storytelling tool. The underlying technologies borrow from the gaming community, but are very close to standard geospatial practices. We use the WGS 84 web map tile schema and encode data in PNG format (GeoPngDB). Data tiles are loaded exactly as image tiles would be for an aerial or street map, but by keeping the data in raw format, we are able to manipulate the visualization in real-time. This allows us to apply filters and apply color schemes to tease out patterns instantly. In addition to raster datasets, Zaru supports record-based geospatial data using a novel combination of spatial and non-spatial encoded images. This allows specific queries to be run over millions of records in real-time. This presentation will showcase Zaru’s current capabilities using proofs of concept and real-world case studies. We hope to begin a dialog about what’s possible using these techniques and potentially inspire collaborations or spin-offs from these solutions.
10.5446/57255 (DOI)
Hello, Charlotte. Good morning. Hello, good morning. So our first talk today is "A tool for machine-learning-based dasymetric mapping approaches in GRASS GIS", presented by Charlotte Fraser — I hope I'm pronouncing that correctly. Charlotte is a researcher at the Université Libre de Bruxelles, with a background in geography, like me, and specialized in remote sensing and GIS. So Charlotte, the stage is all yours for your presentation. Thanks. Thank you very much. So hi, good morning everyone — or good afternoon or good evening, depending on where you are — and thank you for coming to my talk. As just mentioned, I will talk about a tool for machine-learning-based dasymetric mapping approaches in GRASS GIS, and the tool specifically is called r.area.createweight. First, a bit of context. In recent decades there has been significant progress in high-resolution Earth observation data, which makes geospatial data available at increasing resolutions. This, in turn, drives the analysis of spatial data at higher resolutions. Socioeconomic and demographic data, however, while generally collected at the individual or household level, tends to be aggregated at coarse scales such as the administrative unit, which of course is too coarse for high-resolution spatial analysis. This leads us to spatially disaggregate data to finer resolutions. Often data is disaggregated uniformly, but in the case of human population mapping, for example, this is unlikely to represent the spatial heterogeneity of human activity. As such, interest has grown in dasymetric mapping, which is an approach used to disaggregate data non-uniformly from a coarse spatial resolution to a finer level of detail. It assumes that knowledge of an area, or proxy indicators, can be used to produce weights at a higher spatial resolution to unequally spatially disaggregate, or reallocate, the data, and therefore create a more realistic, finer-scale gridded layer of disaggregated data — for example, population data. In GRASS GIS there is already an add-on called v.area.weigh, which carries out dasymetric mapping: it disaggregates data from a coarse scale to a finer scale using weights. While this tool is very handy, the user has to provide a pre-prepared weighting layer, which brings us to the question: how can we determine these weights? Determining these weights based on a set of ancillary geo-information data can be quite a challenge. Here I'll take the example of human population counts. For human population counts, land cover and land use maps are typically used, but often the weights are subjectively determined by an expert, who will attribute higher weights to urban areas, slightly lower weights to suburban or rural areas, and a weight of zero to forest areas or water bodies — because this corresponds to our understanding that more people are found in urban areas than in forests. Of course, for other variables — for example, if you are mapping populations of wild animals — you would be more likely to invert this weighting. More recently, however, research has taken advantage of the power and efficiency of machine learning algorithms to create weighting layers for dasymetric mapping without any a priori knowledge. For example, the WorldPop project, which works on population mapping, developed an approach that uses the random forest algorithm as a flexible means to predict the weights for the reallocation of population into grid layers.
This has been found to improve upon existing freely available population mapping approaches. A similar approach to that of WorldPop was developed by Grippa et al.; the references can be found in the publication that comes with this talk. This approach also uses the random forest algorithm to create a weighting layer. The code is openly and freely accessible, which allows the approach to be reproduced, but it was designed for specific experiments and may not fit the needs of other scientists. Furthermore, it requires an understanding of computer code, in this case Python and R, which makes it less accessible to non-programmer users. Part of this approach has already been implemented in the GRASS GIS add-on r.zonal.classes, which performs the zonal extraction of class proportions from categorical raster data; I'll come back to this a bit later. But more needed to be done to make the rest of the approach more generic, applicable in more cases, and accessible to more users, and this is why another add-on was developed. This is where we get to the r.area.createweight add-on. The tool uses the random forest regressor algorithm to create a weighting layer which can be used for dasymetric mapping. It is a ready-to-use tool, accessible to non-programmer users via GRASS GIS. So how does it work? Of course, the user does have to provide some data sets. After preprocessing of these data sets, the tool calculates statistics on them. The statistics are calculated at two different levels. On one hand, they are calculated at the level of each spatial unit, for example administrative units, which is the coarse scale; this is the information used to train the random forest model. The statistics are also calculated at the level of the output higher resolution grid, so for each pixel of the output grid. These grid-level statistics are then used within the trained random forest model to predict weights at the grid level and therefore create a weighting layer, which can then be the input for dasymetric mapping. In more detail, in its simplest form, the add-on requires three types of input information. First, a vector of spatial units that contains, in its attribute table, information on the variable to be disaggregated. Within the random forest algorithm this is the response variable; it could be, for example, population count per administrative unit. Second, the user must provide raster data sets with information related to the response variable, which can be used to predict weights in order to disaggregate it. The add-on requires at least one categorical raster, which we also call a base map; this could be a land cover map for population mapping, for example. There is also the option to provide a second base map, so the user could provide both a land cover and a land use map, and it is optionally possible to provide a map of continuous values, for example the distance to roads. Lastly, the user must define the desired pixel size for the output weighted grid.
Obviously, this pixel size should be coarser than the input base maps, because it makes little sense to make predictions at an even finer resolution than the information used for prediction. In a second step, the data sets are processed in order to be able to correctly extract the statistics in the third step. I will not go into detail about this processing, but I just want to mention that it is also in this step that a template output grid is created, using the user-defined spatial resolution and the spatial coverage of the spatial units provided. In a third step, statistics are calculated both at the level of the spatial units, which you can see here on the left and in the middle, and these are used to train the random forest model; statistics are also calculated at the level of the output grid, in order to predict weights. Let us first look at the statistics calculated for the raster maps at both scales. For each input categorical raster, for example land cover or land use, the proportion of classes present is calculated both at the level of the spatial unit (the proportion of each land cover class in each spatial unit) and for each grid cell of the output grid. These proportions are calculated using the GRASS GIS r.zonal.classes add-on that I mentioned earlier; r.area.createweight implements the random forest approach, and together they create the weighting map for dasymetric mapping. For each continuous raster, if it has been included by the user, it is the average that is calculated. In the figure you see only a categorical map, but the statistics are calculated for each of the rasters input by the user. These statistics are what the random forest model uses to predict the response variable; they are what we call the features, on which the predictions are made. To train the random forest model, it is also necessary to have known information on the response variable, the variable to be predicted, and this is provided in the attributes of the spatial units layer. In fact, within r.area.createweight we calculate the log of the density of the variable of interest, for example the log of population density; the log is used because previous research has suggested that this improves the quality of the weight prediction. Then we get to the random forest model. The statistics calculated at the level of the spatial units are used to train and fit the model: for each spatial unit we have the expected value of the response variable, the expected log population density for example, and the features, such as the proportions of land use classes, that can be used to predict these values. Once the random forest model has been trained and fitted, the features, or statistics, calculated at the grid cell level are input into the model, which can then make predictions on the response variable for each grid cell of the output grid. Here we are still predicting the log of the variable density, so these log values are then back-transformed to obtain the variable density, for example population density. This produces the final weighted grid, which can then be used as an input for the existing GRASS GIS tool I mentioned earlier, v.area.weigh, to finally disaggregate the variable of interest, for example to predict the population count per grid cell.
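To make the mechanics of this weighting step concrete, here is a minimal sketch of the random-forest dasymetric weighting idea just described, written with scikit-learn and NumPy. It is not the r.area.createweight code itself: the arrays and class names are hypothetical, and the real add-on additionally handles GRASS raster input and output, feature selection and parameter tuning.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-unit statistics (the "features"): proportions of, say,
# [built-up, vegetation, water] inside each administrative unit.
X_units = np.array([
    [0.60, 0.30, 0.10],
    [0.20, 0.70, 0.10],
    [0.05, 0.80, 0.15],
    [0.45, 0.50, 0.05],
    [0.10, 0.60, 0.30],
])
pop = np.array([12000.0, 3000.0, 400.0, 8000.0, 900.0])  # known counts per unit
area_km2 = np.array([4.0, 6.0, 5.0, 3.0, 7.0])

# Train on the log of the density, as described in the talk.
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_units, np.log(pop / area_km2))
print("out-of-bag score:", rf.oob_score_)

# Predict a weight for every output grid cell from the same kind of features,
# then back-transform from log density to density.
X_cells = np.array([[0.90, 0.10, 0.00],
                    [0.10, 0.85, 0.05],
                    [0.00, 0.20, 0.80]])
weights = np.exp(rf.predict(X_cells))

# Dasymetric reallocation within one unit (what v.area.weigh then does):
# each cell receives a share of the unit total proportional to its weight.
unit_total = 12000.0
print(unit_total * weights / weights.sum())
```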
Just a few words on the random forest model, specifically within r.area.createweight. The random forest model is a non-parametric supervised machine learning algorithm; it is relatively resistant to overfitting and has a strong degree of generalisability. The tool uses the Python scikit-learn machine learning library to implement random forest. Within this, the importance of the different features, the features used for prediction such as the land cover class proportions, is evaluated, and by default the features with little or no importance are removed; there is also the option to keep all the features, no matter their importance. In addition, a range of random forest parameters is tested and optimised for the model. A set of default parameters is provided, but more experienced users can define their own set of parameters to use or to test. The model itself is assessed using the out-of-bag accuracy, a pseudo-independent validation measure known to be reliable for assessing model performance. In addition to the output weighting layer, r.area.createweight provides a log file with details on the random forest model, and a graph of feature importances. There is more detail in the publication if you are interested. So here is an example on the disaggregation of population count for the city of Ouagadougou. The data originates from the REACT and MAUPP projects. As input layers we have the administrative units containing the population count for each unit, plus two base maps: a land cover map at 0.5 metre resolution and a land use map at 5 metre resolution, which provide the features for the random forest model, the features used to predict weights. A 100 metre output tile size was specified. At the bottom you can see the output weighted grid. This is the result when simply using the default parameters of the r.area.createweight tool, so with no programming knowledge and very few inputs. On the graph on the right, which is also output by r.area.createweight, we can see the importance of each feature; it shows that the land cover class "low buildings" was the most important in determining the weights. There are of course a few methodological considerations to take into account. One aspect is the spatial coverage of the data sets. The best practice, for a more robust model, is for the spatial units to be entirely covered by the ancillary data sets. Firstly, this is because weights can only be predicted for output grid cells covered by all data layers; in this example there will be no predicted value, for instance, in the north-east areas not covered by land cover. Also, if the spatial units are not completely covered, the random forest algorithm may be missing important data for training the model. There can, however, be cases where missing data is acceptable, but this is up to the user to decide. In the example shown here, the aim is to disaggregate population count; take the large, mainly uncovered administrative unit in the north-east, which you can see in the red circle.
Firstly, weights will only be created for the small covered area in this unit, which means that the population of the whole unit will be redistributed only over that small covered area with weights. This can strongly overestimate population numbers, but in some cases, like this one, expert knowledge tells us that the uncovered area actually has sparse or no population, so the population estimates will approximate what is actually there. Another aspect to keep in mind is that this approach assumes there is a relationship between the ancillary data and the variable being predicted, and this relationship might not be very strong, so it is important to think about the data being used. As with most models, the results depend strongly on the quality of the input data: if you put rubbish in, you will get rubbish out. Finally, there is also the assumption that the relationship between the features and the response variable stays constant at both the scale of the spatial unit and the scale of the output grid. In reality this is unlikely to be true, and there is research showing that different features can be important at different spatial scales. Lastly, it is important to remember that random forest is a predictive modelling tool, and it can only provide an approximation of reality, although this does not mean the model cannot be useful. As we all know, all models are wrong, but some models are useful. Thank you very much for listening, and I am happy to take your questions. Thank you, Charlotte, excellent talk. We have some questions; I will put them at the bottom of the screen. The first question is: what calculation speeds can one expect for the add-on? That really depends. It depends on the spatial resolution of the data sets you are using, on their spatial coverage, and on whether you are defining your own parameters for random forest, and how many parameters you input. For a light, simple data set it can be quite fast, a few minutes if it is very small, but with much denser data sets it can take hours. It also depends on your processing power; there is an option in the add-on to use more CPU cores if you have them available, and that can speed things up as well. So there are a lot of factors affecting the calculation speed. The second question is: how do you validate and estimate the accuracy of the machine learning population estimates? It is a very good question. The model itself produces an out-of-bag error, or score, which gives you an idea of how well the model predicts the values; explaining it fully would need a longer discussion about random forest and bootstrap samples. There is also a grid search using k-fold cross-validation, so scores are calculated at different steps, but the final model is assessed with the out-of-bag score, which tests the model using the part of the data set that has not been used to train it. I won't have time to go into more detail, but feel free to send me an email. One thing to take into account with the out-of-bag score is that it applies to the model trained on the input data at the spatial unit level. As we are predicting at a different spatial scale than the data used to train the model, you have to be careful when using the out-of-bag error to validate the model.
It needs to be taken into account, but it is the internal validation of the random forest that produces the accuracy measure. The next question is along the same lines: is the tool able to quantify the uncertainty of the final product based on the input data? Again, this comes back to the out-of-bag score, which is the only accuracy measure provided by the tool. If you are looking for spatial uncertainty, there are no other uncertainty or accuracy measures included in the tool. Okay, so Charlotte, thank you for your talk. In a few minutes we will start the second presentation, so you have a few minutes left; if you want, you can share a message about your presentation or invite people to learn more about your project. Okay, well, thank you already for being here and listening to me. The tool is not currently available in GRASS GIS, but with the next release of GRASS GIS it will be included in the add-ons and online; currently it is not yet available. For more information, you can look at the paper published in relation to this FOSS4G talk, if you are interested in reading more about it. And as soon as the tool is published, I invite you to give it a go. Thank you very much. Thank you, Charlotte. Bye. Bye. So in a few minutes we will start the second presentation; we will just finalise some details, and in two minutes we will start. Thank you.
Socio-economic and demographic data is usually collected at the individual or household level, and numbers are then aggregated and released at the level of administrative units. The spatial extent of many phenomena, however, do not correspond to any existing administrative limits, making them difficult to exploit. Additionally, geospatial information has started to be available at more and more detailed spatial resolutions, thanks to progress made using high-resolution EO data. Consequently, scientists often aim to perform spatial analyses at a fine resolution, but face issues related to the fact that the spatial resolution of administrative units, on which socio-economic and demographic data are aggregated, is too coarse and does not fit their needs. Dasymetric mapping can be used to create a more meaningful gridded layer of disaggregated socio-economic data, but the major challenge resides in determining the spatial distribution of a variable within aggregated spatial units. The dasymetric mapping approach has been made more accessible with an existing GRASS GIS addon “v.area.weigh" (Metz, Grass Development Team, 2013), available on the official GRASS GIS add-on repository. It provides a tool for dasymetric mapping, however requires that the user provide their own weighted layer. Grippa et al. (2019) published a replicable approach that implements the random forest algorithm for the creation of a weighting layer for dasymetric mapping with the related computer code. While this code allows replicating the method, it is very specific to the experiments presented in the paper and may not fit the needs of other scientists. Moreover, since it is computer code, potential users not skilled enough in Python and R programming could be reluctant to use it. An important step of the approach has already been implemented in a GRASS GIS add-on, “r.zonal.classes” (Grippa, Grass Development Team, 2019), which consists of the zonal extraction of class proportions from categorical raster data. The tool presented today completes the implementation of this approach, in a more generic and user-friendly manner. To our knowledge, there is no other existing open-source and ready-to-use tool, with a Graphical User Interface (GUI) for creation of dasymetric mapping weighting layers, using a ML approach.
10.5446/50113 (DOI)
Okay, cool. So Victor, he's here; maybe he joined us, or we need to see where the meeting is. I don't do Zoom, that's for sure. Okay, cool, Victor is here with us, so I guess we can start. Hi everybody, welcome to the Volto add-ons training, the second edition. I'm Tiberiu and I'm a developer with Eau de Web Romania. I've been working with Volto since 2019, something like that, and before that I was a Python developer; since around 2003 I've been working with Plone, so I have quite a history with Plone. And Victor, do you want to introduce yourself? Yes, can you hear me? Yeah, I'm Victor, and I've been in the community since, I don't remember when. I've been developing Volto since the first iteration, since Plone-React, and doing projects with Volto since that first iteration four or five years ago. I'm also the release manager of Volto. And yeah, I guess that's it. Okay, so this training is more or less thought of as an advanced Volto training. There are other introductory trainings for React and for Volto, and this one is the advanced one. In it we look at how to develop a simple data table block, but through that tutorial we will actually look at best practices, because this training was developed based on the experience we have working with our biggest client, the EEA, the European Environment Agency, and in that work we've come up with some best practices that are shared in this training. A lot of this experience went into Volto core already, and this training is a way to share it and make you really, really productive with Volto. You can build quite advanced things quite fast with Volto, and we'll teach you how to do that. In the process we will build this. I'm pretty sure you're already familiar with the Volto interface; in case you're not, it's an interface on top of Plone, all running in React as a single page application. We are developing a data table block where we're going to be able to pick a CSV file that I've already uploaded into the system, like this. That file will be read over the network into a table and then displayed. We will have customisation possibilities, like some styling for the table, and that's really, really fast to implement. Then, and this is the most interesting part, we'll be able to customise how the data table is displayed: we'll be able to pick, for example, some of the columns that are in that file, apply formatting, and pick templates for it. When we save it, it's going to display like this in the view. That's going to be the end point of this training. By that time we will have gone through the process of bootstrapping Volto, bootstrapping an add-on into that Volto project, and then starting development. This training is already published at training.plone.org. I think this address will change; let's see, no, not yet, so in principle this page may move, but in any case you can access it from the main page: go to Volto Add-ons Development, like this. Based on last year's experience and what I've found out since then, it is pretty hard to run this as a hands-on training, because I don't have any immediate feedback from you guys watching.
So in many cases it is possible to regard this training as a walkthrough. I will walk you through the published training and give comments on what we see, and we can discuss; you can raise questions, and if you want, you can try to follow along. From my perspective, the biggest challenges come in the first steps, where it's really easy to mess up a path, and if it's a new environment you don't really know much about it, so every error, no matter how simple, is actually quite difficult to understand. So I think the introduction, the bootstrapping of the Volto add-on, should be a hands-on process, and we can stop and dedicate time to that. After that we will go through the training more quickly, and I hope that by the end of this training, which will happen tomorrow, we will have finished early, so that we can really try to develop new things based on what we have already built. So let's make sure that we get Volto running on your computer, and the add-on that we're building running on your computer, so that you have time today or tomorrow morning to play with it and gain actual experience. So far so good? Any comments, any questions? No? Okay, so I'm going to share my screen; I mean, I'm already sharing it, but I'll show you my terminal. I have a tendency to switch really fast between screens, so if you happen to be confused about what I'm editing or where, just let me know and I will try to clarify. Okay, so let me stop everything, just to make sure. If we go in here, I've already shown you the product, we have here our prerequisites, and this is the Docker command to quickly bring up a Plone database, a Plone service. It uses Docker. If you already have Plone running with plone.restapi, you're fine, you're good to go; otherwise, and I recommend that you do this for the training, just copy that command, paste it into the terminal, and it will start quite fast. Okay. Once it is running, it will automatically create a website. So if we go to Zope on port 8080, we already have a website created, and this website already has the Volto integration. If we log in with admin/admin and go into the site setup, under add-ons, plone.restapi is already installed, along with the plone.volto add-on, which takes care of the Volto integration. So now we have a Plone running. If you want to stop, or if you have problems or difficulties, just let me know; we can stop and look at the problems before the next thing. And the next thing to do is to bootstrap a new Volto project; we have a command for that. One piece of advice while we wait for it to run: I have just run an npm install in global mode. If you run the same command on your system and you get a prompt for sudo rights, or an error that you don't have sufficient rights to install the package, I would say you're doing something wrong, in the sense that when doing development we want to use local installations, and the way to do that is to use a Node version manager, for example, as recommended on the docs.voltocms.com website.
A Node version manager allows you to quickly switch between Node versions and install them really fast, and it creates a local, user-level installation of Node, so you don't need system permissions to install Node packages. This is even better because, if you installed packages into the system Node, you might break some system dependency; with a Node version manager it is a lot safer. Now, Yeoman, the yo command, is a scaffolding tool; we use it to generate a new Volto project. Basically we have some files that act as templates, and they are copied into the Volto project. So the next step is to install the Volto generator. I'm doing this for the benefit of the viewers who don't have a lot of experience with Volto yet; this step should be clear, but in case you've already started developing with Volto and you know this, congratulations, but have a little bit of patience. Okay, so now we can bootstrap our Volto project. A quick introduction to what bootstrapping is, and why Volto needs it: Zope, for example, runs as an application server; we boot Zope, it runs Plone and so on, but it's a centralised installation. With Volto things are a little bit different: we have to create a customised version for each project, and Volto itself becomes not a standalone application but a library. That library is used by our Volto project, and the Volto project is a standalone Node.js application that uses Volto as a library. Okay, so here in the prompt it asks: should we use add-ons? Choose false, it says in the training, so that's what we want to do. Tiberiu, I see your Chrome; you should switch to the terminal. I'll restart the screen sharing, because I'm on my other screen. Okay, share screen, like this. Do you see my terminal? Yes. Hold on a second, I'm going to bring the browser to the same desktop so that we can switch between them. Nice. You see my terminal, right? Okay. So the bootstrapping created the Volto project. Remember, our command was: yo, initialise using the @plone/volto generator template, and create a new thing that we called volto-tutorial-project. So now I can change directory into the Volto project and run yarn start, which starts Volto. If you don't already have a fast computer and you want to do a lot of Volto development, it's a good idea to get one. The best ones are desktop computers, but there are reports that the Apple M1 machines are quite fast with Volto and this startup, the boot time. Okay. So now, in the browser, I can go to localhost on port 3000, where Volto runs, and I can see that Volto is running and has started, and if I click to add a new page I get the usual Volto blocks. Okay. So now that the Volto project is bootstrapped, we have to create the add-on, and we do that following the instructions in the tutorial. It says to do it in the Volto project, that is, the folder we've just created, so I will copy this command, stopping Volto.
I will run the command yo @plone/volto:addon, so it's using the same generator, but a sub-template inside that generator. It asks me for a name, and the convention is to use the plone-collective namespace in case you're doing an open source add-on. It's a good idea to contribute as many Volto add-ons as possible, because Volto is great and it needs our love. I'm going to say volto-datatable-tutorial; let me just confirm, yes, volto-datatable-tutorial, that should be the name. So volto-datatable-tutorial, that's my add-on name, and it created some files, it says so here; we're going to look at them right away. A Volto add-on is actually a Node package; there's almost nothing fancy about it, and you develop it as you would any other Node package. Right now we are in the Volto project folder; these are the files that are necessary to run Volto. Our add-on has been created in the src folder. If we go to src/addons, we only have one add-on, our add-on, so we go to volto-datatable-tutorial and look at the structure. We have a Babel config, a Makefile, package.json and src; the package.json and the src folder are the most important things to have in an add-on. If we look at the package.json, we see a regular Node package, with a name and the main entry, the usual requirements for a Node package, but the main entry is really, really important for Volto, because the main, which is src/index.js, is executed by Volto when Volto starts. Inside, already generated, we have this applyConfig function. It's a bit like configure.zcml in Plone: we use it to receive the configuration and adjust it. We can do anything with this config object that we receive here, and we return that configuration after it has been mutated; you can even create a new copy of it or come up with something else, but usually we just mutate this configuration object. Then we export that function as the default export from this module. We can add multiple such functions in here, so I can declare, for example, installSomeOptionalDependency, and that will again be a function that looks just like the one above; we always have to return the config. And of course we can export it not as the default but by name, so it can be imported by name. Now, just having this add-on generated doesn't mean that it's already loaded inside Volto. We have to go back to the Volto project, right here, and edit package.json. Inside its addons key, we have to declare that we want to load our add-on, and it should be the Node.js package name: @plone-collective/volto-datatable-tutorial. So right here we have not the path where the add-on lives, but the Node.js package name.
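For reference, a minimal src/index.js for an add-on generated this way looks roughly like the following; this is a sketch in line with what the speaker describes, and the extra named export is the optional loader mentioned above, with an illustrative name.

```js
// src/index.js of the add-on, executed by Volto at startup
const applyConfig = (config) => {
  // mutate the configuration registry here (blocks, settings, routes, ...)
  return config;
};

// An optional, named configuration loader ("profile"); the name is just an example
const installSomeOptionalDependency = (config) => {
  // extra configuration that projects can opt into
  return config;
};

export { installSomeOptionalDependency };
export default applyConfig;
```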
Okay. There are quite a lot of intricacies to packaging for JavaScript, and even after so much time working with it, and digging into quite a lot of problems and situations, it is still hard to produce JavaScript packages and have them integrated. There are always questions: should I transpile my package? Should I package it as CommonJS, or as a browser package, or whatever? Can I export ES modules, or should I export old-style CommonJS files, and so on? Fortunately, Volto makes this process easy for us. With Volto you don't have to transpile your add-on: you just ship it as source, and Volto deals with the transpilation. That means that when developing an add-on, if your add-on depends on some other third-party packages, the Volto project has to become a monorepo. In other words, Yarn, which is the package manager we use with Volto, has to know that your add-on path is a workspace, something that will be considered a location to look for dependencies and included when installing the whole system. One possible way of dealing with the whole development process is to use missdev, that is mrs-developer, and, sorry, this should be explained here in the tutorial. Okay, so let's go on with the tutorial. Now that we have the scaffolding and our add-on created, we need to edit the jsconfig.json file, which we find inside the volto-tutorial-project. This file instructs our compiler, or the overall build system; it allows us to create aliases, to say that this package lives at this location, so that we can switch between having the add-on installed in node_modules, as a released, distributed package, or as a developed package that lives here in the addons folder. You can use missdev to manage that process, but if we don't use it, we have to come here and edit the jsconfig.json file ourselves. So, from here, basically we have to say, and I'm just pasting, that this JavaScript package name, @plone-collective/volto-datatable-tutorial, lives at this location. You notice that this location starts with addons, because the baseUrl, listed lower down, serves as the root, so it effectively becomes src/addons and so on. Okay, so now let's check: we have listed the add-on in the addons key, and we have listed the add-on path in the jsconfig. Now when we start Volto it shouldn't complain, but something is wrong. Is anybody following the steps? Did you encounter problems? Do you want to stop? Looking at the Slack conference channel, Victor mentions that there is a certain naming convention which we didn't really follow at the beginning, but which is now a recommendation: the add-on should be named in a particular way. So, plone-collective namespace, use it or not, that's your choice, but the add-on name should be like volto-xxx-block or volto-xxx-widget; it would be something like volto-datatable-block, or volto-grid-block, or whatever. And of course there are keywords you should use so that Volto add-ons become more visible.
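Before moving on, here is roughly what the jsconfig.json alias described above ends up looking like; this is a sketch, and depending on the generator version the path may point at the add-on folder or at its src subfolder.

```json
{
  "compilerOptions": {
    "baseUrl": "src",
    "paths": {
      "@plone-collective/volto-datatable-tutorial": [
        "addons/volto-datatable-tutorial/src"
      ]
    }
  }
}
```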
Okay, so Volto started, and let me open a new terminal. I'm using tmux for terminal multiplexing, so I'm going to switch between the first terminal, which I'll name yarn start; you can see it here at the bottom. The second terminal, well, that's just a convention, that's how I work, and the third terminal, which I'll bring here to the second position, we'll say is for editing the tutorial. Okay, I'm going to start an editor here and edit the only JavaScript module we have right now, just the index.js. With Volto started, just to confirm that our add-on is loaded, I will console.log the config. Now if I go into Volto, I'm going to open the developer console and see what we have here. I don't know why I have those errors, but we have to make sure that our add-on is properly loaded. Okay, let's try again. So, this is the configuration registry for Volto, and let me tell you, as a Plone developer, this is awesome. The fact that you are able to inspect the configuration of the system is really, really great, because it makes it possible to understand and debug problems; this registry is looked up and used in a lot of places, and it is a lot easier to debug problems and situations if you can see it. Okay, we won't look at the configuration registry too much right now; we don't know it yet, but our add-on is loaded. What else do we need to do? Yeah, there's the optional part; I'll just mention it now, because it's not the time, but I really, really recommend that you use mrs-developer when developing Volto add-ons, because it makes it easier to switch between the add-on life cycle states, production mode, development mode and so on. You'll probably also want to collaborate with other people on open source packages, and with this tool you'll be able to bring them into the system and start working on them yourself. So, the tutorial recommends that we add the add-on as a workspace. That means we inform Yarn that it needs to treat the add-on location as a workspace, so that our project becomes a monorepo, and that is needed to be able to add dependencies to this package. Okay, so we go back into the root of the Volto project, I'm here, edit package.json, and add the add-on path: src/addons, and then the add-on folder, volto-datatable-tutorial; I was missing the volto part here, okay. With this change, if we go back into the Volto project root and run yarn workspaces info, it complains that it can only be run in private projects, and that is because, if we look in the root package.json, private is false; we need to set it to true. That means we are not able to publish this Volto project to npm, but that's not a tragedy, because we wouldn't publish it anyway. Okay, so now if we run the command again, we see our package has been picked up: it has a proper location and so on. Hold on, I think I should follow the tutorial and talk a little bit about what you can do with an add-on, and we can look at the configuration for that purpose. But while we are still here, let's add the dependency to our add-on, which we're going to use later: yarn workspace, then the add-on name, @plone-collective/volto-datatable-tutorial, and I'm going to use the command history, then add, and the new dependency, which is papaparse.
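Putting these steps together, the relevant parts of the project's package.json end up looking roughly like this (a sketch showing only the keys discussed here), and the dependency is then added with a command along the lines of `yarn workspace @plone-collective/volto-datatable-tutorial add papaparse`.

```json
{
  "private": true,
  "workspaces": [
    "src/addons/volto-datatable-tutorial"
  ],
  "addons": [
    "@plone-collective/volto-datatable-tutorial"
  ]
}
```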
Okay, now, if you've already used Yarn to add dependencies in a Node.js package, you know that there are two types of dependencies, or more, but let's keep it to two: development dependencies and, let's say, real or production dependencies. From this point, from the add word here, we can add the other Yarn switches; for example, if I wanted to add a development package, I would use the -D switch here. So basically the yarn workspace part becomes a prefix for the Yarn command that follows, and that will add the papaparse dependency inside the datatable tutorial add-on. Okay, so I'm going to do that now, and we can look at the result. Back in my datatable tutorial folder, which is the add-on folder, if I look in the package.json, it was updated with the papaparse dependency. Now, when developing add-ons, you will sometimes see node_modules folders created in the add-on location. JavaScript packaging works in such a way that you can have multiple versions of the same library, which is something I don't know how to achieve with Python, but who knows. If Yarn detects requirements pointing to multiple versions of the same library, it will stash that version inside the package that required it, so that there is no conflict. Otherwise, Yarn will hoist, meaning it lifts all the dependencies into the top-level folder. So now, if we look at node_modules, I'm just going to count the number of entries, which should give us an idea of the number of packages installed for this project; yeah, it's that crazy number that everybody complains about when talking about JavaScript development these days. But inside we will also see papaparse. So our papaparse dependency was added as a declared dependency in our add-on, but the package itself was hoisted to the top level, so it is here. In case you're interested in reading the source code, it's a little bit strange, but the process is usually this: if I look in my papaparse dependency, the first thing I do is look at the package.json, looking for the main entry, and that tells me which JavaScript file contains the main entry point for this package; in case I'm loading it directly in the browser, that file will be used. That means I have to look here at papaparse.js. Now, usually JavaScript packages are shipped transpiled, so some of it will just be noise if we're trying to understand what happens there. But it is possible to get an idea of what happens, and it is possible to really go in and add a debugger line, in case you're using a third-party dependency, have issues, and are trying to figure something out. Most of the time you will have to hunt down the location of the original source code and read it in non-transpiled form. Okay. So, back in the Volto project, I should tell you what add-ons are really about and what they can do. The story is kind of like this: with add-ons we are more or less mimicking Plone add-ons, in the sense that we want them to be self-contained, and we want them to have all the power to change the system.
So that's why they get the configuration system, and that's why they are able to mutate it, so that they can influence everything. Among the things that an add-on can do: they can provide additional views and blocks. That means, let me quickly move some of these so that I can switch here, they can add new things in the views: they can register a new view for a new content type that you're developing, or they can even change the default view; they can do anything. They can override or extend Volto's built-in views, blocks and settings, which I've mentioned. It is possible to shadow Volto files; if you've taken the introductory Volto course, you know that it is possible to create a mirror of any Volto file with your own customised copy, and it will override the Volto file. That is thanks to webpack aliases, and it is similar to z3c.jbot, the package we use in Plone development. It is possible to register custom routes. What does a custom route mean? For example, if we look here, this one is a content path, that's not a route; let me start Volto; but a route would be something like /events/edit, or a control panel, or something like that: a separate component view that we register for a particular path in the browser. So, for example, if I go to the site setup here, hold on a second, just to reload and make sure everything is fine, you see, this one is not content, it's a route. Actually the content paths are also loaded through a route, of course, because we have a router in place. Add-ons can provide custom Redux actions and reducers, and recently we are also able to provide middleware for Redux. Redux is used as the global state, the global store, and it allows our components to communicate with each other. This is React, so of course we have one-way data flow: the parent passes props down to the children, and the children are able to call functions from the parent, but they are not able to just change some value that was passed down and have the parent see that the value has changed. So we use Redux to change data and to store this global data. This one is one of my favourites: register custom Express middleware. That means something like this: when we run yarn start, it actually starts an HTTP server on localhost:3000, and that HTTP server is the Express.js framework, a really, really popular Node application development framework. It provides a basic HTTP server and an extension mechanism called middleware, so we are able to register new routes, new pages, for that HTTP server. One of the use cases we have, for example, is a CORS proxy server, and other nice stuff. We can also change the webpack configuration, and this one is a really hard one, because you have to learn webpack. Usually in the add-on development process you don't really have to bother with it, because most of the webpack setup is already done by Volto, but in case you want to load a webpack plugin that's not already provided by Volto, you are able to do that by writing a razzle.extend.js file inside your add-on. And this is, of course, documented; you can see it in the documentation website.
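Before moving on, here is a rough sketch of the razzle.extend.js shape just mentioned, with the well-known plugins/modify pair; the alias in the body is only an illustration of the "add-on as theme" idea discussed next, not the exact code of any particular add-on.

```js
// razzle.extend.js inside an add-on: lets the add-on tweak Volto's webpack setup.
const path = require('path');

const plugins = (defaultPlugins) => defaultPlugins; // add razzle plugins here if needed

const modify = (config, { target, dev }, webpack) => {
  // Illustrative: point the Semantic UI theme.config lookup at this add-on's
  // theme folder, so the add-on can behave as the project's theme.
  config.resolve.alias['../../theme.config$'] = path.resolve(
    __dirname,
    'theme/theme.config',
  );
  return config;
};

module.exports = { plugins, modify };
```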
And another one of my favourites: an add-on can provide a custom theme. In the classic Volto tutorials you see that the project is used as the theme; if you want to customise CSS, components and so on, the project, the result of our initial scaffolding, is used to provide the custom theme. But it is possible to use an add-on and make it behave like a theme, and there are plenty of examples, for instance in the EEA repositories; let me go to one of the EEA theme add-ons. If we look at its razzle.extend.js file, which I've mentioned, these few lines make it possible to override the theme folder, so the add-on can be treated as a separate theme. And that means that the Volto project we just generated becomes throw-away. The structure inside it, the files that the scaffolding generates, may change in time; we will develop new practices, and we will arrive at situations where, if you have a Volto project, you have to migrate it, doing this or that inside your project to bring the basic files up to the latest standards. But if you move all your code into an add-on, the process of upgrading becomes really easy, because you can just throw everything away except your package.json, which contains your dependencies and add-ons, regenerate the project from scratch, and nothing will be lost; that's a great thing. Okay. And yeah, many, many things. The intention is to make add-ons as powerful as the project, and as far as I know, today that's one hundred percent true: I don't know of anything you can do in a Volto project directly that you cannot do in an add-on. And of course there's always the shadowing mechanism to override a Volto file. Okay, moving on. Oh yeah, and this is important: I didn't mention the loading order of the configuration. In our Volto project, if I'm editing the package.json here, I have this list, and the list is not just a bunch of things, it's an ordered bunch of things. So if I have another add-on, I will list it below; let's say I load volto-glossary and something else. The order in which they are listed is important: this one loads first, then the next one loads after it. That means, for example, that if you don't agree with what an add-on does, you can just put yours last, so that in my add-on loader I can come in afterwards and fix the configuration: whatever volto-glossary did, I can come in after that and fix it. And the registry configuration resolution order is like this: first Volto declares its configuration registry, then it uses all the add-ons declared here to modify that configuration registry, and last the Volto project comes in, which gives you the final chance to fix everything. But, as I've mentioned, I don't really like to use the Volto project for things like that; I like to use add-ons. It is also possible to load some optional configuration from an add-on, which I mentioned when I was looking at that file.
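To illustrate the ordering and the optional loaders, the project's addons list might look like the sketch below; the order is significant (earlier entries load first), and the colon suffix, explained right below, asks Volto to also run a named configuration loader exported by the add-on. The loader name addExtraThings is just the example used in this walkthrough.

```json
{
  "addons": [
    "volto-glossary",
    "@plone-collective/volto-datatable-tutorial:addExtraThings"
  ]
}
```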
And I've mentioned that it is possible to export not just the default function, but also something else; let's say a function addExtraThings, which is another configuration loader. Just because, why not, I'm using two different ways of declaring functions, but they are equivalent: here we declare a constant equal to an anonymous arrow function, and here we just declare a function. Okay, so with this function declared, we can go back to our Volto project, to the main package.json, and add addExtraThings there; actually, let's do it here. So now Volto will load, from our add-on, the main configuration loader, the one exported as default, and then it will load these other extra things. Now, here comes the cool part. In your add-on, in its package.json, you can actually say that your add-on depends on other add-ons. You can say: my add-on depends on volto-slate, or whatever other add-on exists, and it can depend on some profiles, let's call them profiles, some additional configuration loaders from them. They are all resolved in a directed dependency graph, so basically there will be no conflicts; the dependency order is resolved by Volto. Yeah, so that's it for loading the add-on. Okay, any questions, comments so far? If not, then we move on. Yeah, just to mention that you can have more than one profile, as volto-slate does with asDefault and simpleLink. Yeah, those are two; for example, I'm taking volto-slate because I'm more familiar with it. If we look at its index.js, that's the main, we have this one, which is the default configuration profile, but we have something else: minimalDefault, simpleLink. Basically all of these are configuration loaders; they are optional, and you can choose to load them in your project or not. And because of the resolution order, because you can always add your own add-on to the end of that add-ons list, you have the chance to come back and fix the configuration if you don't agree with what an add-on loads. Also, we are calling them profiles knowingly, mimicking the Plone add-on profiles in GenericSetup; we willingly call them profiles because of that, and you can think of them that way. Yeah, I can hear Érico talking; he's giving the Plone deployment training. Yes, it's quite close here; we are close here in Toronto. Okay, cool, cool; too bad I'm not there. Okay, so, continuing with our tutorial: we're going to do a basic block. This is more or less one of the basic tasks we do with Volto, and I'm always amazed at how fast it is to create a new block; it's so nice. Our block is just a React component. So I will create a new data table block, and I will call the file DataTableView.jsx. Hmm, that's not under DataTable, so let me go and fix my path. Okay, so I have a folder called DataTable, and inside it a file called DataTableView. Let's clean this up so that I can close it. If you're wondering which file I have open, you can see it here: if I switch between the files, the colour changes, and the black background is the file that's selected, but I will try to always point to the file I'm editing. Okay, so I need to grab this code, which is just the most basic React component.
We always need to import React in our components; maybe when we upgrade Babel, that won't be needed any more. And we export that component as the default, which affects the way we will import it, but other than that we could have exported it by name as well. Our component only renders a simple div as its view. Then we will do the same for the edit component; a Volto block needs at least two components, the view component and the edit component. The view component will be used on the view page, and the edit one, of course, on the edit page. So, DataTableEdit.jsx. Okay, this one is mostly the same, except that we're importing the default export from DataTableView. I'm calling it the default export, but you have to realise that here you could have called it anything you want. So, on the right side I have the DataTableView module open. Here it is declared as DataTableView, but because I'm exporting it as the default, then in the other file, the edit file, when I'm importing the default export from that module, I can name it anything I want. Be aware of this: the names don't have to match. If I make the names match, that's fine, but usually, just to avoid confusion when you try to find something by name, we try to keep the names consistent. That means I'm going to import DataTableView and reuse it in the edit component. Okay, so now that I have my components, I need to register the new block. So, inside the index.js here; one second, and I'm sorry, I'm just used to switching screens. Oh yeah, we're going to do this step immediately, but we need to add a new block. You can copy this from the tutorial, you can find it in the documentation, but I really, really recommend that you keep the Volto source code close to you, because just as well I could have gone to my local copy of the Volto checkout, gone to config/blocks, and copied one of the block registrations into my configuration. That's what I usually do: I go to the source code of Volto and copy things from it. Okay, so what we do here is mutate the configuration. I see some code around that looks like config equals, and then spreading config and so on; to me that's just a bunch of noise, it is simpler just to go to config.blocks.blocksConfig and add to it. This one will be an object, and we can look for it in Volto, or rather browse Volto: we're dealing with this object, which has an id and then the block configuration object. So in blocksConfig we basically monkey-patch it, we add new stuff inside it: we're going to add an object called dataTable. What is important, and I've seen this happen while we develop, is to make sure that this id matches this key; otherwise you have problems, you don't know why your block is not showing up or is not registered, and it's possible you just don't have them matching. So the dataTable entry registers a new block. We have the title, Data Table, we're going to need to import the SVG icon for it, and we're going to need to import the new components for it.
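For reference, the two components created a moment ago are roughly the following; this is a sketch in the spirit of the training code, with placeholder markup.

```jsx
// src/DataTable/DataTableView.jsx
import React from 'react';

const DataTableView = (props) => <div className="dataTable-view">Data table</div>;

export default DataTableView;

// src/DataTable/DataTableEdit.jsx
import React from 'react';
import DataTableView from './DataTableView';

const DataTableEdit = (props) => <DataTableView {...props} />;

export default DataTableEdit;
```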
So, I will go and add the imports: I will import DataTableView and DataTableEdit, and I'm going to import them from the DataTable folder. Right now this is not possible, and you can see the linter telling me that it's unable to resolve the module path './DataTable', and that happens because I don't have an index.js file inside this folder. Here in that index.js, which I've opened on the right side, I need to export DataTableView from the DataTableView file, and I'm going to export DataTableEdit from the DataTableEdit file. So, basically, right now I'm editing the index.js file and just saying: from this module, we export two names that come as the default exports from these modules. You can see that the linter no longer complains. I'm going to close the split so it's not confusing, sorry about that. Now I've just saved, and Volto will crash, except, yeah, it's not started; let's start it. Actually, sorry, let's import the table SVG first so that we have that module. Perfect, start Volto. While it's loading, you can maybe mention the omelette folder; Victor mentioned it in the Slack chat. Okay, yeah, I saw the message. So, Victor is telling us the following: if I go into the Volto project, so I went up in the directory hierarchy, you will see that I have node_modules, and inside it is a big, big folder where I will find @plone, and inside that I will find Volto. But the Volto project, thankfully, sets up a symlink called omelette. There's a buildout extension that does this; it's old and I'm not sure if it's still used right now, but what that extension did was create symlinks for all the Python eggs inside a folder, so that they can be used by a linter, for example, or just to browse the code for easy access. The same thing happens inside the Volto project: we have this omelette link, and inside it is the Volto source code, and it is the Volto source code that's used by our Volto, which means you can actually change it. So, for example, I can go here into blocks and write a console.log of the blocks config, and I'm going to log the string "Volto" just to mark it. Now, if I save, you will see that the code reloaded, that's called hot reload, and you can see my console.log at the top. So that's how the blocks config looks when logged to the console, and these are the SVGs. Okay, let me get rid of this. It doesn't always happen that the hot reload picks up the Volto modifications; you may have to restart the Volto project to pick up changes done inside Volto itself. Okay. So, back in our project. Now, if I go, I can create a new page; let me try to simplify this, why it's so ugly I don't know; and voila, we have the block. Right now it does nothing, but that's a good start. This "data table" text here is the view that kicks in. With that, we are able to select the block and we are able to edit it. Okay, back to the tutorial. We're going to improve the block edit and add what Volto blocks usually do: they edit their settings, their data, in the sidebar on the right. So I'll just take this code and drop it in the edit component.
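Putting the wiring from the last two steps together, the re-exports and the block registration end up looking roughly like this; a sketch only, with the usual blocksConfig keys, where the exact icon and extra options may differ.

```js
// src/DataTable/index.js: re-export the components so './DataTable' resolves
export { default as DataTableView } from './DataTableView';
export { default as DataTableEdit } from './DataTableEdit';

// src/index.js: register the block in the configuration registry
import tableSVG from '@plone/volto/icons/table.svg';
import { DataTableView, DataTableEdit } from './DataTable';

const applyConfig = (config) => {
  config.blocks.blocksConfig.dataTable = {
    id: 'dataTable', // must match the key it is registered under
    title: 'Data Table',
    icon: tableSVG,
    group: 'common',
    view: DataTableView,
    edit: DataTableEdit,
    restricted: false,
    mostUsed: false,
    sidebarTab: 1,
  };
  return config;
};

export default applyConfig;
```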
So right now I am in DataTableEdit.jsx, inside the add-on under src/DataTable, and we replace its contents with the code from the tutorial. There are some components we need to import, and I should have said this at the very beginning: don't start developing for Volto, or anything really, unless you have good editor integration. My editor is integrated with ESLint, the linting tool Volto uses, and ESLint can tell us, for example, that the name SidebarPortal is not defined: we have to import it. If I save right now you can see the code gets auto-formatted, which also happens through the ESLint integration. So make sure your editor, whether Visual Studio Code or anything else, is properly set up for JavaScript editing and picks up the ESLint configuration provided by Volto; Volto ships that configuration in a file sitting at the root of the project, so I did not have to do anything myself. Back to the imports: SidebarPortal comes from @plone/volto/components, a module that exports most of Volto's components (Form, Field and so on are there too), while Segment comes from semantic-ui-react. If you find that package name hard to remember, keep in mind that for them Semantic UI is the important part and React goes at the end; many other packages are named react-something (react-dropzone, say), but here Semantic UI is the CSS library and the React integration is more or less the side project. We reload, go to our data table block, and I have to debug something, because I probably did not get the imports right. Indeed: Form is actually imported from semantic-ui-react. That was my mistake, and it is why you should always follow the tutorial. With that fixed we have it running, and there is something in the sidebar now: a "Data table" section with an object browser widget, so we can pick content from inside Plone. It is really basic and nothing happens yet, but it works. Let's look at what we added, side by side. SidebarPortal is a kind of magical component that inserts everything we put inside it, its children, into the sidebar. Inside it we put a Segment with a header saying "Data table", and inside that a Form with a Field. The field is called file_path, its widget is object_browser, and its value comes from the block data.
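Put together, the edit component looks roughly like the sketch below. It follows the tutorial's field name (file_path) and widget (object_browser), but the exact markup and class names are my approximation, not necessarily the tutorial's code verbatim:

```jsx
import React from 'react';
import { Segment, Form } from 'semantic-ui-react';
import { SidebarPortal, Field } from '@plone/volto/components';

const DataTableEdit = (props) => {
  const { selected, data, block, onChangeBlock } = props;

  return (
    <div className="block dataTable">
      <div>Data table block (edit placeholder)</div>
      {/* Everything inside SidebarPortal is rendered in the right-hand
          sidebar, and only while this block is selected. */}
      <SidebarPortal selected={selected}>
        <Segment.Group raised>
          <header className="header pulled">
            <h2>Data table</h2>
          </header>
          <Segment className="form">
            <Form>
              <Field
                id="file_path"
                title="File path"
                widget="object_browser"
                value={data.file_path || []}
                onChange={(id, value) =>
                  onChangeBlock(block, { ...data, [id]: value })
                }
              />
            </Form>
          </Segment>
        </Segment.Group>
      </SidebarPortal>
    </div>
  );
};

export default DataTableEdit;
```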
Now we arrive at the concept of fields and widgets, and I think this is a really important concept to master when developing for Volto. Our DataTableEdit component receives props from the Volto machinery. One of them is selected, which we use on SidebarPortal: it says, don't render this sidebar content unless the block is selected. If I click on the title block, the content disappears from the sidebar; if I go back to my table block, it comes back. The Form here is used strictly for decoration, just to make things nicer; if I remove it, the hot reload kicks in and everything still works, it just looks bad, so Form only provides a wrapper with styling. The Field, on the other hand, is a Volto component. It is basically a wrapper around a widget, with an algorithm that decides exactly which widget to render for editing your field. If you are familiar with dexterity schemas, zope.schema or any other forms library, you know immediately what this is about: we have some data and we want to edit it with a particular widget. In our case that widget is called object_browser, and this key corresponds to an entry in the widgets registry of Volto's configuration, which we will look at right away. The value for the field comes from the block data, and the other props we set are passed down to the widget. The widget selection algorithm is worth understanding, and the easiest way is to read the Volto source, in components/manage/Form/Field.jsx. If you scroll down to the component you will see that the widget is picked through a chain of conditions joined with "or". First it tries to get a widget by field id, so I could register a custom widget for one particular field, keyed by its id. Then it tries tagged values, which are widget hints you can pass down from the dexterity schema: if you have a dexterity content type, you can set hints in the schema telling Volto's widget machinery exactly which widget to use; for example, we have a tags widget that gets chosen for a particular kind of field based on information coming from there. Then comes getWidgetByName, which uses props.widget, the key we pass down to the Field component. Because we said object_browser, that lookup matches, the result is truthy, and the chain stops there. If it went further, it could pick a widget based on available choices, a vocabulary prop, a factory or a type, and the final fallback is the default widget, a simple text input.
But we said object_browser, so let's go to the Volto configuration registry, to config/Widgets.jsx. getWidgetByName looks in config.widgets.widget by widget name, so config.widgets.widget.object_browser, which is the ObjectBrowserWidget. There is no mystery; it is very straightforward to see how to register new widgets and how to reference them from schemas. And note that we are dealing with client-side schemas here, not dexterity schemas: there is no content type, we are just developing a Volto block. The widget protocol implies two main things, a value and an onChange property. onChange is a callback that fires when the widget decides its value has changed. In our case we pass down an anonymous function with the signature (id, value): when the field changes, it is called with the field id, the one we passed to the Field, and whatever value was entered in the widget. Inside it we call onChangeBlock, another prop passed down from above, which is what lets us change the values of our block. We call it with the block id and with the existing block data spread out, setting the new value under the key held in the id variable. Someone asked whether I can add a console.log of the block data before the return so we can see what it contains in the browser console. Sure, and console.log is a programmer's best friend, though we will also look at other ways of debugging React. In the console: block values always have a @type, the block type, and our file_path is empty right now. If I pick something, file_path becomes an array containing data about the object I just selected; and to show how responsive this is, if I erase it, file_path is gone again. We also have the React Developer Tools extension. In its Components tab there are two pickers, one above for DOM elements and one below that lets you jump into any component, and we can find our DataTableView and DataTableEdit there and check whatever they got as props. Everything I would have console.logged is also here, and when I pick something new the value updates. Familiarize yourself with this extension, because it is great, but don't feel shy about console.log either; it works just as well. Use both. Now, this process of adding new fields one by one will very soon become pretty tedious.
You saw the amount of controls the finished block had when I showed it at the beginning; there are many, many fields, and we won't keep defining them manually like this. But I wanted you to understand exactly what a widget is and how to work with one, because that is the key to your future development: if you want to develop new widgets, it is really easy, and we are actually going to build one ourselves later in this training, so don't be afraid of that either. To continue: so far I have just picked something random from my Plone site, but we need a real CSV file to use in the data table, and the tutorial provides one. It is in the Volto add-ons development training, chapter two (don't go by the URL, that's a mistake); there is a note at the bottom with a link, and clicking it gives you the CSV file. If we look at it, it is nothing huge, just some random statistics I had on my computer, but you should download it if you want to play along. Quick hydration break, and then we continue by adding a Less file to style our add-on. Today the fact that we can use Less files in add-ons is mundane, nothing special, but at the time this tutorial was first written it was quite a feat. I copy the contents from the tutorial into a new file, src/DataTable/datatable.less. What we have here is Less, a superset of CSS, and a file that integrates with the Semantic UI theme provided by Volto. To do that we have to declare a @type and an @element; for add-ons we usually use type 'extra' and whatever element you want, 'custom' is fine. I hope I'm not making a mistake, because this part is complicated and I would like us to simplify it at some point; do it like this and you won't have problems. Then there is the @import (multiple) of theme.config, and that is the key feature that makes the Semantic UI theming work: based on those Less variables it performs a bunch of further imports, looking for extra.overrides and friends, and as a result we get access to variables provided by Volto, the primary color for example. This is just a really light example of how you might use Less files in Volto; my brother David should probably offer a training on the Volto Semantic UI theme at some point. With the Less file in place I go to DataTableView, which is the proper place since we are already loading that component, and import datatable.less. Notice I am not importing anything from that file; we import it purely for its side effect.
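For reference, the Less file I just created looks roughly like this; the theme.config import path and the selector are assumptions on my side, so adjust them to your setup:

```less
// src/DataTable/datatable.less
@type: 'extra';
@element: 'custom';

// This import hooks the file into Volto's Semantic UI theming and makes
// the theme's Less variables (for example the primary color) available.
@import (multiple) '../../theme.config';

.block.dataTable {
  .ui.form {
    margin-bottom: 1em;
  }
}
```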
So basically there are two kinds of imports: one where we import a JavaScript object from a module, and this one, which is purely for side effects; the fact that we import the file means it gets picked up by the webpack machinery and results in some CSS being loaded into our project. Let's look at what the Less file actually says: a block called dataTable gets a styled form element, but we don't have that form yet, so we continue with the tutorial. I take the next chunk of code and drop it into DataTableEdit. There are a couple of changes compared to what we had, a fallback was added and a className on the component, so let's just copy and paste and keep ourselves out of trouble. What is missing now is Icon and tableSVG. The table SVG we already have; Icon is imported from the Volto components. You will actually run into two Icon components, one coming from semantic-ui-react and one coming from Volto, and the Volto one takes an SVG file: you pass it as icon name={tableSVG}, where tableSVG is really the content of the SVG file. Let's see it in the browser: we now have a nicer default view for the block. So what is going on inside? First, there is a condition on the length of file_path. We saw that file_path is an array, so what we are saying is: if the array is empty, show the placeholder, otherwise show the selected value. Let's talk a little about the question-mark symbol, optional chaining. If file_path does not actually exist inside data, the whole expression short-circuits safely instead of throwing: data.file_path?.length evaluates to undefined, which is falsy, so the check behaves exactly as you want when the value is missing. To demonstrate, let me remove the question mark, save, go to the block and delete the picked value, and it crashes with an error: "Cannot read properties of undefined (reading 'length')". In data, file_path was undefined and we tried to read length from undefined. Adding the question mark avoids that and makes the lookup safe. The alternative would have been to write data.file_path && data.file_path.length, which is safe as well, but it is not as short and the code is less readable.
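In plain JavaScript terms, and independent of Volto, this is what optional chaining buys us:

```js
const data = {}; // pretend file_path was never set on the block data

// data.file_path.length;                          // would throw: cannot read 'length' of undefined
const a = data.file_path?.length;                  // undefined, short-circuits instead of throwing
const b = data.file_path && data.file_path.length; // also safe, but noisier when deeply nested
const c = data.file_path?.[0]?.['@id'];            // safe lookup several levels deep
console.log(a, b, c);                              // undefined undefined undefined
```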
Especially with deep objects, object within object within object, that verbose style gets really ugly, so it is a lot easier to add the question mark and keep everything safe. Back to the screen: we are treating two cases. There is the SidebarPortal, which controls what appears in the sidebar, and then the other part, which is the main body of the block. When we pick a file, both change: the block body here and the sidebar on the right. We will take a break in five minutes, so let's see what we can still fit in. First I upload the CSV file to the site, pick it in the block, and save. Next we have to fetch data for this block. What will happen is that our data is fetched in an Ajax call and then populates the table, and there are multiple ways to do that, depending on what you are trying to achieve. If it is important for you that the data is present the moment the block renders during server-side rendering, you will probably choose a different path: you would need an additional mechanism to make sure the data is included when the block is rendered, because our data arrives through code executing in the browser, triggered by the rendering of the component, the React component lifecycle, the useEffect hook and so on. That is too much to cover in this training, but there is a mechanism in plone.restapi called block transformers. When you save a block in Plone, before it goes into the ZODB, it passes through a transformer that can mutate the data during deserialization, and there is a counterpart on serialization. Why would we want that? Take the object that was picked with the object browser: URLs like that, when stored in the ZODB, are converted to resolveuid links, so that when the target content is moved the reference still points at its new location. Deserialize is when we save the block, serialize is when we load it to look at it on the view page, and on serialization the transformer kicks in, detects that the value is a resolveuid link, and transforms it into an absolute URL, so the browser gets the up-to-date path. There are many, many use cases for transformers and, let's say unfortunately but that's how it is, it means you can do maybe 90 or 95 percent of the development in React, but you will probably also have to do some backend development in Plone.
The bright side is that, if you work in a team, React developers are a lot easier to find than Plone backend developers. OK, let's take a break and meet again in five minutes; my voice is really starting to feel the two hours. Hi everybody, I'm back, and I'll assume everyone else is too. We will continue by writing a data-fetching procedure on the client side, using Redux: we will write a new action and reducer pair so that we integrate with it. But first, a suggestion came in to show exactly where to look up block transformer information. In the plone.restapi documentation there is a section called "Volto blocks support", and there you will find the blocks serializers and deserializers. A transformer is basically an adapter, actually a subscription adapter, because with a plain adapter you would get registration conflicts and we really do want multiple adapters to apply. It is a class: it gets the context and the request in its initializer, and when it is called it receives the value. The basic structure is the same for the deserializer and the serializer: the deserializer transforms the value coming from the browser into a value fit to be stored in the database, and the serializer does the counterpart, taking the value from the database and making it suitable for the browser. You also set a block_type attribute, which means the transformer applies to Volto blocks with that @type, "image" for instance. So if we had a block with type image and a url value, that transformer would apply. It is also possible to set block_type to None, meaning the transformer applies to all blocks, and that makes it possible to create so-called smart fields: the adapter gets every block's value and it is up to it to decide what to look for inside. Our block could declare some random key and value, and we could have a transformer that looks for block value keys starting with _v_, implementing a convention (similar to ZODB volatile attributes) that such values are never stored in the database. Or we could say: absolutely every field called "url", in any block, gets the resolveuid treatment. Or I could imagine pushing something like binary data as a block value from the browser, base64-encoded or whatever you need to make it fit into JSON.
On the backend side you would take that data, turn it into a real file inside the ZODB, and replace the binary data inside the block value; at that point you are on the backend, so you could replace it with a URL, say the path to the file you just created. It is a really powerful concept and, unfortunately, not used enough in Volto; it should be. Back to our tutorial. Let's skip the changes to the block for a moment and start by creating the new action and reducer pair. It is really hard to explain Redux as a side note in a training, but Redux is the global state, and it implements the one-way data flow that React promotes; I think the package that first implemented that flow was called Flux. To mutate data in the global store you call an action. We will have a function, in our case getRawContent, to which we pass whatever parameters we decide, and it needs to return an object that represents the action: it must have an action type, plus whatever extra information we decide we need. The action is intercepted by the Redux middleware and used to change the global store, and the change itself is performed by another function, the so-called reducer: it gets the previous state as one parameter and the dispatched action as the other, and it returns the state, modified however we decide for that particular case. I will also show you how to debug these Redux actions and the store. The convention is to use a constant for the action type, and we place it in a constants module that sits at the root of the add-on's src folder. In constants.js we export the GET_RAW_CONTENT constant, and then we create the actions module. If you browse the Volto repository you will see an actions folder with one subfolder per action; for this tutorial we keep it simple and don't replicate that folder setup, it is just namespaces, folders and paths, nothing fancy, and they all get imported and re-exported in an index file, which is what becomes the actions module inside Volto. So, the constants file has the type, which is just a simple string, and in the actions file our action uses that imported type, and then we say we want to do a request.
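Sketched out, the constant and the action creator look like this. The file layout and the Accept header are my assumptions; the request object with op, path and headers is the shape Volto's API middleware consumes:

```js
// src/constants.js
export const GET_RAW_CONTENT = 'GET_RAW_CONTENT';

// src/actions.js
import { GET_RAW_CONTENT } from './constants';

export function getRawContent(path) {
  return {
    type: GET_RAW_CONTENT,
    path, // stash the path on the action itself; the request object is not preserved
    request: {
      op: 'get',                     // any HTTP verb would work: post, put, del...
      path,
      headers: { Accept: 'text/*' }, // assumption: ask for the raw file content
    },
  };
}
```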
The request is built from the parameters passed to the function. Volto treats actions that have this request key in a special, almost magical way: they are picked up by a dedicated middleware and cause a network fetch, which we will check in a moment. We do a request with op 'get' (it could be post, put, delete or whatever HTTP verb you want), plus the path and the headers, and we also stash the path on the action itself, because the request object will not be preserved. Next, the reducer. It is a function called rawdata, and the name we use here becomes the key in the global store: you can imagine the Redux store as a big object with a lot of keys, and for each key there is a piece of state determined by executing its reducer function. Let me put the action and the reducer side by side so you notice something. The action is called getRawContent, with the type GET_RAW_CONTENT, yet in the reducer's switch on the action type we handle action types we never declared. Those come from Volto's API middleware, the network-fetching middleware, which invents and triggers new actions based on the request; you can look at its source code, I think it is linked in the tutorial. It has the signature of a Redux store middleware, and you can see that it triggers, for example, a new action whose type is the original type with _PENDING appended; in case of success it triggers _SUCCESS, and on failure _FAIL, so GET_RAW_CONTENT_FAIL and friends. Although we never declared these action types, we have to handle them, because they are what the network-fetching machinery actually dispatches. Let's put the code in use, and then we can play around and see how to debug it. Now that we have the two modules, we have to register the reducer, and we do that back in the add-on configuration module, src/index.js, the same file where we added the block configuration. I import rawdata and add it to config.addonReducers; whether you put that line before or after the block registration is up to you and how you want to organize the file. Redux now knows there is a new key inside the global store, and that key is called rawdata. Do we have the Redux DevTools? Yes; let me dock the extension to the side so it doesn't go wild on the screen.
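For reference, here is the reducer we just walked through, sketched with the state keyed by the path stashed on the action (that keying is my assumption, the training's version may differ slightly); the _PENDING, _SUCCESS and _FAIL types are the ones generated by Volto's API middleware:

```js
// src/reducers.js
import { GET_RAW_CONTENT } from './constants';

export default function rawdata(state = {}, action = {}) {
  switch (action.type) {
    case `${GET_RAW_CONTENT}_PENDING`:
      return { ...state, [action.path]: { loading: true, loaded: false } };
    case `${GET_RAW_CONTENT}_SUCCESS`:
      return {
        ...state,
        [action.path]: {
          loading: false,
          loaded: true,
          data: action.result, // the middleware stores the response body here
        },
      };
    case `${GET_RAW_CONTENT}_FAIL`:
      return {
        ...state,
        [action.path]: { loading: false, loaded: false, error: action.error },
      };
    default:
      return state;
  }
}
```

Registering it is then a one-liner in the add-on's src/index.js: config.addonReducers = { ...config.addonReducers, rawdata };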
In the Redux DevTools you can see all the actions as they are executed, and you can travel back and forth in time: as you step through them the screen changes behind, so you can watch how the state of the React application changed. Each action carries data and you can check what it contains, and you can also look at the global state, which is what we are really after. In the global state we now have the new rawdata key. It is empty, because we haven't triggered any action yet that would change it, but it is there and properly registered. Let me double-check one thing: we still have to make changes inside the DataTableView module. If I look at the view and inspect the components, the DataTableView component already has the file_path saved, so we are ready to use that file_path, fetch the data that sits at that location and put it on the screen. We needed the action and we have it, so let's do that. I'll grab the code from the tutorial, including the imports, and explain it. First, a little destructuring of the props. Writing const { data } = props is basically const data = props.data: we assign a new constant named data, equal to the value looked up under the data key of props. Then we destructure that data object further, because we are interested in a constant called file_path, so file_path is really props.data.file_path. Next we look for an id. You saw that file_path is an array, so we take its first entry, which is an object, and in that object there is an @id, which is basically the path to our file. Notice that, again, I use the question mark to step safely into this structure, and another question mark to look for the @id key inside the possibly missing first element. The id might or might not be there, maybe there is something in the array and maybe not, but either way it is safe to handle and the browser will not crash with an error. Then we define the path with the ternary operator: if I have the id, I build a template string, the id followed by /@@download/file, the Plone download view; otherwise the path is null. And now we come to the interesting part, Redux. With the react-redux useDispatch hook we get the dispatch function, a special function that, when we pass it an action, connects that action to the Redux store.
This is the newer, more modern API for interacting with Redux from React; in Volto you will still find many, many places where the old style is used. For example, manage/Edit.jsx uses connect; actually, let's look at the Breadcrumbs component, it is simpler and safer. connect wraps our component in something that connects it to the store and injects properties into it. It receives a function, and that function receives the global Redux state; from it we return a new object, and we can put anything we want in that object, using the state we received, which is the whole global Redux state. From that state the Breadcrumbs component looks at the breadcrumbs key, which has an items key; you can see it in the Redux DevTools. So it picks that array, puts it in the returned object, and the keys of that object are injected as props into the component. This one uses the old class style, so you see props.root, and items will also be a prop. The actions become props too: getBreadcrumbs is imported from Volto's actions, but where it is used it is not the direct import, it is the one coming in as a prop, props.getBreadcrumbs, which is the action wrapped inside dispatch, passed as an option to the connect call. In our component on the left we use the new style instead: we get dispatch with useDispatch and we can use the action directly, no more connect and no more getting it as a prop. Then we have useSelector, which is mostly equivalent to that mapping function on the right: it gets the state and returns whatever we are interested in. In our case it is simpler, because we want a single value and we bind it straight to a constant; we call that piece of state request. The request should have data inside, because that is how the API middleware works: it stores the downloaded content under data. Then we define a boolean that answers the question "do I have something in content?", true or false, which is what the double exclamation mark does. And here comes the interesting part, the React useEffect hook: it runs as an effect of rendering the component, of mounting it, let's say, for real on the screen.
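Putting the Redux pieces of the view component together gives roughly this sketch; the store key, the paths and the action import follow my earlier assumptions, and the effect with its dependency array is what we discuss next:

```jsx
import React from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { getRawContent } from './actions'; // hypothetical relative path

const DataTableView = ({ data }) => {
  const { file_path } = data;
  const id = file_path?.[0]?.['@id'];
  const path = id ? `${id}/@@download/file` : null;

  const dispatch = useDispatch();
  const request = useSelector((state) => state.rawdata?.[path]);
  const content = request?.data;
  const hasData = !!content;

  React.useEffect(() => {
    // Fetch once we know the path and do not have the content yet.
    if (path && !hasData) dispatch(getRawContent(path));
  }, [path, hasData, dispatch]);

  return <div>{/* table rendering comes later */}</div>;
};

export default DataTableView;
```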
And what the effect does is this: if I have a path and I don't have data yet, I dispatch the action, and that action gets the URL, the path we extracted above. Notice there is a condition on path, so it is safe to assume we really have one when we dispatch. Then there is the second argument to useEffect; let me delete it and you will see the complaint. The first argument to the useEffect hook is a callable, and the second is the list of dependencies: whatever I use inside, path, hasData and dispatch, I have to list there. The magic is that if, say, path changes, the whole effect is re-run: change the path from outside and this code runs again, and because useSelector basically subscribes the component to the store, the component also refreshes whenever the global rawdata state changes. With this it is pretty magical and simple to keep the component up to date whenever the path changes, for example; the dependency list tells React, if any of these change, re-run this code. So it reloads, we look at the Network tab, and we see a network call; just to prove this call did not happen before, I can comment out the dispatch and it is gone. The call goes to our path and it returns our file content, so we are ready to show it as a table and step into the tutorial's next part. We add the papaparse dependency, and you see again the yarn workspaces command I ran at the beginning, yarn workspaces info. With it you add the dependency inside a particular workspace package, our add-on. You can also add a dependency to the Volto project itself, but if I try, for example, to add react-color there, it complains that you cannot add a dependency to the workspace root, now that yarn knows we are dealing with multiple workspaces, unless you pass the -W flag. With -W it adds the package and all of its dependencies. One important thing while we are here: the yarn.lock file. You should commit it to whatever code repository you use, but it belongs to a project, not to an add-on; if you somehow end up with a yarn.lock inside an add-on, do not commit it there. Lock files belong in Node.js applications, not in libraries. Now that we've added react-color, let's remove it again. I know, I wish I had shown where it got added before removing it; it would have been in the project root's package.json and node_modules. But let's stay focused.
So, what do we have here? We added papaparse, a CSV parser and one of the few that run CSV parsing in the browser. There are a lot of Node-based CSV parsers, and CSV files can be huge, so it is not a simple task; this package handles the parsing in the browser. What else is new in the code? We make sure our request is populated with whatever is in the Redux rawdata state, and then content is defined from that request's data, otherwise it is undefined. And then there is the condition: only proceed once the data has been downloaded. You have to take into account that our block will be rendered and re-rendered many, many times, whenever anything happens in it, and some of that re-rendering is driven by asynchronous conditions: you may have one variable that depends on data coming from one hook and another variable that depends on data from a different hook, and you have to check and match the cases, do I have only part of the data, do I have all of it, to render the block correctly in each situation. Let's copy the CSV parsing bit and console.log the file data to see what we get. fileData is a React useMemo. Remember that our component is a function, and that function executes many times; a plain variable defined in the body is simply redefined on every run. That is fine for a cheap lookup two levels deep into an object, but for costly operations, like this CSV parsing, we don't want to redo the work every time. If you are familiar with the React useState hook, the most basic one, which ensures state is preserved across re-runs of the component function, then useMemo is similar: it keeps the computed value, unless the dependency changes. So whenever content changes, fileData is recomputed, because the function inside the useMemo is re-run. Let's look at fileData in the browser console. That is my data: Papa Parse creates an object with a data key, a meta key and an errors key. It seems I have some errors; it doesn't matter, I have my rows inside data, and I have some meta information about the CSV file. Most importantly, meta has the fields array, which is basically the list of headers of my CSV file, and we will use them as the table headers. That happens because I passed the header: true option to the Papa.parse call. Cool.
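As a standalone sketch of that parsing step (useParsedCsv is a hypothetical helper name for illustration; the tutorial simply inlines the useMemo in the component):

```js
import { useMemo } from 'react';
import Papa from 'papaparse';

// Parse the CSV text only when it changes; with header: true the first row
// becomes the field names and every row is an object keyed by them.
export const useParsedCsv = (content) =>
  useMemo(() => (content ? Papa.parse(content, { header: true }) : null), [content]);

// Result shape, roughly:
// { data: [{ col1: 'a', col2: 'b' }], meta: { fields: ['col1', 'col2'] }, errors: [] }
```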
Any questions, or anything that is not clear? No? OK, moving on. Look at this bit of code: in fileData, the object I logged, we have the meta key, and that has fields, which will be either an array or our empty-list fallback. Here is something genuinely interesting about JavaScript if you are a Python programmer: an empty array is still truthy in JavaScript, which is not the case in Python. If I take an empty array and test it as a condition, the result is true. So if we want to check whether the array is empty, we have to test its length, which gives us zero, and that is something else to be aware of: you can more or less use it as a boolean, but if you use it directly in the JSX part of your component, you will sometimes get a literal 0 rendered on the screen, and you don't want that. So we use the ternary operator, rendering either the content or null, which React simply ignores. So we have the fields array and we have covered the fallback cases: if there is fileData, if there is meta inside, fields will be an array, possibly empty, and in any case there is a fallback. And now the nice part: we get a table displayed. I replace this part of the render; just like before, we use the ternary to check whether we have fileData, and if so we show the table, otherwise a "no data" message. The Table needs to be imported from semantic-ui-react. Let's look at the screen: the table looks good. It is really plain, we will see in a moment how to improve it, but it has the data. So how did we go about displaying it? I still have a console.log with the fileData, so let's use it. The rows are stored in the data array, and we have the fields we saw in meta. We iterate over the fields, and for each field name we create a cell in the table header, with the field as its content. We use the field as the key, which could be problematic if we had duplicate fields, because React would complain, so we can do something like string templating and include the index as well. The function you pass to map receives the iterated value as its first argument and the index, the iterator of that array mapping, as its second, so with both of them our key is, let's say, safer.
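Two quick asides in code form. First, the truthiness gotcha from a moment ago, which is plain JavaScript behaviour:

```js
Boolean([]);   // true: an empty array is truthy, unlike in Python
[].length;     // 0: test the length to know whether it is empty

// In JSX, {fields.length && <Table />} renders a literal 0 when the array
// is empty, which is why the ternary form is preferred:
// {fields.length ? <Table /> : null}
```

And second, a sketch of where the table markup is heading; the body loop is explained right below, and DataTableBody is just an illustrative name, not the tutorial's:

```jsx
import React from 'react';
import { Table } from 'semantic-ui-react';

const DataTableBody = ({ fileData }) => {
  const fields = fileData?.meta?.fields || [];
  return (
    <Table>
      <Table.Header>
        <Table.Row>
          {fields.map((f, i) => (
            <Table.HeaderCell key={`${i}-${f}`}>{f}</Table.HeaderCell>
          ))}
        </Table.Row>
      </Table.Header>
      <Table.Body>
        {fileData?.data?.map((row, index) => (
          <Table.Row key={index}>
            {fields.map((f, i) => (
              <Table.Cell key={`${i}-${f}`}>{row[f]}</Table.Cell>
            ))}
          </Table.Row>
        ))}
      </Table.Body>
    </Table>
  );
};

export default DataTableBody;
```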
The key prop is used by React as an optimization: whenever we render lists, React needs to decide whether to re-render a member of the list, and one of the things it looks at is whether the key changed. You generally want the key to be a direct, unique consequence of whatever you render in that element. Moving on to the table body; so far we have only rendered the header. In the body we iterate over the fileData data array, which, as we saw, holds objects shaped as field name and value, field name and value, and we render a row for each item. Inside each row we iterate over the fields again, but this time we use the field name as a lookup key into the row object. That's it: not pretty, but pretty basic, nothing fancy. Now comes the interesting part. We can look at this code and say: it has a bit too much inside, too many concerns. We have the templating code, the JSX part, and we also have the data-fetching part, and that is too much; it becomes hard to understand, and it becomes hard to reuse. What if we want other components that depend on external content, content fetched from the backend server? In Plone we have something called a behavior: we would model this as a behavior and then use and reuse it. In React we don't have class inheritance, or at least it is not promoted, so we structure our code in a pattern called composition: we extract the reusable bits of our code as functions and then compose those functions, wrapping one another, to create a new set of props that gets injected into the final component. In React this composition is easiest to do with the HOC pattern, the higher-order component, which is more or less like a Python decorator. If I sketch some quick Python: a decorator is a function that is applied on top of another function, does something, and always returns another function; I'm keeping it really simple, the most basic decorator. The React higher-order component pattern is the same, except that our HOC is a component that wraps another component while acting like a function. The convention is to name it after some sort of behavior, withFileData say, and it is a function that receives the wrapped component as its first argument; we will call that argument WrappedComponent, and when using the HOC I would write withFileData(SomeComponent).
SomeComponent here could be anything, say a component that just renders a div. Now, inside this decorator we have to return a component. We could return the wrapped component as-is, but if we want to pass a particular property down to it we have to return JSX; in React terms we have to use that component, which means writing a new component. This new component substitutes the original one, so whatever props it receives it has to pass down to the original, and on top of that we can add new things, a color prop or whatever. This simple pattern makes it possible to write reusable behaviors, because this function, which is really a wrapping component, can use React hooks and anything else React offers: I could use a piece of state, const [toggle, setToggle] with useState, for example. It is kind of a component inside a component, but one that holds only logic, no presentation. And it lets me structure my code so it is easy to understand and easy to reuse, because the network-fetching code no longer depends on the visual code, the template code, and my template code no longer depends on the network-fetching code. Our withFileData is basically the code we already had: everything, the selector, the effect, the memo and probably the path extraction too, I remove from the view component and put into this higher-order component, in a file called withFileData, placed under src. If we ignore the tutorial for a moment and look at the two side by side: all of that is gone from the view, and through the wrapped component we pass a file_data prop. So in the view I can write const { file_data } = props, destructured, and there is another pattern: since we don't actually use the rest of the props, we can destructure file_data directly in the function signature. We can also declare a fallback, a default value, in case the prop is not passed; I'm showing the JavaScript feature where a default kicks in when the value is undefined, and without some guard it would crash. In any case, let's import the HOC: import withFileData from the hocs path. We don't have an index in the hocs folder and it is a default export, so I import it like this.
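Assembled from the pieces we already wrote, a withFileData HOC could look like the sketch below. Treat it as illustrative: the tutorial's real version may have a different signature, for instance taking a function that extracts the file path instead of hard-coding it:

```jsx
// src/hocs/withFileData.jsx (a sketch)
import React from 'react';
import { useDispatch, useSelector } from 'react-redux';
import Papa from 'papaparse';
import { getRawContent } from '../actions'; // hypothetical relative path

const withFileData = (WrappedComponent) => (props) => {
  const { data = {} } = props;
  const id = data.file_path?.[0]?.['@id'];
  const path = id ? `${id}/@@download/file` : null;

  const dispatch = useDispatch();
  const request = useSelector((state) => state.rawdata?.[path]);
  const content = request?.data;

  React.useEffect(() => {
    if (path && !content) dispatch(getRawContent(path));
  }, [path, content, dispatch]);

  // Parse only when the downloaded text changes, then inject the result.
  const file_data = React.useMemo(
    () => (content ? Papa.parse(content, { header: true }) : null),
    [content],
  );

  return <WrappedComponent {...props} file_data={file_data} />;
};

export default withFileData;
```

The view component then keeps only the presentation and gets exported wrapped, as withFileData(DataTableView).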
Now it complained, it crashed, but it should be reading the map over file_data.data, and it is empty. Let's see. Right: I'm not wrapping it. We have to wrap our component with the HOC now that we've imported it; remember it acts as a decorator, so I call withFileData on top of my component, and it looks like I can get rid of some imports too. Another sanity check. The thing is, the HOC is supposed to be called like this, if I remember correctly; let me check with the tutorial just to save myself some trouble. OK, let's debug it. First of all, is the download performed? It is not. In the React DevTools we have the withFileData component, and we can look at its props. file_data is empty here; no, that one is from the Table. file_data is undefined in DataTableView, line 20, so I have another condition to add there. Well, we fixed it; let me remove the logging. I'm not sure why it keeps breaking on hot reloads, but I have noticed it does so whenever we have an error. And I can see why file_data behaved oddly: we had defined it with a fallback object, and that was a mistake. If I define the fallback as an empty object, {}, JavaScript considers it truthy, the result of the boolean test is true, so even when there was no value the condition passed and the code executed. In the written training we don't have these extra conditions, and they work fine, because there is no fallback at the top; if you add the fallback you run into this, so better to avoid it. OK, moving on. Now we get to the fun part, the one where we pick up speed, and we have a little more than half an hour left. I wonder whether we should stop here for a question-and-answer session or continue; I would like to hear some opinions, and not just from you, David, I know you're here. Well, if nobody is raising a hand asking for Q&A, we continue with the tutorial. Next is the concept of editing a block with a form. This one is really powerful, we have started to use it heavily and it is a really big productivity boost. It basically reuses the schemas that are produced by plone.restapi. Let me check on Volto, and please don't crash on me: reload, open the developer console, go to the Network tab, and add a folder. There we see the plone.restapi endpoint and what we call a schema, the serialization of the dexterity schema as plone.restapi produces it.
I think this format mimics the JSON schema with the properties and the field sets. So basically, no, not the JSON schema. I don't know I'm missing it right now. The schema as the most basic object is an object with title required, which is an array of the required fields. The field sets with, which is basically the different tabs, let's say the field sets. In the properties we have the definition of the widget. So for example, if we track it down, the title field is going to be a type string. The subject will be something really fancy, which is, is an array. So the subject field is the tags so I can, I can write here some stuff right. So this one is an array. And what else? Something like exclude from navigation, you see type Boolean. If we look exclude from navigation, it's just a checkbox, and so on. And you can, you can see here but it uses the, I mean, the widget lookup mechanism and Volta has to understand this. Plong rest API schema. Now, in the client side forms, what we're actually doing is to appropriate the schema and to reuse it and to reuse the form components in Volta to declare JavaScript code with this type of schema that can, that can use the form component and render the form. But not with information coming from a Plong rest API from dexterity content but with something that we define in JavaScript. And the fact that we are manipulating the schema inside JavaScript, it means that unlike blown and unlike dexterity, it is really, really easy to have dynamic schemas and all kinds of fancy things that the kind of things that will take days and weeks of work to do in dexterity and back end code. Okay, okay, so we will define. So the task that we're trying to approach right now is in the beginning I have shown that in the block configuration, we had code that was able to, to change and define settings for the table. So that we can, we can make a table. Basically what we're going to do is more or less implement the formatting. Yeah, well, they page page page page page. So now if I add. If I add a table block not not a lot of table, but a table block. I think it's in common. Now this table block is the editable table from Volto. And this table block has the options right, we're going to add the options and we're going to implement it. So once we achieve that, I'm going to show you the Volto code which is old style and we can compare it to the new style of code that uses the client side schema and you can see that it's a big improvement. Okay, so. Yeah, we have the schema. And we're going to create a new file we're going to call it schema. And the schema in our case but it doesn't have to be but the convention is like this, but this we create the schema with a function. This one could have been missing. We could have it like this. And yeah. It could be just a JavaScript object but sometimes you have to get some some things from the data or you can, you can, a nice pattern for example is that we are able to to conditionally add new fields to be rendered. So for that one, for that reason, we make the schema a function. I mean, we use a function to create the schema. And that function, the convention is that it gets informed data and the Intel object which is the object used for internationalization. And it returns a basic skill. And in our case so we have a single field set with just the description field. And we will define that that description field to be a text area, and is just going to have a title. But I think this schema is just just an example. 
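Roughly the shape of the serialized schema being inspected in the network tab. The field names (title, subject, exclude_from_nav) follow what is shown on screen; everything else is trimmed and should be treated as an approximation of the plone.restapi response.

```js
// Rough shape of the schema returned by plone.restapi (heavily trimmed):
const schema = {
  title: 'Folder',
  required: ['title'],
  fieldsets: [
    { id: 'default', title: 'Default', fields: ['title', 'subject'] },
    { id: 'settings', title: 'Settings', fields: ['exclude_from_nav'] },
  ],
  properties: {
    title: { title: 'Title', type: 'string' },
    subject: { title: 'Tags', type: 'array', items: { type: 'string' } },
    exclude_from_nav: { title: 'Exclude from navigation', type: 'boolean' },
  },
};
```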
Because the real table schema will be this one. And I'm going to copy it from here. And then we can take a look to see what it has. Okay, so in the real table schema object we say title table, we will use the Intel object to internet internationalize and instead of hard coding as a string the title, we will pass an Intel message. And this one the default field set right so we will have lower down the screen in the file we will have this type of defined messages call. Inside we define each international internationalized message, which will have the default message and an ID, and it will be inside an object and we will be able to to call it by that ID. So, we will have messages the variable and inside it will have the various messages. Okay, we will have to to field sets and the fields but I guess the easiest way to see what it does is to actually render that form. So, now let's go back to our data table edit component. And we're going to basically replace this section with the form that reads the schema and will render that schema. And let me just check that I'm getting the proper things. To avoid some possible problems I will just take all of this and replace it. So, now we have the inline form we have a schema that we have to define. And the inline form is something that we can import for from both the components. And the schema will have to import it as well. If you screen if you saw the screen jump that was just file auto formatting. So, we will have, we will import the table schema from this module. And we will do table schema. And we're just going to pass our props. And let's, oh my God. And the browser at our data table. And in the edit, if it doesn't crush is. Hooray, we got something. So, what did we get, let me make this smaller so that we can put it side by side. So, we can look at the schema. So, we have two field sets. One will have only the file path and the field set in the sidebar is rendered with an accordion like this but the first field set is not in India accordion. So, in the default field set will have a file path. And we will have another field set called style that could be any name but that we will give it a title and it will appear here as the accordion title. And inside we will do, we will have some fields, basically six fields fix cell on to 123456. And they will be all bullions as defined here. Right. So, our file path object field, we have defined it with the object browser more link. And it's here, if I delete, basically I have to pick it up again, it works. And we have the schema. Okay, so let's just do some random editing here. And let's see how that one reflects in the block data. Let's inspect the table. Data table view table. Okay, fine. Let me make this one bigger. So, we have whatever was inside the schema became a property in the data and we will look immediately how that was done. So basic through so true. So everything that that was here. And just now, I can see that they have toggled so to false. Let's check. You see, self is now false. Now it's true and so on. So, right away, we can see that editing the schema here in the sidebar has an effect inside the data that block. And that is because that is because we had we we use the inline form component. We passed down the schema and the inform component except except one property called on change field. And basically that one is executed whenever one of the fields inside that schema changes one, I mean one of the one of the widget changes values. 
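A condensed sketch of the kind of schema module being created here: a function so it can react to block data or be extended conditionally, with `defineMessages` for the internationalized titles. The field list mirrors what is shown on screen (a file path plus six table-style booleans), but the exact names come from memory of the training, so check them against the written tutorial.

```jsx
// schema.js -- a client-side schema used by the block edit form.
import { defineMessages } from 'react-intl';

const messages = defineMessages({
  dataTable: { id: 'dataTable', defaultMessage: 'Data table' },
});

export const TableSchema = ({ data, intl }) => ({
  title: intl.formatMessage(messages.dataTable),
  fieldsets: [
    { id: 'default', title: 'Default', fields: ['file_path'] },
    {
      id: 'style',
      title: 'Style',
      fields: ['fixed', 'compact', 'basic', 'celled', 'striped', 'inverted'],
    },
  ],
  properties: {
    // object browser widget in "link" mode, as used for the file picker
    file_path: { title: 'Data file', widget: 'object_browser', mode: 'link' },
    fixed: { title: 'Fixed width', type: 'boolean' },
    compact: { title: 'Compact', type: 'boolean' },
    basic: { title: 'Basic', type: 'boolean' },
    celled: { title: 'Celled', type: 'boolean' },
    striped: { title: 'Striped', type: 'boolean' },
    inverted: { title: 'Inverted', type: 'boolean' },
  },
  required: [],
});
```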
And just like we had before and that's why I thought it was important to understand how the widget works, just like we had before. We are change we are calling on change block. So if, for example, I'm going to separate, I'm just going to copy this and I will undo. Okay. And I will paste it here. Okay. And just to get rid of this ugly color. Okay. So just like we had before on change ID value on change blog. It's similar here ID value on change block data. So very, very similar API. And another, another thing that's really, really important to keep in mind. When writing widgets, don't feel tempted to use inner state inside the widget, because it doesn't make sense. I'm going to write a widget, let's say, I'm sorry. Now I messed it up. I got back to the version. I'll, I'll need to redo this on one moment. And I will carry on. Okay. So back to my don't use the state in widgets explanation. Let's assume, let's assume that I'm writing, I don't know, news item. I'm writing here a value. And you will have, you will have this widget, you will define a state and that state you will just updated with with the widget value because you need to react control component and so on. And what you will have to do is basically whenever that state changes, you have to change the on change value callback. So then, if you have to do that, and basically, you will also have to watch the incoming value to update the state of the widget when, when it changes from the outside. That basically means that the inner state of that widget is absolutely irrelevant. And you don't need it. And the only case where you actually would need a state for a component for a, let's say a widget would be in the case where you, for example, you want to have a model or you want to have a dialogue with a confirmation button, because then here yet it makes sense that I keep an internal state and I only transfer that state to be parent component only when I have the confirmation from the user. So in other, in other cases, no, don't do it, it's not needed. Okay, now, we take this really, really simple formatting function. And we're going to put it in the data table component. And we're going to use that function. So basically the function goes in here in the type table as this. So, before we had celled, right. But now we don't have to hard code the formatting because we can say, hey, just format based on the data and the format if you look at returns. I need to add also the data to the property. So basically that was the data is a field inside the props object. Okay, so this format, it's a function, it gets the data and the data we saw that it has all of these formatting options as Boolean's. And we will basically return this, this object with with the options. And these ones, they will become properties to the table. Now, how do you how do we know which props we should, we should pass. That's easy. You have to look inside the react some semantic. So, not semantic UI but semantic UI react. So if we look at the table component, the props, we can see for example fixed compact basic so basic is here. The cell this here and you can see that they're all Boolean's compact. So if it's a Boolean, then inverted, it could be and so on. So, now let's see if, if our table data table is also formatted. I'm not sure why my mouse is misbehaving. So let's do reduce table complexity. Yeah, it's already responding. So really, really. So, we can really implement something like fancy formatting. 
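A sketch of the small formatting helper described above: it just maps the boolean options stored in the block data onto props of semantic-ui-react's `Table`. The exact set of six options may differ slightly from the video.

```jsx
import React from 'react';
import { Table } from 'semantic-ui-react';

// Map the booleans saved in the block data to Table props.
const format = (data = {}) => ({
  fixed: !!data.fixed,
  compact: !!data.compact,
  basic: !!data.basic,
  celled: !!data.celled,
  striped: !!data.striped,
  inverted: !!data.inverted,
});

// Usage inside the view component:
const DataTableView = ({ data, file_data }) => (
  <Table {...format(data)}>
    {/* header and rows rendered from file_data go here */}
  </Table>
);

export default DataTableView;
```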
But this is a lot easier than what's being done right now in vote on that's a, that's something for us, or for for the community actually now that the vote is the default face of blown. That's something for the community to improve. Because if we look at the table view component that is used for the table block. I mean, this one is fine. That's not not the issue. But editing that data. So we should look for, for example, compact. We have all of these functions that just, just toggle one of the options on and off. Which is straight. I mean, whatever. And then we have a huge list of everything. That's, that's used to edit the formatting options. So in the end we get to quite a big, big file like 744 is not all of this will be dedicated to that part of editing the formatting of the table but the big part of this will be forgotten. So that's why using this schema will greatly, greatly simplify such a boring and usual task I mean I you don't want to have to just right field like this on and on and on because they can be mostly automated with the schema. Okay, okay. So, um, no I just close too much. Let's see what else we have. Okay, so now we will have another block data as a reusable pattern. And then another, another hawk, another higher order component. Concept. And that is with block data source so when we've created our data table. Object block right. And then we have a new one. I mean I'm not going to be able to delete it. So I should just start a new page maybe. Okay, so I have, I have this. This look and feel for the data table block. So I'm building here in the main part we have some, we have a file picker in a, in a complex website. This would become something that you would see on and on and on repeated to many, many components for example, instead of the data table I might have, I might use that CSV file for to plug data inside a chart or to plug data inside the something else. I don't know the map. Who knows. So this, this sort of code we want to reuse. And we can do that with higher order component again we just have to restructure our code a bit so that in the end, we're having these conditions on and on and on because we don't actually care except when we. So, we don't really care. If we don't have the data. We don't, we don't care about this right, we only care about what happens when we have the data. So let's put that aside, let's make it a reusable behavior, reusable higher order component and be able to reuse it in another block. Okay, okay, so just like before, we will create a new HLC with block data source. We will do it inside the Hawks folder. And I'm going to take the code. And I'm going to just stick it here. And we'll look at what we have. I will put side by side. And we can check. So, basically this pattern feels in the main block side, the view, let's say, and it's going to render that file picker and it also feels the sidebar with with the fallback form. So, for example, if I try to clean up file. I wouldn't have this sidebar portal, but let's just pick the component from the tutorial because it will be easier. Okay, I cannot really program while talking. That's, I'm pretty sure that it happens with everybody. Okay, so the tutorial says also to update the Hawks in the index file. And then we will say export with block data source. Like this. Okay. And then the edit component, I'm pretty sure that it's going to be greatly simplified. Yeah. So, we'll just have this as the return value. So, I'm on my right side, let me let me close. 
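The convention used for the hocs folder is simply to re-export each higher-order component from its index so imports elsewhere stay short:

```jsx
// src/hocs/index.js -- re-export the HOCs so other modules can write:
//   import { withFileData, withBlockDataSource } from './hocs';
export { default as withFileData } from './withFileData';
export { default as withBlockDataSource } from './withBlockDataSource';
```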
I'm on my right side that the table component. I'm just, I'll just replace all of this. And I'll just put return. And I'll be able to clean up imports. Okay, now, I won't get anything. If I if I'm going to check my browser right now. Yeah. And I will clear it. No data, right. There's no, no smart behavior. And I get the default form. That is because we haven't actually used the Hawk yet. So if we go back. And I'm going to copy this line and rewrite this. I will need I will need that one. So I'm not going to delete it. Okay. I need a web data source. Collective, both on that table. We block that a source. The Hawking itself will be a function that a second. This one. It will be a function that gets the options. And it will return another function that gets the wrapped component. So, if you've seen in Python. Let's see. If you have seen this, let's say, view, right. And then we have. We have a decorator browser view. And I will call it with path equals index and permissions is view, right. So that would be a decorator that receives arguments. So that means that this decorator when you implemented, it will have to be first a function that gets the gets the arguments, then it needs to return the actual decor decorator. So it would be kind of like this. The browser view of the function. And I would define decorator that would return the function that would decorate the function. And it will return whatever, but it needs to return the function or something about that function. And I'm, I need to return here. The decorator. Right. So, just to, to be able to have access in a closure to be to the options but also in here I can, I will have action. I will have access to the options. Okay, so the function inside the function, and so on. So, in this case here, we have the same case right we have with block data source is not the real hook. But it's, it's a function that returns the hook. It's a function that gets the options, this one's. And then it has them in the closure, and then it returns real. So with this one, basically we, we had the code that was before in the table data table edit. And we have generalized it and we have it wrapped and then put aside so it's a reusable thing and we can compose it everywhere, like, just like this so. Well, it's getting late, I think, I think we are our time is up. And I'm really glad that you were able to participate in the training so far. And just to have patience and watch whatever I'm doing here and I'm explaining and I hope, I hope that you have picked up a few tips and tricks and how to work in total. And, yeah, if you want to stick around, we'll see each other in the chat. And then, yeah, if you ask, if you have questions or comments, add them in the chat. It will be really a good great to hear from you tomorrow morning, tomorrow, the, when we resume training, if you have tested, and if I have tried to follow our tutorial here. Otherwise, yeah. Thank you, and see you tomorrow. Likewise. Thank you, thank you. See you. See you.
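A sketch of the "decorator with arguments" shape just described: a function that takes options and returns the actual HOC, so both the options and the wrapped component are available through closures. The body is a stub; the real HOC renders the file picker in the block body and a fallback form in the sidebar. The `field` and `placeholder` option names are illustrative.

```jsx
import React from 'react';

const withBlockDataSource = (options = {}) => (WrappedComponent) => (props) => {
  const field = options.field || 'file_path';
  const hasData = Boolean(props.data && props.data[field]);

  // Only the closure structure matters here: options -> HOC -> wrapper.
  return hasData ? (
    <WrappedComponent {...props} />
  ) : (
    <p>{options.placeholder || 'Pick a data file first'}</p>
  );
};

export default withBlockDataSource;

// Usage, mirroring the edit component:
// export default withBlockDataSource({ field: 'file_path' })(DataTableEdit);
```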
Objective: With the constant growth of open science and access to information, access to grey literature is also increasing. Little is known about grey literature and its use in Slovenia. The purpose of the study is to investigate the use and prevalence of grey literature in Slovenian university libraries. Methods: A survey will be conducted in Slovenian university libraries using an online questionnaire. The questionnaire will be designed to investigate librarians' knowledge of grey literature and to identify the categories of grey literature stocked by libraries, the methods of acquisition and organization, and the influence of open science on access to grey materials. Anticipated results of the research: The results are expected to show that the use of grey literature is increasing with open science. The survey will also illustrate the different types of grey literature in use and the knowledge of Slovenian librarians about grey literature.
10.5446/50074 (DOI)
So let's say we start. Hello everyone, I'm Peter, I'm in Zurich. Welcome to the Plon6 classic UI theming. We will be doing three slots each of them about one hour, there's a short break in between and I hope you could prepare yourself or your computer and that it works. If you have any questions write in the Slack channel or just speak up here on Zoom. So I will cover one small part quickly and there's three web customizations. In Plon you always could do some web customization by changing the logo and stuff like that. And with Plon6 some things change. So I start up my site quickly. So one thing which you probably know is how to change the logo. I won't go into much detail here. It's a simple form. You add the logo and it will be displayed automatically. There is one other pull request to also change the fabric on but that wasn't merged yet. So the big new thing is we have now with bootstrap 5 the possibility of custom CSS variables. We introduced with Plon 5.2 version the possibility to add your custom style sheets again. Maybe remember that from Plon 4. Otherwise I'll quickly show you how that works. So you go to your Plon site. Set up for the theming control panel. There we have already a theming control but doesn't matter now. You go to advanced settings and within custom styles you have a simple field where you can add your CSS that you want to add to your site without theming anything in your Plon site. Just enter your styles here and you already see that heading one has a different color. If you go back to Plon site you see document first heading got that color. But with bootstrap 5 we got the CSS variables. So this makes possible to change a lot of stuff visually in your theme or in your site without creating a theme without compiling any styles. So just to give you an idea or an overview first this is the link to the series CSS variables that are available now. These are defined by bootstrap and we are using them as well. So we can pick one of these variables and just add it to your CSS. Instead of header just color red I define my variables here in the this root definition. I have here my green and my orange and I use the my green for the heading and also the orange for the body text color. What I'm also using here is a standard bootstrap variable. If I go to save here you see that I changed the background, changed the heading color to the variable name and also the standard text is now my orange. So yeah to mention there's just some variables. I think these variables will grow with the development of bootstrap and with what we do in Barcelona in the future. So let's head over to how to create a theme based on Barcelona. So we are going to create a theme on the file system. We'll be using the recommended way to make your life easier. That's based on Plan CLI. Now for the Plan 6 we created three new templates. One is for creating a theme based on Barcelona which we'll be doing now and the other two are covered by Stefan and Mike later. We quickly create a new Python package. You can enter whatever you want here. Just going with the defaults. So we have now my Aaron.name package. Just enter this package and then what we do here is we create our clone theme. So within the you change into the package and add a theme. You can also do a Plan CLI minus L and you see all the templates that you have available. Let's add a theme based on Barcelona. We will recreate the clone theme Gritzy Busy that I created for this showcase or training. So I just give it a name and it's added here. 
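Roughly what the "custom styles" field shown above could contain: custom properties defined on `:root` and reused further down, plus one standard Bootstrap 5 variable. The color values are made up; only the pattern comes from the walkthrough.

```css
/* Pasted into "custom styles" in the theming control panel --
   no theme package and no compile step needed. */
:root {
  --my-green: #00813f;    /* illustrative values */
  --my-orange: #e07b00;
  --bs-body-bg: #fcf9f2;  /* a standard Bootstrap 5 variable (body background) */
}

h1 {
  color: var(--my-green);
}

body {
  color: var(--my-orange);
}
```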
To run clone the first time or to get all the dependencies it will Plan CLI build. This will run build out for you. Get all the stuff you need to run a Plan CLI. Meanwhile I can show you what was created here. Within your package the namespace there was a theme folder created. We now have the basic files here that we need for our theme. To start my site I just run Plan CLI serve and if I go address I have a plain clone instance running but no site yet. So we just add Plan CLI. So this is your Plan CLI running. Then we go to the site setup to our Plan Control Panel. Go to the add-ons and here you see this add-on here. If I install it I already have some kind of coloring here. These are just default styles. So to get started with your Plan CLI I also already skip this part. You leave the Plan CLI running. Just create a new terminal. I go into the package and then into the theme volume. So here in the package we have the package.json file that defines all the dependencies what you need to make your own theme. From here you just run an npm install and npm will get the stuff dependencies what you need to work on your theme. What we also included here is a watch script. This will compile your scss file which is in the styles folder to the actual theme sss that you will be using later. So if you run npm with the watch command it will read the scss file and create the scss file. If we go here and do a refresh, maybe a shift refresh, the boilerplate scss is gone. So we can start with styling our own theme. As I said we are recreating the pretty busy but you can make your theme however you want it. It's always good for a theme or the most important part or what gives you guidelines how the theme should like is the logo. So we just add the logo as I showed you before. Here select your logo and press save. We have now an idea of what colors we'll be using and we will start by adding those variables. To say bootstrap has really a lot of variables and I can stress it enough you can change almost every aspect in your theme just by changing variables. In the beginning there are the color variables, base variables and here you can see the important part where you will be filling in your color values then. So we will be adding some colors. It's important that we add those colors before we do the import on basuleta or bootstrap because the way scss works is that you have to find it before your imports that the imported files can work with. So if I save the watcher compiles the files again and we should see some color changes already. So to make this really a kitten theme we will be changing more variables. One besides the specific variables we have overall variables that we can use to change our theme. These are variables or properties like rounded, shaded, shadows and others. In case of basuleta we already have a little bit of rounded corners but for this theme we want to make them more rounded. For completeness I add the property enable rounded but set the border radius for almost everything to one REM. So if I wait for the stars to compile if I press refresh you see most of the borders now get more rounded than before. So let's change some more variables. We are changing the background of the body and background of the breadcrumbs. So these again are variables from bootstrap that I add here. I'm reusing my defined color from above. Press save and it got even more pink. 
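The kind of SCSS being written in this step. `$primary`, `$enable-rounded`, `$border-radius`, `$body-bg` and `$breadcrumb-bg` are standard Bootstrap variables; the color values and the exact import line are illustrative, so check the import path against the file plonecli generated.

```scss
// styles/theme.scss (Barceloneta-based theme)
// 1. Own variables FIRST -- they must come before the import so
//    Bootstrap/Barceloneta only fall back to their defaults.
$my-pink:   #f5c6d8;   // illustrative colors
$my-purple: #6f42c1;

$primary:        $my-purple;
$enable-rounded: true;
$border-radius:  1rem;

$body-bg:        $my-pink;
$breadcrumb-bg:  lighten($my-pink, 8%);

// 2. Then pull in Barceloneta (which itself pulls in Bootstrap 5);
//    the path below is an assumption -- keep whatever the boilerplate uses.
@import "@plone/plonetheme-barceloneta-base/scss/barceloneta";
```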
Most of the stuff you will be doing in CSS but some things like basic structure of your theme is in the index HTML and for some parts it's easier to just add classes here. This is the index HTML that will be that is our the base for our theme. What we want to achieve is we want to make the breadcrumbs the navigation the breadcrumbs not going to the full width. So we add this container class. This also comes from bootstrap and it's part of the grid system of bootstrap. I'll add this to the main navigation wrapper and to the above content. Just save it. You will see that now we have the navigation as well aligned to the content here. Put a link into the bootstrap grid system. I want to have more information. Once I know this don't change too much in index HTML. If you start very important are the IDs. The rules.xml is the file that will put your content from your clone site and replace it in the theme file here. Fonds are an important part in your site or in your theme. Selected two fonts from Google Fonts. We just added here. So import the fonts. Still above the other imports. And I will change two other bootstrap variables to make the things active or visible. You can change the heading font and we also changed the overall base font to a different font. If you want to use the fonts or want to see the fonts within the time, tiny MC editor as well, you should go for the input version. Alternatively, you can put it in the HTML as well. To prevent some kind of font loading side effects, it's maybe a good idea to add these pre-connect text. No matter if you import the font itself in the HTML as well or in the CSS. So to make these fonts that we put in here a little bit more obvious or visible, we add some more. We color the font. Just another set of variables. This will be the size of the body text and this will be the heading color. The heading color is now also the primary color that goes all through the theme. There are extra variables to change the link color separately. But for now we just stay with the primary color. And we also have variables for all kinds of font sizes. Make the h1 and h2 a little bit bigger. So h1 and h2. This body text is now also the same. So this is just a short view on what you can do with variables. But you can change almost every aspect of the look is implanted with this. So we change the primary color and the fonts around the corners. This is also true for the edit forms. You see here also the input fields around it now. We have the fonts in here. Also styled. And puts it to change very detailed aspects of your theme based on with variables. So to complete or prove our theme, the border of the navigation is still a bit pointy. So here I add some styles. Especially or specifically for those input forms of the life search for example. So I search for the selector at the class. And now it's important after the imports you add your own styles. And here you also have access to all the bootstrap mixings and utilities. Here you can just use them as you like. Okay, shift reload helped. Now we have connecting input box to the button for the search. Let's head over to the navigation and breadcrumbs. So we set the border radius for the navbar and in the plume for the breadcrumbs we also change the border radius down here. So if you go here, now have the navigation nicely rounded. There's one more thing that I want to change. Now the header was done in with flex and we are able to move the contents inside that flex box. 
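A sketch of the font-related variables being set here. The two Google fonts used in the video are not named, so the families below are placeholders; `$font-family-base`, `$headings-font-family`, `$headings-color`, `$font-size-base`, `$h1-font-size` and `$h2-font-size` are standard Bootstrap variables, and `$primary` is assumed to be set as in the earlier snippet.

```scss
// Still ABOVE the Barceloneta/Bootstrap import:
@import url("https://fonts.googleapis.com/css2?family=Nunito&family=Baloo+2&display=swap"); // placeholder fonts

$font-family-base:     "Nunito", system-ui, sans-serif;
$headings-font-family: "Baloo 2", $font-family-base;
$headings-color:       $primary;       // headings follow the primary color

$font-size-base: 1.1rem;               // slightly larger body text
$h1-font-size:   $font-size-base * 3;
$h2-font-size:   $font-size-base * 2.25;
```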
So I like the ID portal header, they align items to the end of the box. And if I do a refresh, it's nicely aligned on the underside with the logo. So that's basically it for a simple theme based on Barcelona. I haven't covered CSS variables here very much because it's still developing, but you can change CSS variables within your theme as well. Since the browser interprets this directly, it's a good idea to put it just to the end, similar as the sudo web customization would be. Just edit here and it should work. So that's basically it from the Barcelona perspective. Maybe as an explanation, we created with the new Barcelona theme an MPM package, which we'll include. And you just can extend from that. We have two files just for explanation. We have an import of Barcelona at the SDSS that includes all of bootstrap plus the Barcelona data or our clone specific components like the navigation or other small elements that we use in clone. If you don't use any of the components from clone, but just want a clean start and working UI, you can also include base SDSS. There we only have the styles included to make the edit forms and other stuff work. You can start on top of that. Stefan, did I miss something or are there any questions about the contents? I didn't see any questions in the chat. Feel free to ask. Or I would say we make a short video as a closing word. I can't stress enough to get familiar with all the bootstrap variables because it makes your theme in life so easy. And I really enjoyed working with it or doing themes based on that. Maybe a question for me, for Maritz. Do you use this approach the most for your clients? Yes. Yes, because I like to have everything within the same look and feel. I don't make a difference between logged in or normal users and editing users because it just works and it's so much easier. What I do in 95% of my theme work is just basing my theme or also the site on the functionality of Plon that I already have. Also, if you use those variables, all the views, templates of add-ons will be almost automatically styled if they are updated to the Plon 6 and bootstrap 5 markup. One part to make this happen is in Plon 6 we really refactored or updated all our Plon core markup to bootstrap compatible markup. You do change your variables and everything within your Plon site will follow those variables. So almost no extra styling for whatever you want to do. Just use the bootstrap components and it will make your life quite easy. Any other questions? Okay, then I say thank you. Let's do a break till 45. Then Stefan will start showing you how to theme Plon from scratch without using any of the Barcelona specific codes. Thanks for listening. you you you you you Change your view. I can't hear you. How is it now? It seems to auto adjust doesn't work with zoom and my microphone but now it's okay. I can hear you. I can hear you. I can hear you. I can hear you. I can hear you. I can hear you. Okay. Let's start with the next part. Quick check, Mike, you can hear me. Everything fine. Screen is visible. Yeah, sounds good. Okay, thanks. So let's proceed with the next step or next chapter in the seeming training. We are going to create a seam from scratch, which is similar to the approach Peter showed already. What small details are a little bit different and gives you more or less better or worse options, however you see it afterwards. The seam is built from scratch, which means there is no dependency to Barcelona or Barcelona styling. The only thing you need is bootstrap. 
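Own rules go after the imports, where all Bootstrap mixins and utilities are available, and browser-interpreted CSS custom properties can simply sit at the very end of the file. The selectors below are the ones touched in the video; the exact rules and the mixin call are approximations.

```scss
// 3. Own rules AFTER the imports -- Bootstrap mixins are available here.
#portal-header {
  align-items: end;          // line the searchbox up with the logo
}

#portal-searchbox .searchField {
  @include border-end-radius(0);  // connect the input to the search button
}

// CSS custom properties are read by the browser, so they can go last,
// just like the through-the-web customizations shown earlier:
:root {
  --my-accent: #{$primary};
}
```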
You can create a custom seam or custom UI or if you think in micro applications, stuff like that, you have a clean starting point and you have full control over whatever you want to do. So if you have any questions or questions, please leave them in the comments section. I can help you. Most of the stuff I do on the console is done with clone CLI. Ask Mike after my session. He is a good person to ask questions about clone CLI and what you can do or what you not can do with this thing. So I will proceed requirements. I guess who tries to follow the training also already made some steps that are required for my part as well. So I will skip that. And we just start creating a new seam here. So in the documentation, you are getting asked some questions for now I will proceed with defaults they're okay. The amount of questions is different based on if you have configuration files on your system that answers for example who's the author or your email address or stuff. It's taken from a configuration file. And if you miss a question, check the configuration file. So, the file structure has been generated. As we see in the console output. Then we are adding a seam to the actual package. Quick explanation we created a package that is called clone seem Tokyo. It is at this point basically an add on for blown without any functionality. There is no content type in there are no, almost no templates in it's just an empty package, and we start adding a seam with clones CLI. Okay, you have to step in the package. So here, some commands of clones you lie for create our for creating packages some stuff adds templates or views or stuff to an existing package for that you have to be inside the package already. That was the mistake I made here. So it adds seem basic seem basic is a bob template. So basically, a bunch of folders files configuration files that are put into our actual package. And we replace some variables here, for example, the same name. You can go with your own values. And then the client, clones CLI asks me to create a repository or add my stuff to the local git repository. This is more, more a security step. If you make a mistake or if you want to revert stuff to go to few lines before this gives you the option later so it's always good to do that and answer with yes at that point. So we have to see what you created. So we have to seem in. Next is to create or run the build out. The whole build process is covered by clones CLI built. So if you see the green output on my screen. We have documented that in the seeming documentation as well. It shows what actual command is fired on the console. Basically, virtual environment is created the requirements with our install with PIP install the build out is put straight and an actual build out runs in the background here. And build out we know we have now some minutes to talk a little bit. I like the, I like to do the steps manually at some point for the training clones CLI is a good start. Whatever you prefer. If you run into an error here if there are dependency issues, you will get output on the console and the process stops. If everything is works as expected. We finish with a line that says everything is done. For me, the generated stuff or as far there is no error everything is created and we can start now. So I recommend using code editor development kits somehow. In my example I will use Visual Studio code it's up to source you can download it and use it. 
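The commands run in this part, collected in one place. The package name follows the example in the video, and `theme_basic` is the bobtemplates sub-template mentioned above.

```sh
# Create the add-on package, step into it, add the basic theme, build and run.
plonecli create addon plonetheme.tokyo
cd plonetheme.tokyo
plonecli -l                  # list the available sub-templates
plonecli add theme_basic
plonecli build               # virtualenv + pip install + buildout
plonecli serve               # application server on http://localhost:8080
```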
Just we are already inside the package so you can just open it by codes dot on a Linux based system like Mac. It opens the code editor. We are inside clone seem Tokyo package seem has been added already. So I'm going to get some errors here. I just ignore them. The editor recognizes virtual environments, Python environment stuff. I'm not an expert answering that I ignore it or reload the window when the editor wants to reload that helps a little bit. I guess I have to look up here quickly. I guess the next step is to start up the instance and I recommend doing this inside the code editor. So I open a terminal new terminal. It gives me basically a shell in here where you can do the same like you do in a separate window, but you keep everything together in one editor window and you keep track of different running processes and commands. The instance for around starts up the application server. This is my fire war here I have to allow connections for that. And as mentioned in documentation. It shows you the application survey serving on local host port 8080. This is configured in your build out configuration. There is documentation about build out how to change that there is good. There is documentation if you want to run that on a production system. This is not covered by that talk, or by that training. I opened it up in my Chrome. When you have a new window. I quickly show it with Firefox here. I did this already before that's why I can show it. Once you go to the application server you will be asked for username and password poses admin admin, and it's documented in the build out configuration so there are your login credentials. Normally will get this ask after the first step out. So I am already here. And a look to the, if I look to the management interface it looks like that. This is a good sign everything is built everything is up and running everything works as expected. So I am going to go to the front part to go to the root. This screen you have already seen from Peter. We do a little thing a little bit different now we open the advanced tab. Then you have a little bit more information. You can directly install add ons with your package directly. You can also install a clone sim Tokyo and click on create clone site. This installs a clone site and installs the selected package in one go. Okay, so we have a little bit here to see. Let's check out the documentation quickly. Okay. Um, what we see here is blown up and running we see the edit bar, and we see it's not completely broken, because the phone markup fits to bootstrap. That's why we get some basic styling including colors and stuff. I will explain that a little bit later. The first step we do with the new setup is we add some columns clone always work with some columns. We don't have any CSS or any styling active so we have to do that. We have to cover that part manually. This is an example for overrides. We override the main template I explain a little bit more about overrides later on but let's start with that to have it in place and we can touch it later on. Okay. So what brings you apart folder. This is where all the original or the core stuff is located. That's, that's the packages. Basically you download it in a very special orientation. And here inside the the omelette folder, you see all the original packages. So when you copy over a template from clone up content types where the content type templates list in or blown up layout where lots of units are located in. 
I recommend not downloaded from somewhere you can use the master branch from from GitHub, but I recommend always copy the template from here from your local setup, because this is the exact version of running at the moment when you develop the stuff. The main template is under products. CMF clone browser templates. Here we have two templates we have to copy over it's the Ajax main template and also the main template, which we have here. So I copy the files, scroll down to my package and copy to the source folder I documented in the documentation. Create a new folder for that. And paste it in there. They're new. The VS code sees it's a new file when you do the diff you or the source control tab you see that are two files I've created. That's the only difference that is not committed at that point of time. Okay, in the documentation. This is covered by this lines here I added in the full past so you can see where I copied it to what location I actually copied it. And then we have to register the template. That's done in the configure part in the configure CCML paste it in. Save the file, go over it. And the last thing we have to copy is the main template PY. Again, scroll up to the original part. And then we have to copy it to the inside template it's directly in the browser folder there is our main template PY. In plan for it was just a template that has been changed later. I'm not an expert but here are some macro definitions that addressing an issue with recursion errors. If you know to fix this let us know. Okay, we have added a bunch of files, especially we touched configuration CCML. I always recommend to restart the instance. There are tools like blown reloads for Python files or as well configurations but in at the time where we have SSDs in our systems restarting a soap instance doesn't take that much time. So if you restart the instance and it comes up as expected, it's always a good point you didn't break anything. So, at this point, I go to the start page I reload and I see nothing changed. I expected, I can I'm not sure if it works or if it doesn't work. The only thing I can do is just add some characters it's HTML so we don't break anything if I add something here and if I reload. And then we can do the same thing to result in the template so this means the template registration we have in our package now just works as expected. Good. Let's have a look at the documentation. I cover one chapter with conflicts that says, if you register a template that is already there under its name. You have it with the main template. I'm talking about this part here. If you do this and name main template is already taken in blown, you get a conflict error. You can avoid this by adding a seam layer. The same layer is registered or is created in your team. And that's the or when you activate when you install the package, this layer is activated as well. So, uninstall the blown package, the layer has no effect. That's an option to have different packages touching more or less the same templates or the same parts of blown without conflicting so that's the hint here, explained under conflicts in the documentation. Okay. We have not built our team on top of possible and it has so we have to take care about some stuff. One stuff is columns. The main template doesn't care about columns by default. We have to do it with CSS with a mix in in the in the in the buzzer. I can show you that later. That's also an option of adding columns. 
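Roughly what the registration pasted into configure.zcml looks like. This is an approximation: mirror the original registration from Products.CMFPlone's browser/configure.zcml, the class path assumes main_template.py was copied next to it, and the layer interface name is the one bobtemplates generates for a package called plonetheme.tokyo (check the generated interfaces.py).

```xml
<!-- configure.zcml: register our copy of the main template, bound to the
     theme's browser layer so it only wins while the add-on is installed. -->
<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:browser="http://namespaces.zope.org/browser">

  <browser:page
      name="main_template"
      for="*"
      class=".main_template.MainTemplate"
      permission="zope.Public"
      layer="plonetheme.tokyo.interfaces.IPlonethemeTokyoLayer"
      />

</configure>
```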
I like the idea of having full control over the main template so I just copy the file, paste it in to the main template. I can do a.pt save it and give us some price if I reload now we see we have some columns. I have a container here and next to the container is an invisible column at the moment. I have to add some content quickly to make that I have to add some content. What I changed is basically I added a container to give it or to limit its width. It's not no more full width at that moment I added a row and inside the row I added columns. I can also over view that when I close the article which is the main content in Plone and when I close the two sites that contain our portlets. This is an option to give one column to the content and give two columns for or give one column for both columns. You can also remove statically remove the portlets stuff with that and have just one content column which is the idea of Plone Seam Tokyo. The Plone Seam that is available for Plone 5.2 basically removes columns in the main template and just serves one column. I will explain a little bit more about that later or feel free to ask questions. If you have questions on that on the seam just paste it or ask it in the chat I will walk through them after I'm finished. Okay. So as I promised there are columns you cannot see them. Let's add a little bit content now. Add new folder. I will just create a demo folder. I give it some Ipsum text here. Let's add a page. Also some copy and paste to see a little bit afterwards. I do this only to have minimalistic styling afterwards. Add some headlines as well. As you can see the editing stuff at this point works. It's fully available. We have the tiny MC. We have some basic styling tabs are also working. We have what we call back end and Plone. Basically the editing pages are fully functional even without any Barcelona theme. So we have just bootstrap. We have Plone and it works together somehow. But somehow it works because all the editing stuff is also bootstrap markup and that's why we can use it here. As promised there is a navigation portlet on the right end here and our content we created. We also see the notifications here. Just a short example and not part of the documentation. But if you want to touch it or if you want to give that some styling. So we are talking in the main template. We are talking about that notification stuff here. We have global status message. Give it a container and we should have this tool. We have it not here because it's gone when I reload the issue. I have an I reload the window when I save it again now it's also uses the container with that is defined in bootstrap. Okay, let's proceed to the next. I do this at that point because I need some columns some organization in my team before we go further. So let's go to the build process. Okay, short. Let's show the grids as CSS before this is part of possible meta. I added a link to the documentation. This is basically some magic that relies on classes inside the body tag classes that indicate if there is one column two or three columns and based on that class information. It adds some mix ins to give our seem one column one column plus one aside or plus two aside. This is basically the magic. You can literally copy that as CSS into your team and use it. Yeah, that gives gives you exactly the same functionality that possible and it deserves if you need columns in your seat. Good. So let's go to the build process. 
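The structural change to main_template.pt boils down to this kind of Bootstrap grid markup around the existing slots. The original template of course keeps all of its TAL/METAL attributes; this sketch only shows the added grid classes around the well-known column ids.

```html
<!-- Inside main_template.pt: limit the width, lay out content + portlets. -->
<div class="container">
  <div class="row">
    <article id="portal-column-content" class="col">
      <!-- main content slots stay here -->
    </article>
    <aside id="portal-column-one" class="col-md-3">
      <!-- left portlets -->
    </aside>
    <aside id="portal-column-two" class="col-md-3">
      <!-- right portlets -->
    </aside>
  </div>
</div>
```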
I have to say what we see here is a pre compiled as CSS file that is shipped with the Bob template stuff. It's not generated by us at the moment. That's what we do now. I switch. Quick look at the code editor quickly. I recommend here in this tab we show the Python process run in its shell with the Python process if you click that little plus you get a new console. And here we do know the theme compiling stuff. Step into your seem folder, which is here. So now I'm in the same folder. I do not explain that in detail because Peter mentioned it. We are now in the level of the same folder where our package chasing this. Here are the scripts in that you can run with npm. So I do npm install to add all dependencies. Magically there will be added a node modules folder in the same folder. You will see shortly here. It's great out because it ignore ignores that folder so it's not added to your code repository. Here are all the dependencies including bootstrap including bootstrap icons and stuff like that. Okay, looks good. Everything runs through no errors. That's what we like. So now we have npm run build as the command to actually compile the CSS from our SAS files. I documented all the paths in the seeming documentation. Basically we have this scss folder. Here we have a base scss with some small paddings and stuff. We prepared also something for the port of footer. It's basically empty except some paddings and margins we added here. This base is included in our seem scss and what you see here is everything we need to compile our seem based on bootstrap. We have some variables. Peter explained something about that before. They are just bootstrap variables with some colors. I added tone colors here and the secondary which is just a gray somewhere. Then I import bootstrap. Bootstrap is located, as I said, in the node modules folder. I want to explain that. This works because we have a little parameter here that says our load pass is node modules. So if you have a configuration and rename node modules or your node stuff is somewhere outside your seem folder, you have to take care about that pass. We have this parameter here. That's why node modules is the default and inside node modules we have all the stuff we need to compile it. So if you have two options, you can include everything of bootstrap or you can include only what you need. There are some required parts, functions, variables and mix ends and a lot of optional things. I recommend to include everything except you have a project that is very big and it's 0.1 kilobyte is important at the CSS size at the end, but for now I would go import everything. Last active line. So for example, if you have the Peter mentioned a node package, if you add this to your dependencies or if you install it manually, you can run this import to activate it quickly to better readability. You can run this import to actually activate the grid. So I look at it again. The only thing I have is base. Again base is inside the CSS or base CSS file. Don't bother with the underscore. I copied that style from bootstrap. It works. It imports base and it compiles everything together. The result is inside the CSS folder, a seam CSS and a mini five version of that. How do they go into our team. We have manifest configuration and inside the manifest. They are referenced. So basically here you can change that to something else. 
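A condensed view of the theme.scss structure described here. The bare `bootstrap/scss/...` import works because the npm build script passes `node_modules` as the load path; the colors are illustrative, and the optional grid import is only valid if that Barceloneta npm package is added as a dependency.

```scss
// styles/theme.scss (theme from scratch)

// 1. Variables first
$primary:   #00734a;   // illustrative "Tokyo" green
$secondary: #6c757d;

// 2. Bootstrap -- everything, resolved from node_modules via the load path
@import "bootstrap/scss/bootstrap";

// Optional: Barceloneta's grid helpers (one/two/three column layouts driven
// by the body classes), if that npm package is installed:
// @import "@plone/plonetheme-barceloneta-base/scss/grids";

// 3. Own partials last
@import "base";        // _base.scss: small paddings/margins, footer, ...
```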
They are generated by our team admin that CSS is static when we create the bottom plate, you can change that to something else you have to take care about if you change name of CSS files and stuff. Another thing I want to mention here. A rules is empty. This is basically the point where you add a Diaz rules XML. When you don't have a rules XML Diaz is turned off. So we serve our templates as they are in the packages. The main template or is served without any modification. The page templates, the structure. As you see in the scene, there is no index HTML. So we serve exactly what we get from the application server. We serve exactly what we get from clone packages. Okay. Last thing I recommend is start them. Watcher. We have a new tab down there. So no watch always is taking care about when you change something. I do not do a lot of examples here because Peter did this already. You can ask questions if you want to see something special I can give my best and show it to you. Okay. Happy seeming. That's the point where the fun starts. I have to say, if you get to that point now, the whatever comes here is, in my opinion, much more fun than it was in the last years, because everything is on the file system you have everything set up with npm you have tools and stuff running that really helps you seeming in an explicit way and there is no, you don't have to start an instance and scripts and stuff and hope everything works together somehow. In the most cases we work with that since like one and a half year now, it just worked. Let's show that. I shouldn't say it just worked before I just tried something. So if the fall watcher is starting again here, we don't have to restart clone at that point because our CSS is changed and you can show it directly. So if I go to the team and do a reload. Nothing happens, do a shift reload to directly get the new file. And you see this is a little bit darker now and here we have our fancy green. And what we do with our team is takes effect now. Yeah, here is also the example. I can show some examples about enough bar and stuff when we have a little bit time later. Next thing I want to show is logo. The template is shipped with some some templates. They are just prepared. If you ask questions why is that that empty over there normally that this is the navigation or let's say navigation, the story of blown on top of that there is normally the logo and all the, all the search stuff. This are overrides, I explained what overrides are in a few minutes. The footer is just empty. That's or this is the footer the header is just empty that's why there is nothing. The sections view lead is what we see here as mitigation, and I want to extend the navbar brand. And show what what is that. What's the navbar I have to do with the docs. I always say your documentation now is the bootstrap documentation. Check it out. I guess I mentioned it somewhere in the, in the trading docs. But here you have components and a lot of boilerplate template you can literally copy and paste. And what I added to clone or to this example as not bar is basically a copy of one of the examples we have here. I have to. Yeah, I search it again it's a combination of search some navigation including not bright basically that it's cleaned up and of course dynamically navigation links in clone. So, if you have an example here and you want to try out something you can do it. I also have a document and an example of how to add a logo navbar brand and just put an image inside and give it a size. 
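The essentials of the header markup being assembled from the Bootstrap examples: a navbar-brand link wrapping the logo, plus a plain search form. Only the `SearchableText` field name and the search target are Plone-specific; the logo path is hypothetical and should point at wherever the SVG was placed inside the theme.

```html
<!-- Inside the header/sections viewlet override -->
<a class="navbar-brand" href="/">
  <img src="++theme++plonetheme.tokyo/logo.svg"
       alt="Site logo" height="40" />  <!-- adjust the path to your theme -->
</a>

<!-- A minimal search: a form posting SearchableText to the search view -->
<form class="d-flex" action="@@search" method="get">
  <input class="form-control me-2" type="search"
         name="SearchableText" placeholder="Search" aria-label="Search" />
  <button class="btn btn-outline-light" type="submit">Search</button>
</form>
```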
So that's what we do here. I copy that code and paste it in here. What code is always nice for matter to code. Remove that save that. This template is already in overrides. You don't have to restart the instance at that point it takes effect. It will give an error because there is no file there. So, the navbar is gone and we have clone, which is the alt tag text. I need an SVG now. I literally borrow it from here. Save it to our development. I'm going to add a folder. It appears here now. Folder with green and we have to hear only thing I have to take care is the file name should be the same, obviously. Maybe I extend pop templates and add a logo from clone in one or two variants to show that we start this step. But actually, now we have to clone logo in so it works. I did comment now I changed also that line. It's just a link. So the logo acts as link to the homepage. And just one note on the search thing it's just a search. It's nothing special. It's just a form that points to search and it has a name searchable text that's what you need to give you a search. And then we have a program which is pretty sure that it comes into my lower middle text number. So we find our demo page and so the search works at well navigation is dynamic. We don't have to drop down navigation that you know from clone default. You can add one level of drop downs with bootstrap for the for the bus of a native team. More complex navigation because more level of of drop downs are supported there. Okay. We added a logo. Next is content type templates. I think this looks pretty clean but there is still stuff maybe I don't want to have or I want to change somehow. I have this byline here and I don't know exactly where this template is from. Maybe you want to touch it so I clean up my editor a little bit. For the next step. And check was the documentation size. Yeah. Let's talk about overrides. As I mentioned already, we have that overrides folder. The overrides folder is registered in our configure file. You have it here. It's the technology or the package for the stuff that works or that is necessary to get this work is Jboard C3C Jboard. It's a clear directory called overrides as you see on the left pane and also our same layer so that's everything you need. Whenever you put a file inside that overrides folder that has a special name it overrides a template that exists in the clone application server in the clone ecosystem somewhere. The first started name is the actual pass so the sections view let is blown up layout. This is the package name there is a folder called view let and the template is named sections. We now want to change the template for the default content type document in my demo folder I added a document when I edit you see that in the headline, edit page or page is the content type name to be clear. I'm going to write that template now. After a while you will learn where this stuff lives in. I explained already in our parts folder we have omelette and in omelette there are hundreds of packages. If you're not familiar with that ask in the chat ask questions dig around a little bit. You can always do a search in there. And I will not recommend the search I search on the console most of the time because it's much faster than using an SDK. That's the only thing I don't know how to search in visual studio maybe there are guys that explained that a little bit better how to come to a to an actual template. So over I have to look. It's in omelette there is blown content types. 
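The z3c.jbot filename convention spelled out: the file name is the dotted path of the original template, so the two overrides created in this part end up looking like this inside the registered overrides directory.

```
overrides/
├── plone.app.layout.viewlets.sections.pt                    # header/sections viewlet
└── plone.app.contenttypes.browser.templates.document.pt     # default Document view
```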
This is the package where some of the default content types of blown are from. We have our well known browser folder inside browser there is a template sub folder. And here we have some listings, summary listing tabular listing you may already know. And there is also our document. So that's the file I want to have quick look into it pretty much nothing except a text block in there. I copied that file. The small thing closes everything so if you're lost just click it and you start from the root. I love that button. Once in Tokyo browsers overrides, and I paste it in there. So what I have to fix now is the past to the origin of this template. I'll just add a name. Again, add some exit save it. And if I reload, of course, nothing happens. If you add something to that overrides you have to restart your instance. So if you look at through the windows here I use hotkeys on my keyboard I Google for the key I love them. Just stop the instance. I do clear for the window to see what happens from from a new window or from without any stuff that was already there for if you have an error in the template or if there is something broken you will see it so an instance restart is always a good idea. And if I check this in browser now I see somewhere three access that's my indicator it works so now I can modify that template somehow. I don't step too much into details here. All right. I use the example I made in the documentation. This is basically the easiest or the most basic example of page template. We have the heading we have the lead paragraph the description mostly and we have our text block. There are different ways of adding text to to something. That's not how you should do it. It's one way of doing it with some code that takes care of links relative to something. The important thing I need to explain here is we fill slot main. So normally you fill lower slots that gives us the option to insert stuff between title description. There is a buff title below title stuff that's all available in the main template. So, I will show that quickly. We have that actually in our main template but I'm in this point or this example I don't want to use it. So, here are the defined slots and if we use the main slot, we override everything that is inside the article stuff. For example, we override the above content title. If you have you that's registered to that provider. Of course, you need that provider in your template. So, it starts getting complicated here. For now I want to keep the example simple. And if I save that I restarted the instance. And now if I reload, you see the byline is gone because the provider is missing. But you have now full control of everything. And you have also control. Nothing pops up here because it is it is registered in somewhere below. This is an approach of start making a template that everybody understands. And if you need a provider in there, you literally can copy it. So, from the main template if I need that below content title providers thingy I guess that is it, you can add it to your actual document template. This should give us. Again, so little bit more obvious what happens in your template when you have that different layers in one page template. Okay, we'll move it again. Okay. That's about page templates. So, if you need an override. The other option than doing an override is you can register a new template. Basically, we set example for the main template we registered a new template, we gave it the same name that's why we have overwritten an existing template. 
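The shape of the minimal document override discussed next: it uses the main_template macro and fills the `main` slot, which is exactly why viewlet managers such as the byline disappear unless you call their provider again yourself. This is a simplified sketch; the real template also transforms the rich text so relative links and image scales resolve correctly.

```html
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:tal="http://xml.zope.org/namespaces/tal"
      xmlns:metal="http://xml.zope.org/namespaces/metal"
      metal:use-macro="context/main_template/macros/master"
      lang="en">
<body>

<metal:main fill-slot="main">

  <h1 class="documentFirstHeading" tal:content="context/Title">Title</h1>

  <p class="lead" tal:content="context/Description">Description</p>

  <!-- Re-add a viewlet manager if you need it, e.g. the byline: -->
  <div tal:replace="structure provider:plone.belowcontenttitle" />

  <!-- Simplified text output -->
  <div id="parent-fieldname-text"
       tal:content="structure context/text/output|nothing" />

</metal:main>

</body>
</html>
```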
And if we gave that the name like name temp or main template to we have a new template. It doesn't work for the main template that way but for content type stuff you could do it that way. I'm not stepping into that we have or Mike is talking about that later when he when he shows you the seeming based on the ISO. There is an example you can add entire views with blown CLI. You can register them registration takes place in the configs ECML. And if you want to configure it for a specific content type, the for attribute is important for you. So there you can like folders have two or three for different options of listing content. You can also add a second the third option for example for a document or for a news item that is selectable when you add the option or it's not selectable. You can use it when you know how to apply the view on that. So this is when you register a new template as an option to an or as an alternative to an override of templates. Okay. Next part is not that interesting. I, we had the same example from Peter so I move on here a little bit more quickly. So for example, of course, you don't want to ship funds directly from Google, Google, you probably want to download it added to your team and ship it from there. I can show you an example of how to do this. I, it's not part of the training. That's what I have to say. But let's see how we can extend our scene. So I added a funds as CSS. I inside the funds I just do the import. You can use this also to make more smaller as CSS files for your project for example different components or different content types if it if it grows. You can split it up a little bit to see what happens in the different parts. We need just the import to get the funds in. In the in the scene. I have to import now our funds as CSS. That's everything I need to do here. And before I save that's the stuff I have to copy over from the documentation. Maybe I show you how to do it correctly. In the documentation it's mentioned. I know there is a variable for defining funds in bootstrap. I have no clue what the name is and I have absolutely no clue what I should write in there to not break everything so that's the point where I have to look it up. So in the documentation you have to scroll the bootstrap bootstrap stuff list inside obviously a bootstrap folder. There is a CSS folder. That's where the magic comes from. And you should scroll all the way down you have a variable s CSS. That's all the actual variables from bootstrap and here you can pick up what you need. And you have a command F I guess, and you have different stuff specified mono space and sans serif fonts. I copied that line. So that's a little bit long. The example is part of the documentation. And I will add it somewhere. Into our variables over there. And the change is actually add open sense. And we can keep that stuff or we can remove it doesn't matter because I don't have 20 funds in there. I keep system UI I keep. I don't know. Let's keep everything from now. And then we can remove the defaults tag. That's an error in the documentation I have to fix. And now if I save. I have to switch to the other tab and then you see it's compiled again. So the background watch is still working. It sees I changed my CMS CSS and we should have a little bit different from now when I reload the window. So to add a font and just use bootstrap variable. Of course you can also do something like h1 and then at styling for h1 and use apply that font only for some specific HTML tags. That's also possible. 
But in our example we use the bootstrap variable for that. And here is a example of how to do this a little bit better is maybe the Tokyo CM repository. There is fonts folder. Again the example with open sense we added the truth right forms directly to the package. And we have an SDSS file. It's surprise it's called underscore fonts. And here you see what you need to do to register that font that through type font files to be available in your package so it's not magic it's nothing you need to you have to be scared of. So you have to do for fonts and different styles from the same font based on what you want to have or what you want to serve. So this is an example. So today we create PloneSim.Tokyo in this training. PloneSim.Tokyo is made for Plone 5.2. And when we created that package the idea was the same. We want to use bootstrap and we want to get rid of columns we want to simplify basically seeming for but it was 5.2. So you see in the source code of PloneSim.Tokyo of the of the main or master branch. There are tons of overrides because we had to touch a lot of stuff in Plone 5.2 to get a clean view. I can show it quickly. So this is basically PloneSim.Tokyo with some demo content looks pretty much the same. In 5.2 there was tons of overrides necessary to get this archived. After the training I will update that seam for Plone 6 with bootstrap 5. And as you saw in the example, it's pretty close to what you see on the website. So basically I have to delete a lot of overrides because now we have the bootstrap markup in all templates, which makes it much more easier to have a clean seam for some like that. Okay. Yeah. In the documentation, font is just an example you can touch whatever you want to touch with variables or with CSS. You can tear in components from what you see here. So if you have some minutes maybe we can archive that quickly or show an example. My, my, what I love is cards because they're tiny, they're simple. This is bootstrap documentation. You copy this stuff. I added to my, to my page template, just as an example of how you can use bootstrap. I can also add a page template override when for example I want to place the description inside a card. I do just this. So we should get a bootstrap component that looks like that, but keeps our description in. So basically bootstrap to fix stuff. It's a little bit broken here. There is no image or if you have a news item, you can add a news item image there. And then you can add margin to the bottom that's also available in bootstrap just as class or helper classes and the three is margin bottom three, three units, units are declared in bootstrap and you can use them. And now the margin bottom is there and the image is gone. So this is just an example of how quick you can use components from the documentation. And there are tons of documentation. There are tons of components you can reuse, especially for your patterns you want to reuse or stuff you maybe need. I recommend the example section from the documentation. There is for example stuff like sidebar. We are going to use that in a project as well. The code is pretty simple to get that and it just works out of the box. That's the good thing. So have a look at the bootstrap documentation to have really fun on that. Okay, last step. I have here my edit bar. People that know me also know that I really don't love that bar. So I'm going to replace them. There are definitely options. I show the way how to do this now. In our setup py, you have an option to add a dependency. 
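A sketch of what self-hosting a font like that looks like — the font name and file paths are assumptions; the variable at the end is Bootstrap's real `$font-family-sans-serif` and has to be set before Bootstrap's own variables are imported.

```scss
// _fonts.scss — ship the font files with the theme instead of using a CDN
@font-face {
  font-family: "Open Sans";
  font-style: normal;
  font-weight: 400;
  font-display: swap;
  src: url("fonts/OpenSans-Regular.woff2") format("woff2");
}

@font-face {
  font-family: "Open Sans";
  font-style: normal;
  font-weight: 700;
  font-display: swap;
  src: url("fonts/OpenSans-Bold.woff2") format("woff2");
}

// _variables.scss — make it the default sans-serif stack for Bootstrap
$font-family-sans-serif: "Open Sans", system-ui, -apple-system, "Segoe UI",
  Roboto, "Helvetica Neue", Arial, sans-serif;
```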
That's what it requires. And the second thing is in our configuration. This is an example how you do it programmatically. Of course you can add the package to the build out run build out and then you have it and you can install it manually. If you do it that way, you have it installed automatically when you install the package. So, I, because I touched the setup py, I have to rerun build out now. This should take care of the requirements of that package and it should fetch the package and add it to the project and activate it in the project and make it available for us. Let's see what happens when we run the build out. Not so much output output on the console but some output on my on my example here and there is an error now. So, I built out tries to fetch that I can show what I basically wanted to show. That example, we made a replacement for the sidebar for toolbar which is called sidebar. If you log in. You can modify navigation and editing features of clone. There is one template so if you need to touch it if you need to modify it you don't have to bother with the complexity of the edit bar. There is only one viewlet or one one page template that you can also easily override to get something like that, which is not stick next to your team. So, if you go with custom UI or if you want to create a website layout, a modern website layout that is based on one column. This is an option for you. I have no idea why my build out is failing here at the moment, but maybe sometimes you just have to read error messages. And I guess the quote is missing here. I guess that isn't. Let's give it one more minute if it's not working I will skip that. And. Yeah, proceed. I see nothing in the chat I also see nothing in the slack. So I guess no questions so far. I think there are informations about the trainer somewhere around the training page. Feel free to ping us or me if you have questions or if you need to know something. I will push the code I created here inside my package to a branch of the closing Tokyo package so what you see in the documentation is available as code later in the closing Tokyo, I will create a branch that is called something like the training 2021 and there is the actual code I created today. So I will push that after the training so you have it for reference if you want to try this at home or if you want to see exactly what I added here. So build out runs through everything fine I have to start up my instance again. And if I reload it says everything work it has to pick something because I didn't define a version. It has to pick a version 1.5 oh which is the newest and edited. I have to create a new flow site. Or as Peter mentioned before in the site setup at one section. There is not yet on I can activate. So as our own package I have to do this, or I install or I create a new flow site then it's installed automatically. So, I like that approach because it doesn't bother with my layout, it just an overlay. And I have to say this is code that's written for clone five to it works of course, but the styling is not bootstrap five, we will update our sidebar and use the sidebar component for that. I also have to mention, there is work in progress on the edit bar of clone. It's part of the year six branch and I guess it's going to be merged in a couple of days or weeks. If somebody knows more, let me know. That's from my side. Seeming from scratch. Please give us feedback. Let me know if I missed something or if I should cover something different in this part of the training. 
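The two pieces needed to pull in such an add-on look roughly like this. `collective.sidebar` is my assumption for the toolbar-replacement package being shown here, and the profile id would need to be checked against that package.

```python
# setup.py (sketch): declare the add-on so buildout/pip fetches it
install_requires=[
    "setuptools",
    "plone.api",
    "z3c.jbot",
    "collective.sidebar",  # assumed name of the toolbar replacement
],
```

```xml
<!-- profiles/default/metadata.xml (sketch): install it together with the theme -->
<metadata>
  <version>1000</version>
  <dependencies>
    <dependency>profile-collective.sidebar:default</dependency>
  </dependencies>
</metadata>
```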
I would say we make a short break about four minutes and proceed in at, yeah, like in like four minutes with Mike. Thank you and have a nice conference. So as the colleagues earlier said, if you have questions, don't hesitate to ask them. We are there for that. For now you can ask them in the Slack channel and the training channel. If you have later questions, you will find us usually hanging around on Discord and the blown channels and a classic UI, for example, just ping us there. You can find me there and Mr. Tango and the colleagues are also there. I can share my screen already. Can you see my screen correctly? Yeah, it's not too small. If I should increase the fonts and things like that, just let me know. Okay, I wrote earlier in the chat that you should upgrade the Optum Blades Plone CLI. We said it's some last quick fixes in the Diazo routes we will use later. If you haven't done that, you can also fix that copy paste later. That should be an issue. The training docs should have already the correct rules and all in place. But it's a bit easier if you use the current Bop templates blown beta 6.0 beta 9, which has already the correct stuff. Let's see what we will discuss today. My use case, I want to show you is for me the classical case, a web designer comes with a layout. You can either just Photoshop or you already have a click dummy. Another situation could be you got the web design from some websites and we will use today this sim from start bootstrap. That's why I chose that. It's relatively simple from the content and all, and also relatively close to plain bootstrap. Other themes I tried out, usually use a lot of jQuery plugins and other stuff and this makes the whole thing. It's all possible, but it makes more work for us and we want to focus on what's important for now. What you will learn, you will prepare your development environment. We will use Plone CLI a lot. We will also create content for the sim in the Plone site. We will integrate the static layout we got from startbootswrap.com. We will shortly see how to compile the styles and how that works. My colleagues showed you already some of the stuff in use. We will also create some HTML snippets as tinyMC templates to achieve some of the content areas which are not so easy to achieve by just using tinyMC. It's a nice way to just use existing HTML snippet and add it to tinyMC and then use it. What we also do is we will create content types and views for the products area. Let's start. We will go on the terminal. Use Plone CLI create add-on to create an add-on called PloneSim Business Casual 21. There was a different sim with the same name from startbootswrap back then which we also used in training center. I think there are even sim products on Collective. This is the new one. It will ask you for a description. For now I leave it out. You can here give a hint for the Plone version. Right now this does not have big effects. It was earlier used more for differentiation between Plone 4 and 5. Right now between 5 and 6 there are not many differences for the Plone CLI related. The Python version stick to Python 3 and this code support we also want. Now it's generating our Plone package like you have seen before. Now we see the into the package. And the next step will be at Plone CLI add-seam. This is a different package so it's not a sim basoneta, neither a sim basic. 
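For reference, the commands used up to this point — the package name is simply whatever you answer in the wizard, here the name chosen in the training.

```console
$ pip install --upgrade plonecli
$ plonecli create addon plonetheme.businesscasual21
$ cd plonetheme.businesscasual21
```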
Funnily enough this is one of the more basic sims also because we will not have much in it because most of our layout and resources we will use are coming with the sim we will download soon. So we will call the sim the same as the original name. So we will just put business casual 21. Now we have the sim. Let's fill it with the actual sim from start bootstrap. If you follow the link from the training you will go here. You can download here but you can also click the preview button which also has the download up there. You can also see how the sim looks we want to achieve. So you have different sites with different sections. Here you have products listed and here you have the store which is basically the open hours and again the about section. So if you download this you will have a zip file. And if you take this and extract that or you could actually go right from the zip file. So this is the actual content we want. This is just the name folder and this is the stuff we care about. So if we select that and extract and go I created a new folder so we go in our and our sim package right down to the sim folder we have now. And here there's already some stuff in it. We don't care much about that. So we extract that in that folder. If it asks you to overwrite files just say yes. And now let's open VS code from this year. Yes we trust. So let's open the sim folder and inspect what we have here. So this HTML is now the index HTML from the same. And we also have other pages like the about and products. We will not use this, but we will mainly based on the on the index HTML. So let's have a look what's next. So the structure you can, you can look at this is basically what you have after you filled in the same you downloaded. So the clone CLI gives us a easy command to build our complete development setup. As you can see it also shows you exactly what commands are running. So it's creating a virtual environment Python running build out bootstrap to create the build out scripts and then it runs been built out. So you can run all these commands from any place anywhere inside your package folder so it doesn't have to be the root folder. I will figure out where it is. So you can run the build command later or so from a sub folder and also the other commands. So that's basically what the CLI provides us. So while this is running. Let's look. The next step would be using plune CLI serve to to start. You can also just do the bin instance FG, which is basically what plune CLI serve does. But again plune CLI serve works anywhere inside the folder structure. So if we go already in the folder structure to our sim folder where we do most of the work. We still can use plune CLI serve. It also shows us that we can open the website on this URL. We don't need that I already have it open here. So let me reload. We get the welcome page. For now it's in German, but will change soon. admin admin. So let me switch this to English. So you could also use the advanced. The advanced tab earlier and right away activate the same or other add ons. But we will do it the classical way, which means we just go on the add ons. We find our same here we will install it. Now we see it's installed. Right now here this still looks like like basal neta. This will also not change. If we go on the seeming control panel. You see now we have a seam here which is active that's our seam and we have the basal neta here. 
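Put together, the steps in this part look roughly like this; the sub-template name and the folder paths are from the generated package layout and may differ slightly depending on your bobtemplates.plone version, and the download location is a placeholder.

```console
$ plonecli add theme        # the standalone variant, not theme_barceloneta / theme_basic
# -> answer "business-casual-21" when asked for the theme name

# copy the extracted Start Bootstrap files over the generated theme folder
$ cp -r ~/Downloads/business-casual/* src/plonetheme/businesscasual21/theme/

$ plonecli build            # virtualenv + buildout bootstrap + bin/buildout
$ plonecli serve            # same as bin/instance fg, works from any subfolder
```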
This control panel is never seen so even if we have some serious issues in the in the seam and the other rules or whatever this will always look like this so that you can activate or dictate the activate your seam and and fix the problem. Let me open this in a new tab so that we can have a look at the homepage, which now already looks like what we want. This is rather static now so for now I can go on edits. That's fine. This is all the added views are coming from from default clone. So this is basically what you have in basal neta except that the banner and the logo is cut out of this area. But everything else the styles here in the edit form in the manage portlets or in the control panels. This all looks like default clone. The reason why we do that is, normally you don't care how the how the back end how this control panels are styled. You don't want to rearrange the real and and bring with your style all this and restyle that. There are options to do that but normally it's not important and also said, it has the advantage that when you have like manuals and documentation, which use default clone style there. Yeah, they have a better use for more people and everything looks like everybody everybody knows it. The only difference is the same part. So we are locked in so this is this is no difference to being locked out, except, let me show you in a isolated container where we not locked in. So the only difference is the toolbars not there. So, okay, how are we going from here to many tabs open. So this we went through. For now, because this is all is static. I would suggest we disabled the same. That's why I left this open here. But other than that, you can always go on site setup and go there. So to deactivate that the best is to just activate basonita again, which means now when I do the reload here I have to default clone again. Now, to really achieve what what we have here, we want to create the products, the many, many new items. So let's create the the free we have the home we have anyway, and let's create just one more so that so that we can later see the difference. So when we replace this with with actually with the dynamic content coming from clones, otherwise we wouldn't see the difference is that the seems still static or is it is it something we created in clone. So, let's just close this. So folder about and the next will be products. And store and our extra content. So let's get rid of the other stuff of the default content blown ads. We don't need news and events here and we can even delete users folder. So let's just delete it. Now our menu looks already like it should just to make sure when you put more stuff under the, the more on menu blown by default has these drop down menu but our seem doesn't provide that so to not have the menu breaking we will force it to only have. Only have one one item. Also, you can see here but this is a bit different than blown default this is already coming from our from our seem so there are some configurations applied already. So when you create the seem with with clones, you know I there are some settings already in the profiles you can adjust. And for me, this makes the most sense. So in the menu I only want to see collection folder and link. And then I want to see types like file image. I never want to see in in the menu, or even news items for example, if you have an folder with hundreds of news items, and you have a left side navigation with the portlet. And it just looks odd, and it doesn't make sense to show them. 
You can always create a folder, a folder will be good, good start for menu point and then put the page in it and set it as default. This is what we are doing now. So we're going in the about page. And get inspired here so this is the title. This is that the rest we will do later so first just the title and description. And then we will launch it. So when we go on the folder about, we can set this now as a default page. So that when we go on about this, the page is already shown by default. So we will create a collection, because we want to list the products here. It's just called products. The rest we will do later so just save it. Publish it and go on products and also set the collection as a default. Now on store we create the last page for now. So, set it as default page in that folder. Let's publish this. So this is the basic content structure for now. The next step will be creating HTML snippets in tiny MC or for tiny MC. The way I did that you can, if you want to go fast, you just copy it out here. But this is basically the about page or this is the section about. So if you go on the page here and you go on the about and you inspect that in the browser, you will see that you have the section element here. This is what we want. This is what we want to include because it contains all the content and also the classes which are needed for the scene to work. So you could just go here, copy outer HTML and then go in your editor and create the file there. But the whole thing will be wrapped by this. So it's just for tiny MC. You need this div tag here and one below. This is what I did with all the templates you see here. And then we create this at the file. So this is the section about. So we go with the editor to our scene to the folders, tiny MC templates. You see here we have some of the examples that Stefan showed earlier. These are the example hero hero left and pricing from the bootstrap examples page. So you can use this also. They are there already. But we will not use them for now. So section about HTML. Copy and save. So the next one. Yeah, this note we will take care soon because some of them have pictures included. And let's see if I go a bit like this. So normally when you copy the stuff from the layout directly, this is the original path. But for to work later so that's inside clone. So inside our our scene, it finds these pictures. We have to add this and this is basically plus plus seem plus plus. And then the idea of the same recreate if you give a different seem name. This will be different. Yeah, so this unique to your to your team. Yeah, I think I did it in both sections already so that you have this. I think here there was also something. Yeah, so it's it's taken care of. If you copy it from here. You have. Where was it here. And this is the section intro. safe. And the next one is the section promise. And I think this is the last one. So, which is this section opening hours. So, our templates on place. What we need to do next is to tell tiny MC where to find these templates. By default, we have this registration already so let's go on the on the editor. We have here the profiles folder so this is outside of the same folder. And inside profiles defaults. We have configuration options. You will find things like the same name and some other metadata. And inside the registry folder, you can have different files. The names doesn't matter. 
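One of those snippet files could look roughly like this: the section markup is copied from the Start Bootstrap layout (the classes below are only indicative), wrapped in a single div so TinyMCE inserts it as one block, with the image path rewritten to the theme resource URL. The theme id in that path is whatever your package generated.

```html
<!-- theme/tinymce-templates/section-about.html (sketch) -->
<div>
  <section class="page-section about-heading">
    <div class="container">
      <img class="img-fluid rounded about-heading-img mb-3 mb-lg-0"
           src="++theme++plonetheme.businesscasual21/assets/img/about.jpg"
           alt="About" />
      <div class="about-heading-content">
        <h2 class="section-heading mb-4">
          <span class="section-heading-upper">Our Story</span>
          <span class="section-heading-lower">About us</span>
        </h2>
        <p class="mb-0">Replace this text with your own content in TinyMCE.</p>
      </div>
    </div>
  </section>
</div>
```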
Because this is basically sometimes you see just registry XML but you can also have just registry folder and then have smaller files which are just taken care of some stuff so here we see display types in for the navigation. This is the setting to change the default settings there. And we will focus now on the tiny MC here. So, if you look here, let me close the sidebar. You can see we have here already a list of templates. This is a record which will register the templates fields in the config registry and this is the way how tiny MC will find our templates. So, make sure that this is valid Jason so it's not a Python dictionary so the last comma should not be there. To make it simple. Let's just take the complete value tag here and replace it. Save. So now we have our name, our names for the for the different sections. And this is the path here. Still again, this is unique to your scene so this will be different and different seems. This is the folder tiny MC templates inside your scene folder, and then the file name according to the name you chose when you created that just for you to for the info. So, this part is right now necessary to, to register or to activate the tiny MC template plugin. This will most likely not be necessary in the future or at least it will change a bit. So normally you can activate activate the template in the future by just checking a checkbox in the tiny MC control panel like you can do with a lot of other plugins. It will be no longer an external plugin and you don't have to put in this way. But right now, we are still running with the first alpha of clone six. We still have the old resource registry running here. This will change in the next couple of weeks, hopefully, when we already and merged the year six effort. So here are some examples how you can change the HTML filter in in clone. This will change the back end filtering and also configure tiny MC to allow certain attributes or tags. One tag I already included here is the button, which by default is not allowed. Because in some of the bootstrap templates we have buttons so we want the buttons in there. So far so good. Let's have a look at the HTML. We can best do this. So we have the same we have the index HTML which we will use as our layout. So this is how it comes from from the designer. We don't need to change a lot. For production, I would also replace the, the CDNs here by not having the CDNs, at least in Europe, that's what you want for data privacy reasons, not to use external CDNs or use different ones. There are advantages and disadvantages having that but you can also make this locally available and change the links. For now, we will just keep it here. This, what we have here, we will remove. Because we will include the style sheets in a different way I will show you soon. That's one thing. The other thing is in the, in the footer of the content on the lower part. Here we have some JavaScripts. This one has only these two. The first, which is the bootstrap JavaScript, the, the bootstrap bundle containing the JavaScript from, from bootstrap. This is shipped by default in clone six. And so we don't need to include this anyway this is like jQuery and bootstrap. JS they are globally available so you don't have to think about that. You, you will have to bootstrap five version there. So, not our concern. This is the JavaScript we will still include so that stays like it is. Then one thing. clone has this thing called global status messages. 
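Condensed to a sketch, that registry file looks like this. The record names mirror what plonecli generates for the TinyMCE templates plugin and the HTML filter; the value must be valid JSON (no trailing comma), and the theme id in the URLs is again specific to your package.

```xml
<!-- profiles/default/registry/tinymce.xml (sketch) -->
<registry>
  <record name="plone.templates">
    <value>[
      {"title": "Section About",
       "url": "++theme++plonetheme.businesscasual21/tinymce-templates/section-about.html"},
      {"title": "Section Promise",
       "url": "++theme++plonetheme.businesscasual21/tinymce-templates/section-promise.html"},
      {"title": "Opening Hours",
       "url": "++theme++plonetheme.businesscasual21/tinymce-templates/section-openinghours.html"}
    ]</value>
  </record>

  <!-- allow the <button> tag to survive Plone's HTML filtering -->
  <record name="plone.valid_tags">
    <value purge="false">
      <element>button</element>
    </value>
  </record>
</registry>
```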
This is when you, when you go somewhere and you edit for example, and you save. You have these info boxes here. And these portal messages, we also want to show when the same is active. So to have that, we take this small, small HTML snippet I didn't copy it just yet. So we copy that. And we insert this here. So this is one change we want to make. And another change is we want to wrap this. So we have two sections here. For example, we see here we have two sections. And this is the whole main content area. Many bootstrap seems kind of bad in in how the markup is structured from from the semantic point of view, which needs needs to be fixed. So we give it the main point and this, this actually gets filled with our content area from blown. So, this is this. And here we see it wrapped. So you can also copy paste, in case you did some typos or anything. So now how do we get get the styles to also to also make adjustments later. We will, we will go to the folder where they have the styles, you can always put this differently, give different folder names and so on. And for now, we call the seam SCSS. So we are using thus or SCSS. And in the same you see, we just pulling in the existing styles from the from the layout. Save it. So basically, what we need for now to to get in. So the next step will be having a look at the package Jason. So, if you go inside the same folder, you will find the package Jason like you will find also and two other packages, the colleagues created before. Pretty much looking the same or at least similar. So, here you have some commands you can, you can use. Most important is build and watch for now. So, let's do it right in the VS code here. Going inside. Same folder. Here we have to package Jason. We run npm install to make sure that all the dependencies and def dependencies are installed. This will not take too long because our list is not long. The difference here is def dependencies are used for building for bundling resources. And dependencies are dependencies you actually have from your team. So, as Peter told you before, you can use for example the basoneta styles, even if in your team if you want, or even if you go that way, or this way. So, you can do an npm install basoneta base theme. I don't remember the correct name but you can find it in his documentation. And then you have it in your node modules folder and you can use it and this you would put in dependencies. Or in def dependencies depends a bit how you use it. If you're just using CSS or or thus files, usually it's enough to put them in def dependencies because you will compile them later to CSS. And you only want to ship the CSS to the SSS is not necessary later in the browser. Okay. This was running. And then let's run npm build. We have a problem. I think the problem is other fuller sites. Yeah, I was not, not done. Sorry. We need to adjust the paths by default. We have the same folder name, like in the basoneta example, but in this scene, it's not called styles the folder. It's called CSS. So they are basically for places where we have to adjust this. Style. That's it. So let's try again. Still not. This time. I'll see CSS. No such. Ha. Okay. CSS and here. And then I was. So basically have to make sure that the scripts are using the correct for the name. Another way would be just rename this CSS folder to styles that should work too. So this time it worked. 
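The two edits to index.html and the new theme.scss are small; here is a sketch with the section markup abbreviated. The id `global_statusmessage` is the stock Plone one, and the import path in the SCSS depends on where the layout ships its stylesheet.

```html
<!-- index.html (sketch of the edits) -->
<!-- placeholder that the Diazo rules fill with Plone's status messages -->
<div id="global_statusmessage" class="container"></div>

<!-- wrap the content sections in <main> so the rules have one clean hook -->
<main>
  <section class="page-section clearfix">…</section>
  <section class="page-section cta">…</section>
</main>
```

```scss
// scss/theme.scss — just pull in the styles shipped with the layout for now
@import "styles";  // adjust the path/extension to wherever the layout's CSS lives
```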
Now we have the, the styles coming from the scene, but they are actually get pulled in our seam SSS and the same SSS does compile to these files. And then we have the CMS as a map for, for the browser to show you the real CSS and the minified version of that. And this we will use in blown. This we did. One thing worth mention is we have a vendor folder. And inside the vendor folder, we will have all the resources we installed with npm or john. So our everything which is not deaf dependency, but the dependency. As I mentioned earlier, will be copied in here. So that later we can when we ship our seem to to blown or delivered. We can actually delete the node modules or not the, the one level higher. If you look here inside the seam you have these node modules, but there's a lot of stuff in it because this is all the build tooling stuff but inside the vendor you will have no modules and when you have defined dependencies here they will be in there and you can use them from your seem like you can use the local JavaScript we already have in the template. So let's move on. The next interesting thing would be to have a look at the manifest. Manifest is the central configuration fire of the scene. Here you have the visible seem name you will find in the, in the user interface and the description. You can even have a preview image here, which right now is not used here but that's what you can do. This also points to the rules file by default it's the rules XML and I wouldn't change that. Here's the prefix also which is the prefix you've seen in a lot of places this is unique to your seem and it's auto-generated generated by my plan. Here you have production and tiny mc content CSS earlier in clone five for example you also had to like a development CSS but this was because you could you could point it to a less file and the less file would later on in the browser also be compiled and you can adjust it there. This is not provided anymore so to revamp you can only write CSS not as CSS or less. So we only need the registration here for the final CSS you want to ship. And this is also because of the reason why we earlier removed the CSS inclusion in the HTML file because this is the place where we pointed to the CSS also for tiny mc so that tiny mc can include it in the iframe and looks basically the same. So you can have additional parameters like defining the portal URL as a as a variable and this you can then also use in inside by other words later but for now this is not not that important. So you can see here for God, yeah, we have to adjust here also the folder name. But this, as I said depends on if you change the folder name or how you want to structure. You can also separate the CSS and compiled files and put it in a dist subfolder as you like. This can also change in the future the default but at the end it's easy to change and to adjust to your means. So, when we have done this we can actually run npm run watch keep switching the wrong windows. So, we are in the same folder so let's run npm run watch, which will run a watcher. To the to the CSS. So, we also need to start the clone site. Let's use plunes here I serve for that doesn't matter where you're in the package. Clones that I figured out there's something wrong. I think I have somewhere something running, which I shouldn't. Let me see. And the terminal here. Okay, I can leave it here is okay. I don't need to have it in the editor. You can do it both ways. 
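A condensed sketch of such a package.json — the real file generated by plonecli has more scripts (copying dependencies into the vendor folder, autoprefixing, and so on), and the names and versions here are placeholders. The interesting parts are the build/watch pair, the load path pointing at node_modules, and the split between dependencies and devDependencies.

```json
{
  "name": "business-casual-21",
  "version": "1.0.0",
  "scripts": {
    "build": "sass --load-path=node_modules scss/theme.scss css/theme.css && postcss css/theme.css --use cssnano -o css/theme.min.css",
    "watch": "sass --watch --load-path=node_modules scss/theme.scss css/theme.css"
  },
  "dependencies": {},
  "devDependencies": {
    "bootstrap": "^5.1.3",
    "sass": "^1.43.4",
    "postcss": "^8.3.0",
    "postcss-cli": "^9.0.0",
    "cssnano": "^5.0.0"
  }
}
```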
So, okay, so we have on one side, clone running and one terminal and here inside the code we have a second terminal where we have our npm watch running. So that means we now can go to our clone again. Now we want to verify. Now we want to go to the integration side to the the other side. That's why you're here right. So, let's go to the integration side. Let's activate our seem. So, we want to go on the front page. So, we want to replace now some of the things in the layout so if we, if we go on home for example. We have our sections and then our example we have it wrapped with the main, but also we have the navigation here. And this has different stuff and somewhere in here that was the navbar. Now, enough, enough, enough, enough, enough here it's because it's on mobile image was too small. Now something like that. Anyway, first thing first. It's a good thing to know what clone actually renders. One thing to find out is to open clone, not like we, we are doing it here, like local host 8080 blown, but using a different domain for example 127 001 will open the same page. The difference is, there's no day as activated. The reason is, when you go on the seeming control panel. In the advanced settings, you will see unseen host names and by default, we have this already here you can have also online, the same. If you want to treat later and you want to have some domain names where you can have it unseen so that you can inspect that. For now, locally this works fine, the default. And the nice thing is, let me do this, the same style. So if I go and inspect, let's move this to the bottom. We can actually inspect what clone has on the clone side so on and clone we will find. Now something like not enough. Back then it was these these classes didn't exist on phone six they are closer to what you also have on the on the same side. So when we use the editor. Let's make this big for now. We are opening the HTML. And we find the place where we want to replace the static part with the dynamic content from blown. And this is what we want this is this URL here with the nafba naf. So it's basically nafba naf on the same side and nafba naf on the content side. And to achieve that, we are opening the rules file. Let's maybe close all this sections here. Tiny MC we don't need packages and for now also not so. Okay. Just to give a if you short summary the first part. This is like including the older back end like when you go on edit in your seam or when you go on the control panel everything which looks default basonata. This comes directly from basonata. With this include rule so just leave it there. If you want, or basically the what this does is you have this kind of condition in basonata back end they have they have a similar thing but it's like inverse like this one. So it says, if not content has these classes, then it's the back end. And by default we are using this so when we have these classes in the body tech blown inject stem. This means we are not on edits, we are not on a control panel or anything else like that. So this is the way how you can switch from default blown to what you want to style. Here we point to the the same file. In this case we chose the index HTML but you could also have different files you could also use something like this is the default but use a different one under a condition so you can add if condition here when the path is like this or when somebody classes are like that use a different layout for the start. You can have different ones, but we keep it simple for now. 
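A sketch of the manifest.cfg described here, with the paths adjusted the same way. The prefix and theme id are generated from your package name, and the parameters section is optional.

```ini
[theme]
title = Business Casual 21
description = Plone theme based on the Start Bootstrap "Business Casual" layout
rules = /++theme++plonetheme.businesscasual21/rules.xml
prefix = /++theme++plonetheme.businesscasual21
production-css = /++theme++plonetheme.businesscasual21/css/theme.min.css
tinymce-content-css = /++theme++plonetheme.businesscasual21/css/theme.min.css

[theme:parameters]
portal_url = python: portal.absolute_url()
```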
No seem basically means when there's no visual portal VEPA on the content side, which from blown always is there. Then just don't seem anything that is a is a fallback so that when you have something which shouldn't be seen by the ISO. It usually don't have this like a eject calls and things like that. So, um, most of the stuff you can just leave it there. But what we actually want to do now is we want to take care of the navigation. So, we have you covered. Let's activate this rule. This rule what it does is it uses the replace functionality from the ISO. Because of this prefix CSS dot, we are using CSS selectors. If you remove that this part should be an expath selector. They are much more verbose, but they also much more powerful so sometimes when you want to do something tricky expath is the way to go. But in general, I would recommend sticking with the CSS classes. They get generated or converted into expath in the background anyway but you don't want to read them and you make more mistakes if you use expath. So, if you don't have to don't use them. I'm not sure why this is still using this. I thought I had this change but maybe I also have to not updated Bob templates somehow even though I updated it. This will change so normally you don't have to adjust this because when you want to see my boots up seem it's it's a bit easier to to have it like this. You can either say I want the elements and here we have this. This says is the same side but it says seem minus children. Which means not the element in the template itself. So, not this element we want to replace but we want to replace the children. So that's what the first condition says and the second set. Give me this and then inside give me the L E's so we will replace the children of the enough bar enough in the layout with the elements coming from blown. We could leave this like that or we could. We move this and also use the children here, which is the same. So if we do this, we save this. Let's go back to our browser. Now we see we have our more button here and our menu is working so we see here the URL is changing so we're actually going right now the content still has our index HTML content so that's not so fun. Let's fix that. We will go to the area where we have the content. Let's activate that. What does this do. It finds the content and also here is wrong. This time we want to use content core content core is the main content area, but without the title and without the description and without the lead image for example so it's really the base, the base text area which you added with the tiny MC. For this team we that's what we want. In most other cases you want to use the just the content not content core, which gives you a bit more of the content area. And on the same side we have our main tech which we use to wrap the sections and we will just replace everything inside the main template with the content core area coming from blown. Let's do a reload. And now we see we have here now the welcome blown. Here we don't see much for now because we are not taking over the, the headlines. So, let's continue. We did this we are the global message I was. Skipping. Let's add this quickly. We have. All that message so that's the global status message. This should work like this. To test this we will just go and edit. Save. There's our message. Concore we did. The last thing we want to get over is the footer. This is also already here so we just need to activate that. 
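The navigation rule from this step, as a sketch — the exact selectors depend on the markup on both sides, but the idea is: take the `<li>` items Plone renders in its own navbar and swap them in for the static items inside the theme's `ul.navbar-nav`.

```xml
<!-- rules.xml (sketch): dynamic main navigation -->
<replace css:theme-children="ul.navbar-nav"
         css:content-children="ul.navbar-nav" />

<!-- equivalent spelling: grab the <li> elements explicitly -->
<!--
<replace css:theme-children="ul.navbar-nav"
         css:content="ul.navbar-nav > li" />
-->
```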
You see there are also other rules prepared for example for the, the portlets from blown to get them and put them somewhere in the same but this seem in particular doesn't use side panels so we don't need to activate them. Okay, the footer should also be there. Right now it would look like this. Now it looks a bit more fancy. One reason for that is we don't have our styles in a way but the portlets itself, the footer portlets we can configure directly in blown what we want to do is the default ones, which are basically if you want to stick with the copyright. You can also add more actions to that menu so you have some menu buttons down then you want to keep that but this seem doesn't have that so let's just turn it off. We need something we want to have a static portlet and we go here. Go there, take the content we want this is static for now but that's not important now you can create a template for that to have the 21 next year 2022 but that's not really important. So one thing I forgot sorry. Food is our static food. I forgot to click this. We want to omit the frame of the portlet. The portlet itself in this case doesn't have like a headline or anything else this can be useful to have but for now, this is not what we want. So now this looks already much better. This looks a bit odd here. So let's fix that. I think that's starting so we will. Just take this as a starting point. In the editor and go to our team. Just remove this and replace this. So what we added here now is the bootstrap SSS bootstrap good. So, we have to make sure that we have boots web in our depth dependencies bootstrap is there so in the node modules. Full that's there. You don't have to give the full path to the new modules you just have to take the path inside the new modules the rest. The compiler will will find itself. The reason why that is is you have these loads. Load path node modules for the sus compiling is given here and that's why this can be a bit cleaner here. So then we have our basic grid definitions this is just what we also have in in Boston eta and breakpoints. We can ignore that for now. So now to fix the content. This looks a bit. Um, a bit complicated, but it basically says when we have something like the content core that's what we grab. So, if we look at the content core. Um, inside the content core here we have parent field text so that's one of the things. And for now we don't have anything in there but we will see later when we use the tiny MC templates to have our already made snippets. This rule should not apply but whenever there's not a snippet like this with tiny MC in so default clone content and some other options. Then we want to have like this basic background and we just use. We define like like a with and the outside centered and then this background fade is I copied directly from the from the seam. The rounded and the P nine is all bootstrap stuff, and I used extend so that we don't have to. Yeah, we can have the same effect like they are using in the same without the markup having to provide this. This is copy this and put it in. So, our watcher have seen this and we should now see the change. We don't. Oh, wasn't ready yet. Okay, so this is much better readable and a good default. Let's continue. The next thing would be if we if you look at closely at this. It doesn't look exactly like our menu. So, this is not up a case. And it's also here there's much more padding. The reason why this is there are some some extra classes here. If we inspect that we have these. 
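The other rules activated in this step look roughly like this. The content-side ids (`#content-core`, `#global_statusmessage`, `#portal-footer-wrapper`) are the stock Plone ones; the theme-side selectors depend on your layout, and the status-message rule needs the placeholder div added to index.html earlier.

```xml
<!-- main text area: only the body text, without title/description -->
<replace css:theme-children="main"
         css:content-children="#content-core" />

<!-- "Changes saved" and friends -->
<replace css:theme-children="#global_statusmessage"
         css:content-children="#global_statusmessage" />

<!-- footer portlets -->
<replace css:theme-children="footer .container"
         css:content-children="#portal-footer-wrapper" />
```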
These extra classes on these enough item elements so pigs LG for for example, and also the uppercase somewhere I don't see it yet but it's there. So, what we can do. We could just fix that with plain CSS. I'm not afraid of that. I still like CSS. But to have a quick fix we just, we just do with X, extend. We just do the same as our markup would already provide this kind of classes so just take this. And for our enough items we do this extend and wait until this is finished here. There are some post processing post CSS and stuff that's taking a bit. So, so now our menu looks much better. Then, let's give the portal status message. Also a bit more space. Because we're short on time I will not show you the before but you saw that the status message was right under the menu. Now it has a bit space here which looks much nicer. So also the footer on the footer should have a bit margin to the top. So, we also put this. So this part we already did right away with the portlets. This is just here for the documentation. So, the footer itself. This is a bug in the current alpha here. So this is normally it should only show the page but for some reason it shows all the other content also. So the default page setting doesn't really work here. But here we see there's not much of a gap. So now the gap here is a bit bigger and if you go on other pages they also look better. So let's continue. We have our tiny mce templates. So first of all, let's go on this stop clone and start clone again just to make sure that we have our configuration and clone there. And one thing we want to do now is we want to uninstall and install our add-on again. We could also use an upgrade step but for now we don't have an upgrade step so we uninstall. And we install again. This is necessary whenever you change something in the profiles. So the registry settings for the tiny mce we want to load. And this way now when I go on tiny mce, not on seeming. Tiny mce. In the future it will just be here the template to activate but for now it's here. And here's the list of the templates and we see our sections. So the tiny mce should see the sections. So let's try if that's working already. If it's not working the first time after the reinstall we might need to. It's working. So here we have a preview of what we are injecting. It doesn't look exactly like the template we want. I just forgot something. Tiny mce is sometimes a bit strange when you have big blocks so do some spaces before and then insert. Otherwise you cannot really get under the block you inserted. So now I can just click under and I insert the second block we want on the homepage which is the promise. Let's save that. And now we have our dynamic content. Let's also move this so that we have the full screen size. So now we have our content that looks kind of nice. To prove that this is actually dynamic I can go here and can say. Drink fresh blown coffee. So you can customize that. And as you see it's here. Fresh blown coffee worth drinking. So you can adjust this also the link here for basic stuff this works if you want more complex blocks. You rather go with mosaic and create specific mosaic tiles. I'm planning to add this functionality also to the ploncel I so that you can just create this kind of blocks or any other kind of snippets easily in a couple of minutes for mosaic and use this in classic UI. The other way is of course, you can use what you have more built in options there and also plugins. But that's not the topic here. So, yeah inserting the templates. 
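Instead of `@extend`-ing Bootstrap's responsive helper classes (which only exist inside media queries and can trip up the compiler), plain rules like these give a similar result — assuming Bootstrap's functions, variables and mixins are imported in theme.scss, as the generated file does, so `media-breakpoint-up` is available.

```scss
// uppercase menu items with more horizontal padding on large screens
.navbar .nav-link {
  text-transform: uppercase;
  letter-spacing: 0.1em;
}
@include media-breakpoint-up(lg) {
  .navbar .nav-link {
    padding-right: 1.5rem;
    padding-left: 1.5rem;
  }
}

// breathing room for status messages and the footer
#global_statusmessage {
  margin-top: 1rem;
}
#portal-footer-wrapper {
  margin-top: 3rem;
}
```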
We were just showing this for this site now. Let me quickly do this also for the other sites. Doesn't take long. So edit. Here we just need one template so I don't need to make empty space. The product was our list so the store was the other one where we have static content. In this store we have more than one. So this store was opening hours. And under the opening hours. Again, the section about. So we have the opening hours we have the section about. We still have something here I don't know what this is. Yeah, we have the DMT spaces so we have to remove them. Otherwise, they get styled by our extra star sheets. Because these are actually not empty spaces but ptex empty ones. And on the homepage we should do the same. Okay, so we have now dynamic contents created in blown. The only thing which is still missing is the product page. For that, we will go back to let's do it in the previous code now a second terminal. So we have to blown CLI. We are in our. In our products and we create a new content type. We should be for. Commit our changes. So. So now. Is happy and doesn't complain. It's always good to. Yeah, make a comment before you start working with the plunge. Because plunge itself also does the commit and it's better if it's clean. You can always force it to just shut up but it's better to do it this way. This way you can. Things you did with the plunge later easily because I will create different stuff like tests and registrations and. To revert this manually. This can be a lot of work. So we create a product or a product content type not products. And then we have a description I skip for now we will go with the Python not the XML version. We can leave this container but we just need items so we don't need like folder rich things. We keep the globally at edible. Because we want to add this in the store folder. And then we do the. This is still yes and the next is we disabled that activate default behaviors. This other categorization and related items and all this fields you know from standard blown, but for here in this case we don't want them so we say first, keep it, keep it deactivated. We will activate to. Of them. And the next steps. So here you have to. All the stuff again. So. Now let's. See what we have. So we have a content folder now. So I creates for everything you create like views or behaviors and view let's usually different folders and you will find them. So we have a product here this XML file is when you want to create through the web your your schema fields and then download it here. And then we delete it because we said we want to go with the Python version. And here in this Python version. This is just example stuff. I will just delete it. And if you have in style clone snippets. Visual code extension you can just write clone. And the first thing we want to create is an image field. So. And we call this photo. The rest we can just keep like it is. The only can also adjust is the lead stuff. We see here right away there's there's some issues so as the. The node says we have to make sure that name file is activated. And also we have to make sure that the message factory this underlying is loaded. Let's clean this up a bit put the activated stuff here. And do an I sort. This looks much better we can also do format content which does black in this case. It's a great thing and also the testing relies on on black later so black will auto format your Python code. So that it looks nice and has the same rules for everybody. 
So this would be one field we need another field, which would be again blown and then we want the rich text field. We just call it text. We could leave it like that. So. This is what we did. We put this field. Text field. We forgot to fix the import. So here again we have to make sure that the rich text field is actually imported. Because this is. Coming. They are already here you just have to uncomment them most of them. Not not all of them but they are there so that it's easier for you to to use this. Okay that's it. We clean it up here you see how it looks when you're done. The next thing would be to adjust the FTI settings. So under profiles default types, you will find the products XML. This is the product name you have some settings for the product itself also like this global allow these settings are all made here but what is interesting for us is this so we want to activate the basic and name from title behavior basic basically gives gives us a title and the description field and name from title uses the title field to automatically creates the short name from the title when we fill it in the title. So this is very useful that's why we use this the rest, some of them also belong to the, by standard activate so if you don't say no to the question to the default behaviors you will have some more activate, but for now, this should be should be it. Um, we will just restart blown and uninstall install our add on. So, it's just do that. Add ons. Uninstall install. Now we go to our products. And don't need this anymore. Let's see we have a product here. We can create a product. The name of the product. Let's see fresh coffee is the name of the product. So, we have a product of drinking. Then we have these pictures here they are in the in the assets so you can just choose them but you could of course use different pictures from from some websites whatever you want this is the dynamic part so whatever you want you can put in this is the text. We will. Now we're talking. I was on the wrong page. So, coffee and tea and this is the description. And the picture we have already in and this is the text so. We are going. So, let's do the next one. Product. So, this is the title. This is our description. This is our text. And that's the product to save it. So we see the picture what we don't see here and the text is also here. What we don't see here is the title in the description that's because we are only pulling by default, the content core and the title and description are not part of that but we don't need that because we will use it in a different way. We will see that shortly. So the last products. It's this that product photo and the last text. The text can be formatted, but this is not needed for now. So if we go on products now let's edit our collection. Now we want to say, we want to show a specific type and the type is products. So here we have to preview already. We want to sort them by order and folder so that we when we change the order and the products folder. They will show in a different order. I think that's all that we need here. Now we see we have already the products in the right order. The only thing now is they don't look right. So to fix that, we have to do the last step now. Now a road trip. We are creating a view. So we're using clones CLI. Stop this for now. Also do this in the other way. Again, get that. It's commit. So now we want to pipe in class. We want to call this products view. We could choose your default view. 
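The resulting schema module looks roughly like this — it mirrors what bobtemplates generates plus the two fields added here; the package name in the message-factory import is an assumption.

```python
# content/product.py (sketch)
from plone.app.textfield import RichText
from plone.dexterity.content import Container
from plone.namedfile.field import NamedBlobImage
from plone.supermodel import model
from plonetheme.businesscasual21 import _
from zope.interface import implementer


class IProduct(model.Schema):
    """Schema for the Product content type."""

    photo = NamedBlobImage(
        title=_("Photo"),
        description=_("Product photo shown next to the text"),
        required=False,
    )

    text = RichText(
        title=_("Text"),
        required=False,
    )


@implementer(IProduct)
class Product(Container):
    """A product shown on the products listing."""
```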
This makes sense when you create a view for content type, for example, because this gives you some nice shortcuts to show the content. But in our case, we will replace it anyway so we can stick with the default. We also keep the defaults. So we want the template and we also don't change the name of the view and the template. But you could back to the editor. So now we have views folder. Views folder has a template, has a Python class and has our configure.tml where we registered the view. One thing we have to change here is we have to replace the folder with the collection. So if you want to create a view for a folder, nothing to change here. But you can put any other marker interface here so that it's under this name. It's only available for this kind of project. If you want to view on any context, you just put the Nusters and a file card. And that should do the trick too. So the next thing will be we are creating a file in the types folder for the collection XML. So we go on files, default types. So far there's only the products. Now we create a collection XML and we fill in this. This is a much shorter FTI because we only want to adjust one part. If you look here, for example, in the products, we also will find the behaviors and view methods. This is like view. And in the collection, the collection already has some. That's why we put the purge faults here. But we want to add another view method so that our view is in the list of selectable views for our collection. That's what this does. So this is the template for our view. Let's copy it like this just to mention. This is basically filling the content core area. It's iterating over the results of the collection. Then item is our item in each iteration. So we will fill in the description here. We fill in the title here. We create the image path here. We use the large scale from the image. And the same we do here again. The reason why we do that is when you look at the blocks, they are alternating. So it's like this combination and then you have even an odd variance. That's why we make it simple and we just do this. So let's just replace the whole thing with what we had there. So we have one thing here. Here's a condition. We use the repeat item. So when this is not called item but something else, this would be different here. But the last thing is to say, okay, when this item is an even or odd item, then do this. So in this case, if it's like the first, then this will get. And if it's the second, then it's even then this part only happens. So we only in in insects at the time one section, but it's either one or the other. So one last check here is the Python thing we have to adjust. That's the last thing. So if you look at the Python codes by default, it's our browser view. So we will now load the collection view because we want to basically use the collection view. We have a different template. The rest stays the same. So this stays the same. We need to delete this because we don't use it anymore. And that's basically it. So we want to start again. Since we changed stuff in the profiles folder again, what we need to do is uninstall install or add on. Or when you have public stuff, then you should write an upgrade step. Clones here I can you also help with that. So on the products, we should now have a products view here. So now that we switched that we have our content and it looks exactly like we want. So I would say not bad for the first taste. That brings us to the end of the official part. 
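For reference, the two registration pieces from this last step as sketches — the view name, class name and browser-layer interface are whatever plonecli generated for you, and the template itself just iterates the collection results with `tal:repeat`, switching between the two section layouts via `repeat/item/even`. The first snippet sits inside the existing `<configure>` of the views configure.zcml; the second uses `purge="false"` so the existing display methods of Collections stay selectable.

```xml
<!-- views/configure.zcml (sketch): the view, limited to Collections -->
<browser:page
    name="products-view"
    for="plone.app.contenttypes.interfaces.ICollection"
    class=".products_view.ProductsView"
    template="products_view.pt"
    permission="zope2.View"
    layer="plonetheme.businesscasual21.interfaces.IPlonethemeBusinesscasual21Layer"
    />
```

```xml
<?xml version="1.0"?>
<!-- profiles/default/types/Collection.xml (sketch): add it to the Display menu -->
<object name="Collection">
  <property name="view_methods" purge="false">
    <element value="products-view" />
  </property>
</object>
```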
I hope my screen sharing or you can just join and ask questions. Also to Stefan, I think Stefan is still around. I hope. I'm not sure. Thank you. Questions from your side so far. Don't be so silent. Sorry for the rush. But it was a lot of questions to go through, but yeah, we have a bit of time left. If you have any, any questions to clarify, we can we can go over that. Yeah. One thing I can add is right now we have in the scene in the rules, a rule which will basically disable require Joe S. You might ask why disabled require jazz, because in the current alpha it's still there. So the resource registry is still working like in blown five, which would break things like the index HTML so you cannot just have like simple JavaScript included. You would get this undefined define nasty errors from require jazz. This rule will basically inject. No, no, this one, this one will inject like this at the very end of the of the header. And this will disable require jazz so that when in later code like in the body of of the HTML file, you use plain simple JavaScript. It's independent from from require jazz so there's no problem there. But this will go. So in the final release of clone six, this will not be needed anymore because there's no require jazz so nobody can shoot you in the food. But for now it's there. Still there. We will probably make some small adjustments after the trainings but it's mainly rockable like this. If you have any questions you can, as I said, reach us on discord anytime on community plone.org. Ask questions. Also all the Bob templates in for the plone CLI uses. They're not set in stone. So if you have wishes or ideas to make things better we have some some people already contributing to that. So this is going step by step. Yeah, you will find the code of this seem also already in the collective so if you search there for a plone seem business casual 21, you will find. It's the same steps I did here so you can inspect that in case you have any trouble with with your current setup and couldn't follow with the pace we we made. Then you can just inspect that and see what's what's wrong or just use this as an as an inspiration. It seems I usually get more complex but this really depends on on your needs. Okay, if nobody of you has any further questions then I would just.
The core technologies involved in Plone 6 Classic UI theming and how to write your own add-on theme package. You’ll learn how to create a theme based on Barceloneta (for minor customizations and overrides) or from scratch, based on Bootstrap 5, without any dependencies on Barceloneta.
10.5446/55883 (DOI)
I hope you can hear me well. I think audio should be fine with the microphone. There's one here. So we're super excited that so many of you decided to join this training. A big welcome from me and Katja, my co-host. Please, Katja, say something. Hi, all together. Nice to meet you here. Katja is in Sorrento, sunny Sorrento. Hopefully, I guess you're holed up in your hotel room. Well, I am in Munich. So the marvels of modern technology bringing us together here to give that training together. I hope that will work. Katja, I guess you can share your screen. You have the screen sharing feature as well, because I'm making you excellent. So you can make yourself heard and visible. If you have, before we start, some, I don't know, general tips and tricks, we'll have a couple of breaks. I hope. So this is going to be four hours now for our training. I we will stop at 7 PM central European summertime. So four hours from now. If we're finished earlier, that's also fine. If it takes five minutes longer, that's also fine. But the point is, if you need to go away, if your family pulls you away from the screen or stuff like that, that's not a problem. All of this training will be recorded and will be on YouTube after the conference at some point. Paul will have to do one of the foundation board members. We'll have to do that job and it'll probably take a couple of days to download everything from Zoom and upload everything onto YouTube. We will also add chapter markers to the video training. And yeah, so you can, if you miss something, the point is, don't worry, it'll be on YouTube. So if you need to take a break. Also, if you have questions, please use the Slack. That will be the best to use Slack. You could also use the chat here in Zoom. But in case we miss a question, because there were too many discussions going on on the side, please just ask them again or just unmute yourself and say something loud, that's fine. If you don't want to be recorded, because the whole Zoom meeting is being recorded, just keep your camera disabled. That is fine as well. Before I start sharing my screen, I would love to see all of you just for a second. I promise you, this part will not be on YouTube. I will cut this part. So you're not going to be on YouTube with this part. But if you just enable your camera for a second, so I can wave at you and wave back. So many faces I know. Really happy that you decided to join our training. You should give this training, probably. A couple of you have been around with Plone as long as I have. So you know more than me, probably. Let's see. Let's find out. OK, excellent. I'll take a screenshot of that. Maybe, yeah. Something like that. Yeah. I'll not post that anyways. Just a memorizer for me. OK, I'll start sharing my screen. So here goes. Screen sharing, excellent. In case screen sharing drops off or something, just yell at us so we can enable that again. You should see a browser saying Mastering Plone 6 Development. So this is also the documentation. We're following. It's on trainingplone.org. And it's the Mastering Plone 6 Development training that we're following. It's been updated yesterday and today and many days. A lot of times. But I think the last commit was a couple of minutes ago. I hope that went through. So if you want to follow the code examples, use this to copy and paste. Don't type. Please don't type. This is only two times four hours training. And we don't have time for you to correct your typo mistakes. I make them all the time. This is not a critique. 
It's just it takes much, much longer to type it. But a vertebral advice, if you do the training at home, continue the training or redo the training at home, which I strongly recommend that you do that. Don't copy and paste. Type instead. If you have the time, talk to your boss. Say, hey, I've got this training for free from the conference. And it's just it's maybe good, but not sure yet because it was only the two times four hour sneak preview. So I want to work through the whole thing. Type the code because when you do your own projects, you're going to have to type it anyway. A lot of your custom code. And it'll stick in your memory much better. So yeah, this if you want, you can switch off your video. If you want, it would probably be good for bandwidth issues. But if you want, you can also keep your video on. So you can wave at me if I say something stupid or at Katja if she says something stupid, which will certainly not happen. So this training is by default takes a week. So what are we going to do to make that happen in two times four hours? We're going to start skip a lot of chapters. And we're going to stop very early. So this training has as you see here, 54 chapters and we will stop, I don't know, somewhere much earlier here sponsored. We're not going to get to sponsors. 31 is the chapter we did last year at the end. So and we're going to jump ahead a lot. So if you think you missed something, just read the online training or and that's also very important. We can be hired Katja and me. We are certainly available to do in-house trainings for you or your company. We did that a couple of times already, various topics. So feel free to contact us, either her or me or even us as a team. She lives in Switzerland. Don't you Katja? Where exactly do you live? In Zurich. Beautiful. So if your company is based in Switzerland, Italy, or somewhere south of Munich, you probably should call here. If you're in Norway, you should probably call me. No. Yeah. The point is we are for hire, guns for hire, and not only for trainings, but also for plan development. We live of this stuff. So consider giving us a job. We're both pretty busy at the moment. So this is not a plea for jobs. But at some point, we will looking for more. So OK, this is about this thing is 10 years old, the training now. This is the video. This is incredible. I would love to hear some of you, something about you. But since it's online training, I will not force you, will not do this introductory round. If you the point here is we would like to hear what's of interest to you mostly, to discuss and give depth, more in-depth, deal with these issues more in-depth during the training. Since it's only abbreviated online training, we can't really do that. But at the end of today and at the end of the training, tomorrow, we'll have about an hour or 45 minutes open for discussion. And feel free to jot down any questions that you have. For example, I have this business requirement, or my client asks me to do this and that, or can we do this and that with Plone? This is that would be an excellent time to pick our brains to get these questions answered. So during this training, we will create, guess what, a website. That's what Plone is good for. And the website we're going to build is for a conference. And the conference is going to be a Plone conference. And it's the Plone conference in the year 2035. And it's going to be held on Mars, because Plone obviously rules the galaxy by then. 
And we're going to try to fulfill a couple of requirements. I'm not going to go through all of them. But obviously, if you have a conference, people should go to that website and learn something from about the conference, when it happens, where it happens, if there is a talk registration open, stuff like that. And then you also want to find information about what kind of talks are there, if there are trainings, what are the keynotes and the keynotes speakers. So this is basically stuff that you could do with just normal documents. You just write that down. But we want to structure that some more. And to be able to structure that, we could do that without structured content. But we want our speakers to be able to submit their own talks. And to have that in the same way, we create a content type for talks. And so speakers can submit them. And then I want to be able to edit my talk before I submit that. And I want to be, as an organizer, I want to see a list of all submitted talks, and so on and so forth. Yeah, a couple of requirements. And the most important point about that is not like there are complex comments. These are like everyday things. The requirements change because your boss forgets something or just a new requirement comes in. And all of these things happen during the lifespan of a project. Because let's be frank, you never got a project definition before you started that. That was 100% set in stone and never changed during the course of the project when you did the project. So for example, at some point we decide, we're not going to only have talks. Actually, these talks are going to happen at a certain time in the year. So we need to add time and dates. And also, these talks are going to happen in a certain room. Hopefully next year we're going to be in a physical room and not in a virtual room. So these are things that in this use case, you forget. You forgot when you started creating your website. And at some point, yeah, we will need to do that. And there is a lot of talks, tasks, that we need to solve for this. It will be interesting for you to read through that. I'm not going to go through that. You'll realize what's going to happen. And also, it's most important what we not do actually. We're not discussing professional theming. There's actually two trainings about that. One for the classic front end in plan 6, that is today. And one for theming Volto. That happens tomorrow. A training about that. We're also not going to discuss professional deployments. So if you really put that on the web and hosting websites, this is not covered in this training as well. These are the two most important things that we're not discussing. There's a lot of technical stuff that we're not going to do. I'm not going to tell you what PLONE is because you're going to learn about that. But I'm going to point out a couple of core concepts that if you are new to PLONE, and I guess a couple of you are maybe unfamiliar to you. And one is traversal. So that traversal stems from the fact that PLONE, as a content management system, stores content in a tree structure. So there is a tree where the content lives. So in this example here, you have the site root. And this tree has many branches and leaves. And the whole tree acts like a nested dictionary. So each branch has, again, a dictionary within it. And traversal is starting from the root and walking down that path towards an object. It's like a folder structure in Windows or Mac or Linux. So it's very familiar to you how the content is structured. 
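To illustrate the nested-dictionary picture: in a backend debug session (for example bin/instance debug, where app is the Zope root) traversal can be spelled out roughly like this; the ids folder and a-page are made-up examples.

    portal = app.Plone                      # the site root
    folder = portal["folder"]               # one branch of the tree
    page = folder["a-page"]                 # the leaf we are after

    # Zope also ships a helper that does the same walk from a path string,
    # which is essentially what happens for the URL /Plone/folder/a-page
    same_page = portal.unrestrictedTraverse("folder/a-page")
    assert page is same_page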
And to access this certain kind of content, for example, from the site, you have a folder. And in that folder, there is a page. You can do exactly that. You traverse to that page going from the site to the folder, to the page, walking down that path. And this is also the way that URLs are built. So you have a URL. If you go to demo PLONE org, there is a demo site that you can use. And you have this structure here. Demo, a folder, a page inside a folder. So there is no routing like you would do in, say, Flask to expose a page inside a folder under this URL. But this is actually the structure of objects in that object tree that's exposed like this. OK, that's number one. Number two is also super important. That PLONE is built around the concept of object publishing. So if you have an object in your site, in your database, you can call that object. And this object will publish itself. So when you go to this URL, for example, or let's make it a bit simpler, DE demo, a page, this object is found via traversal, traversing down that path, that tree. And this object is found. And then this is called. So in Python, if you call something, you just do, how did I go there? You actually call that by with this. You open your brackets and close them. So here's your object. And then you call them. And once you call them, a lot of magic happens. There's actually a Dunder call method in a base class in there. And that, at some point, makes sure that a template or whatever happens in there is rendered. And it returns HTML to you. And this is actually what you see in the browser. So in a nutshell, this is exactly what happens when you open this URL. This object is called. And this HTML is returned. And your browser is smart enough to render that HTML in a nice way. That is the second basic concept of Plone. Third one is schema-driven content. So the content, Plone is built with content types, for example, a folder or document or a news item. And all of these have a different schema. And the schema defines, in other frameworks, it's called model, which kinds of fields basically are available to edit. So if I use this demo page again, and I log in as an admin here, and I go to this edit that, I have a couple of fields here. So this is classic Plone. We're going to move to Volto in Plone 6 in a second. These are the fields that are available. And these fields, all of these, there's a couple of fields, actually, they are defined in a schema. And these schema fields are then accessible as attributes on your object. So you not only can call your object, but you can have these objects have attributes. And the attributes store the value that you enter in a field that field is defined in a schema. There is not only string, like text, like here's some description, but also more complex attributes, obviously, like an image that is its own object, named blob image in this case. And it stores the data as the attribute data. And here you have the binary data from that image, for example. And objects can have multiple schema, and they are combined to give you functionality that you can reuse. Enough of that, I'm not going to go into the component architecture, but it's just a thing to prove that we are super smart, and you should be in awe of everyone who actually worked on that. And starting with Plone 6, Plone comes with two frontends. So there is the classic frontend that you're seeing here on the demo Plone org page, because Plone 6 is only in alpha stage now. But there is also a demo Plone 6. 
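A quick aside before comparing the two demo sites: to make the object-publishing idea above concrete, here is a toy class in plain Python, emphatically not Plone's real implementation, whose instances return HTML when called:

    class Page:
        """Stand-in for a content object that knows how to publish itself."""

        def __init__(self, title):
            self.title = title

        def __call__(self):
            # In Plone a page template would be rendered here; we fake it.
            return f"<html><body><h1>{self.title}</h1></body></html>"


    page = Page("Welcome to PloneConf 2035")
    html = page()        # traversal found the object, now it gets "called"
    print(html)

Now, on to the two frontends side by side.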
And these look a little, let me, classic. Let's open these side by side. So this is Plone 6 classic, and this is Plone 6. This is the official naming. Plone 6 has a brand new frontend written in React, a JavaScript framework that you're probably familiar with. It is client-side rendered. So the magic happens on your laptop or in your browser, basically, where the templates are rendered. This, whatever you see here, is rendered in your browser. And here, this is in Plone 6 classic. This is server-side rendered templates. So the HTML is returned via the concept of object publishing here. Actually, that's interesting. I should probably update that. Where's the object publishing part this year? Because Plone 6, modern Plone 6, actually doesn't call the object. It uses the Plone REST API to get a representation of that object as JSON and React or Volto. The frontend has its own name. It actually has its own logo that we're not using here. That makes sure that it's rendered. So object publishing actually is only true for Plone 6 classic, because that's where the objects are directly called. And it's your job to decide for whichever project you want to do which frontend you want to use. There is still a ton of classic frontend projects around. And it's been very modernized. It uses Bootstrap 5 and Webpack. And it's super modern and it's super cool. But it's not JavaScript. It's not React. So in Katja, there's a lot of Volto projects. So yeah, it depends on what you're doing. If you're a university and you have Plone site with 200,000 objects in it and you're migrating to Plone 6, going all into Volto would be a hard task because you'd have to rewrite a lot of your own code. So they will probably use the classic frontend. But if you're starting a completely fresh, a new project, Volto is definitely a smart default choice. OK, installation and setup are not going to go into that because you already, hopefully, installed Plone for the training. And if not, if you had any issues installing Plone for the training, please write something in the Slack channel so we can try to help you. Katja is available while I'm talking to help you assist you or if there's something fundamentally broken, we can discuss that together. But you should all follow this installation. This is the technical setup as a training. Here, installing Plone without Vagrant. We're not, hopefully, nobody tried to use Vagrant because this will probably not work on a Windows laptop at the moment because our setup is not updated for that. And Volto would not run. But you could probably run the back end in Vagrant. So use it directly on your laptop like this. If you had problems, just tell us. After we did this, you follow these whole instructions here and you got your Plone running, you should have a setup that looks like this. 3 minus L2. Let's say 1 first. So you have a folder that is basically empty except that it also has two folders, one for the back end and one for the front end. The back end is in Python and the front end is in React, in Volto. So if I expand that to 2, make that a bit bigger, you see that inside the folder back end, there is a source folder that will contain the code that we write during this training for the back end and a couple of other folders that I'm not going to go into now. And a front end folder that also has a SRC folder, which will hold the code that you write for the front end during this training. So I just decided that last year I used Sublime for the back end and VS Code for the front end. 
This year I will use VS Code for both, but me and Katja will both, when we switch from back end to front end and vice versa, we will always try to take you with us and tell you where we actually are because we'll have to do a lot of tasks on both, not the same task, but to fulfill a requirement, you need to make changes to the back end and then you make changes to the front end. And only some tasks require changes in one of these places. Actually, a lot of them require changes in only one of these. But yeah, it's a complex system and it's not getting easier by having a front end that is decoupled from the back end. Good. Since no one wrote any help, nothing is working in the Slack. I'll just keep on going and we'll continue with chapter 8, starting and stopping plowing, because that's what I'm going to do now. Here. So when I'm in this folder, I first stop everything first. Actually, I already started my front end because that takes a while, but I'll do that again for you. So I have two terminals and you should do the same when you're doing plowing development on your local machine. If you're a front end developer, that's a different story. You probably, some back end developer, got you a Docker container and that's running and you only care about React. If you're a back end developer and you only do back end, you have plowing running locally and you're doing your stuff there and you don't care about any of the JavaScript nonsense that your colleagues, tomes over are doing. So but in this training, you're doing both because you're going to be a full stack developer. I hate this word, but still, this is what you're going to do. You're going to learn how to work with Python and with JavaScript. But not only Python and JavaScript, but actually, clone, which is a huge stack on top of Python and React, which is a huge stack on top of JavaScript. Did I? Yeah, well, didn't say anything stupid here. So I'll go into the back end after I ran build out, which I already did, and I'll start my back end with bin instance FG. FG stands for front foreground, not front end, foreground. So this is the debug mode. So I'm going to see a output. Everything that's happening in my plowing side is visible to me here. And here I start my front end with yarn start after I install that. So you should have approximately the same setup right now. And if you go to a, let me close these demo sites, if you go to once it started, you can go to localhost 8080, because the back end is, by default, running on 8080. The front end is, by default, running once it's finished. It takes some time, especially when screen sharing, which also takes a toll on my laptop. It's a bit slower than. It runs on port 3000. So I'm going to start my browser here and go to localhost 8080. And it's already telling me, plow is running. And your plow side has not been added yet. I'm going to solve this riddle in a second. OK, Volto, here you see in the front end, also it has finished. And I can start that. And I can go to the front end. And it will tell me this page doesn't seem to exist, because I didn't add plowing yet. So to create a plowing side, I need to go to the back end. Always, you can't create a plowing instance in the front end. So when I say instance, we're talking about Python here. So a instance is, so there is a class defined. There's a class plow side, actually. I think that's what it's called. And when you create an instance of that, someone is instantiating that plow side, calling plow class. Site equals plow side brackets. 
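That site = PloneSite() nutshell can also be written down. A hedged sketch of what the creation boils down to, run from bin/instance debug where app is the Zope root; the exact factory signature varies a little between Plone versions, and the ploneconf.site:default profile id is an assumption matching the training add-on picked in the form:

    import transaction
    from Products.CMFPlone.factory import addPloneSite

    site = addPloneSite(
        app,                                       # Zope application root
        "Plone",                                   # id of the new site object
        title="Plone Conference 2035",
        default_language="en",
        extension_ids=["ploneconf.site:default"],  # add-on profile (assumed id)
    )
    transaction.commit()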
And in a nutshell, that's what's happening when I click the button here. You should use the advanced button. Going to switch to English here, obviously, because that's the training language. Keep your time zone. Leave the name of your plow side as plow and pick the plowcon site add-on. You need to pick that, because otherwise, the front end will not work properly. This is auto-created and automatically checked out by buildout when running buildout. I'll click this button now, and plow switches to English. And when I go to the back end, I see when I scroll up, I saw ready to handle requests. And then I clicked after here, serving on. And then a lot of stuff happened, and plow told me about everything that it did, including here, finished bundle compilation. That's the last statement I see. And then I see a plow side, actually. Plow conference 2035. That's what it says. And starting now, I can actually see, this is the back end user interface. And now I go to the front end user interface. And I reload that. I should probably log out and log in again as admin. So when I say this is the back end user interface and the front end user interface, this is actually plow classic. So there is no what actually there is, but we're getting to that later. But this is the back end is the same as plow classic. So when we're developing plow six using React, and we say, we need to do something in the back end, that either means we need to write some code in our editor, or we can go to the classic user interface and do some manual changes there. And you see the content is the same. So if I modify, if I create a site here, for example, create a site and I say test. And page. And here's my, here's test. It just says test. Why does it say welcome as well? It's kind of weird. Something's wrong here. I should be able to reload this. And here it is. So this is just a visual representation of the same data that lives on the back end. So these are connected. These are not two separate things. These are separate processes, but these are not separate databases. The database of the back end is the same for the front end. That's the whole point of that. So here I got my site. I created a site. I chose the language. I checked PlonCon site, clicked create PlonSite. Then I saw that. And then I started, or even actually I did that before. I ran YarnStart. And I saw my front page in the front end. And I can skip over these exercises. You should do them at home. And then I see my site. I walk you through the user interface, the most important parts of the site. So obviously, we're always talking about, when we talk about the visual representations, always the React user interface that we're using here. And we're talking about. So here you have the header around the site. There is a logo that is always linked to your site route. You have a automatically built navigation that is created from the content that is in the site. You already have some content, which is weird, because you didn't create that. Plon did that for you. It's auto-generated content that is created when installing Plon. And I'm going to delete that. And you should do the same. Just select everything. OK, that was probably a bit too quick. Let's step back. So this is the navigation. Here's a site search. You can say, I don't know, test. Should be findable, because I created that. You have a content area, which is here's a header. Here is a content area where you can just write stuff, save it, and it's visible. This is what you're editing when you edit stuff. 
And there is a footer that has some links that can obviously also be configured by you. And there is a toolbar to the left side, which has the link that I just used to show you how to delete everything. That's a very important link. It's the link to contents. If you hover over these icons, they explain to you what they're doing. You can add various content types. You have more where you can change the workflow, change the view, inspect the history, and stuff like that. We're getting through that. Most important one is add, edit, and contents. We're going to use contents quickly to navigate to a page where you can manage content in bulk. There are breadcrumbs that you can use to navigate to the side route. I can select everything. And I'm just going to delete everything. Just kill it with fire. So now my blog site should be empty. Should only have this description here. If I do a hard read, that's caching in both. Obviously, I did a hard reloads there. So I got everything in the navigation. Just died and went away. So we covered the footer, the toolbar, edit, folder, contents, and add. We're going to ignore these for now. Yeah, very important. You also have your portrait here. You can log in and log out. There should be your photograph. You can modify your profile. And you have a link to the site setup where you can configure the site. So why am I not seeing this? And why does it say admin and not Philip Bauer? That's because we haven't created a user yet. And that's actually something you should do as a very, very first step. And we're doing that now. Well, you're doing that now if you're following along. So click on the picture here. So OK, I skipped over that. When you open your browser, your site in your browser, you have to log in for the first time. And the password is admin. And the user name is also admin. And please never use that in production. That would be pretty stupid. This is just for development. You can use admin or test one, two, three, or whatever you fancy. But if you ever host a site on the internet, that will certainly not be OK. So click on the portrait. Click on site setup. And here you have all these links to various control panels. And click on users and then create a new user. So how do you do that? Here is a small icon to the left where you can add a user and then pick your name. I'll use Pbauer. I'll use mine. You use yours. Bauer.de. Password. I always use tester for these trainings because it's five letters. It's lowercase. It's small. And give yourself the role of, no, you can't. You can give yourself any role, but not here. Manager. Yeah. Add yourself to the group of managers or to the, oh, OK. Complex. Add yourself to the group of administrators. Then you're an admin. And that's how I'll get to that. How complex systems should be set up. You have groups. Members go in groups and groups have roles. So every system does it that way. So this shouldn't be new to you. Everyone does it like that. So now I created this user. I should probably save. No, I already have saved. So he should be there. Yes, he's there. And now I can log out, hopefully, here. Log out and log in again as Pbauer tester. Log in failed. Super. Why? Let's see. Maybe the default is the email address already? No. Did I do a typo? Crap. I seem to have made a typo somewhere. What's wrong? Does anyone know what I did wrong? I can always. Your press don't cancel. What did I cancel? Oh, you think I pressed cancel? Because I got my user here. He should be OK. I can delete that. I can delete me again. 
I'll just do that and create me again. OK, username Pbauer. BauerHTWD tester. OK. Test groups. Administrators. OK. Let's see if it works now. User created. Good idea is to have two browsers where you can log in as different users to try this stuff out and not having to log in and out all the time. Yay. I obviously made a typo when I entered my password for the first time and we grabbed that typo. And so I had no idea what my password was in the first place. So now I can actually, I can really log out and log in as Pbauer. Yes, here I am. So now it actually tells me my name. I can edit my profile and add a picture here. I'm not going to do that. You're going to see enough of me. So here I am. I'm an administrator and I'm a real-clone user. Excellent. I did my first job. So next up. We're not going to configure a mail server, but for production purposes, you should do that. Because we have an add-on called products printing mail host. We're not going to send any emails during this training, hopefully, unless you disable this add-on. But you should, hang on. Why am I not seeing site setup and only my preferences? Because I don't seem to be an admin. I can't even add content. So yeah, so the thing is, I'm not sure if that's to blame, probably not. Probably I'm to blame because I couldn't press Enter properly. This training uses clone 600 alpha 1. And that was released yesterday afternoon, I don't know, around 4 o'clock. I saw that and I updated the training build out this morning to actually use this version. So it's brand new and may still have issues. Maybe not. Maybe it does. So maybe that's to blame. Maybe it's me, probably me, because this alpha is pretty good. So I'll need to go to site setup again and go to groups and add myself to the group of administrators. Admin? Yeah, how do you do that? I'm not going to delete that group. Go away here. OK. This changed in, why can't I add a user to a group? Katja, can you tell me if that is something I missed? OK, I'll just make me a manager and save. But it should be in the user interface should allow you to click the group and edit the group. Maybe there, but there's only delete and not edit. So add users to a group should be possible. I'll show you that. The point is, if something is not working, you're not crap, let's use another system. You can't keep this system, even though it's alpha 1, because all of this is also available in the back end. And you have a list of users and groups and the same user we just created is here. And here's actually the link that I was looking for. You just click on that. And then you can say group memberships. And I want to, yeah, I'm not, I'm only in authenticated users, but I wasn't added to probably I didn't press save at some point. So now I'm the group administrators, which is fine. And I can, I'm a manager because I'm in that group. So I'm inheriting that role. I'm actually, yeah. So now I should be here when I reload. So this is a very important moment where you learn about the power of these roles. I was a second ago, I was just logged in as Pbauer. As a user, I had no permissions to do whatsoever. And someone changed my permissions and I reload the page. And suddenly I get all these icons. So now I'm a manager and I have the power to modify, change, and create content, add users, and whatnot. So that is important. But what I actually wanted to do is go to the control panel and get rid of the annoying error message that you get at some point when you're not adding, oh, there's actually a default set now. 
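As an aside, the user-and-group steps we just clicked through can also be scripted with plone.api; a sketch where username and password mirror the video and the e-mail address is a placeholder:

    from plone import api

    api.user.create(
        username="pbauer",
        email="pbauer@example.org",     # placeholder address
        password="tester",
    )
    # membership in Administrators is what grants the Manager role
    api.group.add_user(groupname="Administrators", username="pbauer")

    print(api.user.get_roles(username="pbauer"))   # includes Manager via the group

Back to the mail settings control panel.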
Oh, that didn't used to be the case. That's fine. So we have a default mail server that obviously doesn't exist because I don't have a mail server running on my laptop. But that's fine. OK, let's create some site structure because that's like the first task at hand. There's a plon conference. People need to learn about the content of that conference, talks, whatever training stuff. Without programming, we just use plon to create some content. I will create a couple of folders here. So yeah, actually it says some of these need to be a page. Some of these should be a folder. So let's just change that text here. Yeah, it doesn't really matter what I add here. Let's can add some. You see while we're doing that, you see a interesting new editor. If you ever use plon 5 or 4, the editor looked much different. We're going to go into much more detail there. But we'll slowly, slowly approach that. So let's just create some page called training. OK, page training. And I'll just add some Latin nonsense. A folder called schedule. Folder called schedule. And you see it's a different user interface. How come? That is weird. You can think about that while I'm continuing to do that. I'm not make sure that you're not adding, because once I added the folder schedule or the page training, I was in training and in schedule. And if I create new content, it'll end up in there. But we want that in the site route. So click on the plon logo before you create the next item. This is the location. Page about sponsors. Oops, page. Come on, see. Page about sponsors. Again, different user interface from folders. Page for sprints. Sprint and a page contact. Don't worry if you're not creating all of these. We're not using all of these. For the whole story, it would be a useful setup if we do a one week training, which we're not going to do. So this is how it should look like from your end. And then once we did that in, oh, actually, we should have kept a news folder. I shouldn't. That is stupid. I removed that. Why is it as it doesn't say news here, but in news. Katya, can you make a note that we update this part somehow? Yeah. I'll create a folder called news, where we add a news item called conference website online. That's like the first thing we created. So the order is weird. So let's change the order here. So again, we go to contents. And we can just grab one of these items and move that to the top. News are the most important things. So they should be the beginning. And here they are automatically at the beginning. And I'm going to create a news item, obviously. The title is this. Let's add some nonsense text here. And actually add an image. And a summary will be good. So you see the user interface is different from pages, because you can keep that in mind already. The editor for all types, except for pages, is schema-based and only schema-based. And the pages use a block editor, block-based editor. It also has a schema, but it's small. So this, by the way, it looks like not much, but this is a rich text editor here, this small field here. And you can add all kinds of stuff here. You can add headings and subheadings. Let's say this is the subheading here. Here is a heading, or subheading maybe. This is going to be italics or whatever. And this is a link to something. Where my links, here is a link. So I can select a link to something in the site, for example, to location. And done. And when I save, this gets rendered, and here's my link. So this works as expected. OK, I created a news item. 
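If you would rather script this structure than click it together, a rough plone.api equivalent of the folders, pages and the first news item looks like this (titles follow the video, the short names are derived from them automatically):

    from plone import api

    portal = api.portal.get()
    news = api.content.create(portal, type="Folder", title="News")
    api.content.create(portal, type="Document", title="Training")
    api.content.create(portal, type="Folder", title="Schedule")
    api.content.create(portal, type="Document", title="Sponsors")
    api.content.create(portal, type="Document", title="Sprint")
    api.content.create(portal, type="Document", title="Contact")
    api.content.create(news, type="News Item",
                       title="Conference Website Online")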
Then I'll add another news item just to annoy you. New item, submit your talks, more Latin nonsense. OK, let's add an image, because it's nice to add images. Another image. And so we have two news items. And now let's switch to a different browser and logout. You don't have to do that. I'll do that. How do I logout? Am I logged out? No idea. I am logged out. Good. So I'm logged out. Where's my content? None of the content is actually visible, because we only created the content. We haven't published the content. So Plone is a super secure system in such a way that when you add content and you don't publish it, there is no way, no way, that this content will be visible to users on that visit your website. And that is not because it's not hidden away. And if you know the URL, you will find it, for example. If you copy this here and you enter the, it tells you, no. This is not for you. And Plone has a very, very strong security record. And it's built in on object level. Each object has a couple of attributes that store, that define which roles you have to have to be able to see this item. And whenever something is traversed to, Plone checks for these roles. And you don't have to do that. This is, Plone does that for you. That is one of the reasons why CIA and FBI are using Plone for their websites. I'm not sure if that's a positive recommendation or not. But I guess they're kind of security sensitive these organizations. So yeah, also we should have a folder called events, which we don't have yet. I'll create that. So you see a lot of additional stuff that you can add there. And in there, I'll add an event. And you learn about another content type. What's the event deadline for talk submission? I know the dates are going wildly around like 2035, 2550. Who cares? So here you can add start and end date for an event. Let's say this is on Monday. Because I used English as a language, the calendar seems to think that the beginning of the week is on Sunday, which I think is weird. But that's America for you. 10 in the morning, 2 in the afternoon, that seems fine. There's also a whole day and open end. And we'll figure out why that is there. Also, still I haven't finished talking about the publishing on content. If I reload this, none of this is still public. I will need to publish that, click on the three dots, and switch it from private to public. You can also switch it to review, which is like a pending state where something is not visible to the public. But if you have a more complex setup where you have more users with more different roles, you can easily show you the groups again. You see administrators, reviewers, reviewers. That's the ones that you want in this case, because you have a couple of people in your organization that are gatekeepers for content before it's published. That's what this role is for. So this item is now public. The folder is now also public. And once I reload this site, I see events, but nothing else. And I can go, I said there is a feature for bulk editing. I go to contents, I select everything, and I change the state, which is the lights. And I change it to publish and to everything that's in there. And voila, my whole website is public, even my super secure intranet that I didn't want to have published. So make sure you're actually only publishing the content that you want published, because otherwise you may have problems. So here, my news items are public. I'm an anonymous user, and I can see everything. I can't edit anything. I can't even pretend I can edit. 
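Before we look at what an anonymous edit attempt does: the private-to-public switch is a workflow transition, and the bulk publish can be scripted as well. A sketch with plone.api, where publish is the transition name in Plone's default workflow:

    from plone import api

    portal = api.portal.get()
    news = portal["news"]

    api.content.transition(obj=news, transition="publish")
    for item in news.contentValues():
        api.content.transition(obj=item, transition="publish")
        # the review state is now "published" instead of "private"
        print(item.getId(), api.content.get_state(item))

Back to the anonymous browser.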
If I click edit, this doesn't work. I get an unauthorized exception. OK. The default content types already did that, but I'll show you the beauty of the editor again a little. So in the front page, we have a page type. Actually, it's not a page type, but it's page type-ish. It doesn't matter where we have this blocks editor. And when you press Enter here, you can add a new block. And this is a beautiful and super powerful way to create beautiful and very different layouts and sites. You have these, by default, a text block is created that you can just type to. And it's a rich text editor again, where you can have all kinds of different styles, call out, whatever type headings. You can add lists and stuff like that. But there are more than these types. There are image blocks where you can add an image. Let's upload an image. And it should be, is there something in the story? What kind of image I should upload here? Probably a Plome logo or something like that. Let's add a Plome logo. It's uploading the image in the background. And you can configure each block on the right side. There is a toolbar to the right. You can tell this block, OK, this should be small to the right. This should be hero sized to the left or full width. I'll make that small. You can add a link to that image. And then you can save the page. And voila, yes, there's your image. And anonymous users also sees that instantly. OK, what else do we, yeah, let's add a table like thingy here. Here is a table. Let's see. Stuff, whatever. There's a nice way to have content in tables. And there are very powerful blocks that contain dynamic content. In this case, I added a, what did it say? Listing, a listing block. And the listing block is it displays the results of a search that you define. So for example, here you say, add criteria. I add all news items, for example, matches any type is news item. And it automatically shows me while I click there, I already see the news items that I want displayed there. I can change the variation where I actually have an image gallery or a summary view where I have these little pictures. And I can also add a list of the items that I want to display. I have these little pictures. I can change the sorting. I'll do that by publishing date, which is called effective, yeah, effective date, reverse order. So the newest one is on top, limited to five. So we don't get spammed. And I save that. And I have a semi beautiful front page with dynamic content in here. So this is the page content type, which has this powerful editor that is probably all you need to create beautiful content. But it also has the schema that I mentioned when I sent the schema based content and the fields that you saw when we added a folder. You can expand that and you have all these fields that are defined in the schema available. So you can tag content, for example, from page, whatever it is. It's now tagged as that. You can set the language. If you have multiple languages in your site, you can add a link to something else as a related item. You set a publishing date and an expiration date and stuff like that and copyright information, allow discussion, and whatnot. A word of warning about publishing date, this is not workflow. So if you set the publishing date in the future, this doesn't hide. It hides your content, but it only hides you like this. Everybody knows you're there, but actually you think you're not, unless you're a cat. But so if the URL is, if you know the URL, you can access that. 
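Under the hood the listing block configured here boils down to a catalog query. Roughly the same query expressed with plone.api, where effective is the publication-date index mentioned in the video:

    from plone import api

    brains = api.content.find(
        portal_type="News Item",
        sort_on="effective",
        sort_order="descending",    # newest first
        sort_limit=5,
    )
    for brain in brains[:5]:
        print(brain.Title, brain.getURL())

Back to the caveat about the publishing date.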
It's just not showing up in the navigation. OK, so that is the page. A folder is obviously a folder, but only displays a title and a description and can have different types of listings for the content inside that. So for example, the News folder has a view called Listing View. At the moment, I can change that to Summary View or Tabular View or whatever. There's a couple of views that you can see, but it doesn't have a rich text editor. You can't add rich text here. That's a folder. It is created to create structures. File, obviously, is for binary data. Image, obviously, is for images. Event, you already saw that. A link is also a content type that can link to either internal or external content. So you can add a URL here. Or you can select an item from the site that you link to. So it's a very powerful feature. And only if you are allowed to edit that link, you get redirect. You see the link when you click on it. You go to the link, and it points to this and there. But if you're not an editor, you actually automatically get forwarded to whatever the target destination of that link is. So that's a super powerful thing to create ad hoc links to locations that are actually somewhere different than where you are. A news item, you already saw that. That's basically like a document, like a page, just with an image and a different editor. It doesn't have the blocks editor. A collection is, we're not going to go into that. It's like the listing block, but as a content type. So there's a content type that does exactly, exactly the same thing as a listing block. And if you're asking, why do we have that? That's because listing blocks don't exist in classic clone. In classic clone, when you go here, this is the same database. You see all the content that was just created by me here, deadline for talk submission with date and time and the news items, including the images. So it's all the same. This doesn't have the blocks editor. So it doesn't have a listing block. And a lot of sites are using, still need to display dynamic listings of content. That's where collections come in. For clone six with the React front end, you probably don't use them. OK, we'll skip. You should read that content rules are so powerful, but we'll skip that because it's way too, we have way too little time for that. Also, working copies are super cool. Placeful workflows are great. You can create a folder and say, this folder is called internet. And it has a different default workflow than the rest of the site. And this default workflow that you're configuring there doesn't have a published state. So it only has, I don't know, private and internal. And so you only see this content when you're logged in user. So it's a perfect internet. So we have a lot of clients who have a website with clone and have an internet. And it's the same website, but it's a folder that has a placeful workflow where you are not able to see any of the content that is in there. So that's a powerful feature and a lot of other powerful features. If you don't know them, read up on that. We'll just keep on going. It is already 10 past four. I'd say when shall we make the first break? I think tomorrow maybe. Let's keep on going a little more, like 20 more minutes. If you don't have the attention span, just get some coffee and catch up what you missed on YouTube. Or yell at us in the chat. I see Sean had a question here. Kadi, can you look at that? And if it's important, we can discuss that in there. I have a look. Excellent. 
So yeah, we could probably skip all that, but it's not super unimportant. So if you're new to Plone, this is important. Plone has been around for 20 years. Yes, it's been the 20th birthday of Plone. Two or three weeks ago was the day. And like 20 years ago, there was no React or JavaScript framework. So it is obviously, it started as a back end system, this one. And it didn't start from scratch. It started as a user interface for a system called Zope. And Zope actually also started as something before, Bobo. That's what it was called. And it actually had another third name, whatever. So it's Plone, it sits on top of Zope, which is written in Python. So at the lowest level, there's Python, the programming language. On top of that, it's Zope. It's like an application server. If you know Java, Tomcat, comes to mind, for example. Then on top of Zope, there is CMF, is Content Management Framework. So it's a framework to build content management systems. So if you know some computer science theory, it's like a factory factory. And then the whole thing on top of CMF, the Content Management Framework, sits Plone, which inherits, like it's multi inheritance from CMF and from Zope, and a lot of objects on top of that. It uses a database called CMF, called the ZODB, Zope Object Database. And it's not a relational database. It stores Python objects natively in the database. They're serialized as pickles. If you have never heard of pickles, it's in the standard library. It's a way to serialize instantiated Python objects as stringish types. It's not meant for human consumption. You don't read that as a human. But it works great in the ZODB because you don't ever have to think about your database. You don't have to model or manage your database. Obviously, you need to back it up. But you don't have to write a schema for the object relation manager, the ORM. So we store the objects directly without an ORM. And there is no scene between the code and the database. This is an example how you actually store something. You inherit from persistent. Not going to go into that. And it's super powerful. It used no SQL when that name was not even invented. It can use multiple clients that talk to the database. And that's called ZEO. It can replicate using a system called ZRS. Or, alternatively, you can use Rails Storage to store these pickles, not in the ZODB, but in a relational database. Again, in Postgres or MySQL or MariaDB, you're not going to have tables with my title, here's my description, here's my relation to whatever. But you're going to have a Python pickle in there. So this is, again, not good to read for you. But it is Rails Storage. So you can give it to your database admin to say, OK, you guys take care of replication because I don't want to be bothered with ZRS, for example. So that's powerful. And binary data, obviously, is not stored in the database. These UDB as strings, but stored as files in the file system. It's called Blob Storage. And that happens automatically. Here, I'm not going to go into ZOP and CMF. There's a lot of history in here and nothing about Pyramid. There's a lot of to be done. The important thing here and now is that Plone has, for a couple of versions, now has a REST API and uses the REST API to decouple the back end and the database and the logic and the content from the front end that is called Volto. We should probably write something in there. Not going to go into the history of Plone. If you want to discuss that, there is plenty of time during the conference. 
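The "you inherit from persistent" example, spelled out as a small generic ZODB sketch; this is plain ZODB rather than anything Plone-specific, and the Data.fs path is an arbitrary example:

    import persistent
    import transaction
    from ZODB.DB import DB
    from ZODB.FileStorage import FileStorage


    class Account(persistent.Persistent):
        def __init__(self):
            self.balance = 0.0

        def deposit(self, amount):
            # attribute changes on a Persistent object are tracked and
            # written to the database on commit -- no ORM, no table schema
            self.balance += amount


    storage = FileStorage("Data.fs")
    db = DB(storage)
    connection = db.open()
    root = connection.root()
    root["account-1"] = Account()
    root["account-1"].deposit(42.0)
    transaction.commit()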
This is Plone 6. You'll see it while we talk about that. And it only runs on Python 3, obviously. So Python 2 is dead, dead, dead. But you can migrate to Python 3 if you have a Python 2 database. There are talks about that. One is actually by me, I think, on Wednesday. Katja, I think that's one of your chapters, isn't it? Yeah. Excellent. So I'll stop my screen sharing and mute. Hi again. So Plone 6, what is Plone 6? What is new, which changes came with this version? Philip already mentioned the very important points. So what leads to me is, if you know Plone 5 already, I think most of you do. Why is there an inversion? What's new? For me, my personal opinion is that two important goals to review, to rework the whole thing is to achieve a new editing experience. The editing experience means how does an editor compose a page? There were several add-ons in Plone before. And the meaning was this could be easier. There are new techniques available. And we want to use it also in Plone. So yeah, as I said, to make the life of an editor a little bit easier. And Plone 6 itself with the Volto front end, as you've seen with Philip's presentation, comes with an editor, not an editor, a way to edit pages that use blocks where you can break your text in blocks and rearrange. This is part of default front end of Plone 6. But there are also already add-ons, which help you do even more to compose a page with columns and so on. And beside the editing experience that I think is really improved is that the development of a theme is much easier. Volto or Plone 6, as we say now, comes with several helpers to customize the default theme. It's possible to customize single isolated components. And it's also a lot easier to customize default components by the so-called component shadowing, which we will see in the next chapters. And yeah, I think these are two points that are important for Plone 6. About the details, I will skip the long list of cool things that comes with Plone 6. But maybe one important point you have, as you've seen already, two parts back end front end. And these two do communicate via the REST API. There are default endpoints, REST API endpoints, which we can use when we develop with Volto or the Plone 6 front end. For example, if we want to fetch data, we use the search endpoint. And we can also create add-ons with custom endpoints. If this is necessary to communicate in a special way with back end. So yeah, there are a lot of helpers to make the life of a developer easier. Two more component shadowing I already mentioned. The other one, which I want to mention and talk about in the after the next chapter is semantic UI. This is also a cool helper for making the code a little bit to enhance the readability of the code. Because semantic UI or the corresponding read package semantic UI read comes with helper components for small things you don't want to write again and again, like buttons, drop down menus, and so on. But we will see this in one of the next chapters, not in detail, but a short overview. What you can find there and where to look for these helpers to avoid writing a simple code again and again. And to concentrate on the important things of your add-ons, of your apps in Plone 6. And one more thing in this chapter I would like to mention is the question if you want or if you need to switch to Plone 6 or if you can stay with Plone 5 or even stay with Plone 4. And I think it's recommended to use the new front end. 
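To show what the search endpoint mentioned here looks like on the wire, a small illustration in Python; in Volto itself this request is of course made from the React code, the query parameters are ordinary catalog criteria, and the URL and admin/admin credentials are just the local training defaults:

    import requests

    resp = requests.get(
        "http://localhost:8080/Plone/@search",
        params={"portal_type": "News Item", "sort_on": "effective"},
        headers={"Accept": "application/json"},   # ask plone.restapi for JSON
        auth=("admin", "admin"),
    )
    for item in resp.json().get("items", []):
        print(item["@type"], item["title"], item["@id"])

Back to the question of switching an existing project.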
And the question if switching with an existing project, I think it depends if there are add-ons that can help you to realize what you want to achieve or if it's possible to implement that what's missing. And for me, it's important to realize that you can use existing Plone add-ons, back end add-ons, as long as they are doing business logic, as long as they do not touch the UI. So they are still valuable and they are there. And there's a huge ecosystem of Plone add-ons and the corresponding side, the ecosystem of Volto or Plone 6 add-ons is growing constantly. So yeah, I think it's worth thinking about. Thank you. Philip, do you want to continue with customizing Volto components? Sure. Good. So actually, before we customize Volto components, we'll quickly configure and customize in Plone through the web. We're not going to go into a lot of detail here. You should just click and see what's there or read the documentation that is here. And I will show you a couple of the most important things that there are. So when you go to the side route, sorry for that, you have this link to the site setup and you have the control panel. And they can configure, obviously, the date and time. And also, yeah, the first weekday. Excellent. I like Mondays. Actually, I don't. But for the first day of the week, Mondays is my favorite choice. I can configure languages. You can have multiple languages. We're not going to go into detail there. You set your mail server and user. You configure the navigation, which items should be shown up in the navigation, which should not show up. And so on. I can configure the search, what items should not show up in the search. And so on. Here's the most important one, sites, where you say what's the title of your site. For example, site title is Plone Conference 2035. In this case, I can add a site logo. You can't upload that here. That is only for Plone Classic. You see, this is a base64 encoded string. I already uploaded that something in Plone Classic. You'll see it soon. You can configure lots and lots of things here, including a JavaScript snippet for statistics and stuff like that. Save that. Go back. Then there is social media. The URL of the Volto site. That should be configured. That's something for hosting. You can install and uninstall add-ons. We'll do that pretty soon. You can actually inspect the database and see how many items there are in your database. And where it lives, you see it lives in back end bar, file search, datafs. That's your ZODB database. That's where it is. And there's a lot of other things. I'm not going to go into detail because we're going to see some of these later. And most of them are discussed here, especially the ones that are not there. And that's what's important here. So this is the control panel in Volto. And here is your control panel in Classic Plone. I just uploaded a logo. And you see that here, Mastering Plone. And here, it doesn't say Mastering Plone. Because it uses a different rendering. It is different there. And that's in the next chapter, we're going to customize the logo that shows up here. Here, you have the same control panels plus some more. Because some don't make any, it doesn't make sense to have them in Volto. And maybe there may be even others that are not yet implemented in Volto. I think everything that's important is already there. But for example, the theming control panel for Plone Classic doesn't make any sense in Volto. Because we're not going to use, it's a completely different theming setup. 
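Most of the settings in these control panels live in Plone's configuration registry and can also be read and written in code. A sketch with plone.api; plone.site_title is the record name used in recent Plone versions, so double-check it in the registry control panel of your own site:

    from plone import api

    # read the current value
    print(api.portal.get_registry_record("plone.site_title"))

    # change it, the same effect as saving the Site control panel
    api.portal.set_registry_record("plone.site_title", "Plone Conference 2035")

Back to theming, and why the classic theming control panel has no Volto counterpart.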
It doesn't make sense to use a Diazo theme — if you know what I mean — in Volto, so that control panel doesn't exist there. And in Plone you also have access to the Zope user interface, the Zope Management Interface, ZMI for short. There are even more in-depth things you can do there, mostly as an admin or a developer, so you will certainly spend time in it. We will go there later today, for example to inspect the portal catalog, which is the search engine built into Plone. It holds a representation of all the content I already created and makes it findable — the search bar here uses this catalog. It has a couple of indexes that we can use, and you can create new ones; there is a lot of complexity and a lot of power there, but we're not going to use much of it. We're going to skip ahead, and since we already spent an hour and a half, I suggest a five-minute break — let's say seven — so you can visit the restroom and get some coffee before we customize the first Volto component, where we change the logo and a couple of the views, the ways items are displayed. That's where we jump into the editor and do some real development. Before that, fortify yourself with your poison of choice; I'll get some coffee and we'll meet in, let's say, seven minutes. If you have questions so far, please put them in Slack and we'll find time to answer them before the next slot. I'll mute myself; in the recorded video this will just be marked as a break, and you can skip to the next part, because we'll probably be too lazy to edit the breaks out of the video. — I think we should continue; no tough questions in Slack, so let's go on. Next we're customizing some of the things our users see in the browser. This is a front-end chapter in the training documentation — you see these sidebars at the beginning of a chapter; they specify whether a chapter deals with the front end or the back end. If you're dealing with the front end, you go to the folder frontend and make your changes there; if you're dealing with the back end, you go to the folder backend. If you want to follow the training, the link here explains how to check out the code for a certain chapter, either to start at its beginning or to see what the result at its end should be. To customize the visual representation of what you're seeing in Volto, we use a technology — let me hide some of this, it's a bit ugly; actually I won't — that allows us to override existing items. You'll see that pattern over and over again: we're not forking Plone, not downloading Plone to your hard disk and changing whatever you want, which I hear is what you're supposed to do with some other systems that don't have an override mechanism. In React this technique is called component shadowing. It allows you to place a copy of the item you want to modify into a certain folder, and it will be found there automatically.
On startup it is discovered automatically and replaces the component — the file — that you're overriding. That is pretty nifty, and it's basically the same thing you did in classic Plone with an add-on called jbot, "just a bunch of templates": you put a copy of a file into a certain folder, following a certain naming pattern, and on startup it is found and replaces the original file shipped with Plone. I should also mention that this sidebar links to solving the same requirement in classic Plone. If you follow that link you'll see a broken navigation, because that page is not part of the navigation, but it walks through the same or similar tasks for the classic UI — the logo is already handled in the early chapters there, so we don't need to do it in Plone Classic; instead that page modifies the news item template and the listing template, and how that works is all discussed there. We, however, are going all in with Volto, so as a first step we use component shadowing to change the logo. You can download the logo from our website; it's just an SVG file, nothing exciting, it says "Mastering Plone" in some crazy font. So fire up the editor of your choice. I use VS Code, and I'll actually use two instances of it: I already have one terminal for the back end and one for the front end, and I open a third one, go into training/plone6/frontend — that's where the front end lives — and run "code .", which starts VS Code and opens the frontend folder. So I have two editors I can switch between; that just makes things easier for me. From the installation for the training you already have this whole setup, and we'll follow the documentation to add things to our front end. Note that the frontend setup does not contain a Volto checkout: we didn't download Volto and start it directly, we created a package that depends on Volto. Volto is installed, but src is a bunch of empty folders — there is nothing of Volto in there. In package.json, the package you created depends on Volto: under dependencies you find @plone/volto at version 14.0.0-alpha.23, the newest Volto alpha, I think. By building the project with yarn install, Volto got installed, and a couple of nifty things in this package make sure Volto is also available in a folder called omelette. Inside omelette/src you find, say, actions and components and — more importantly for us — theme and things like Breadcrumbs. This is the real Volto; you shouldn't change it, because that would be the equivalent of checking out Volto and editing it directly. We want to override it instead, so we don't touch Volto in the omelette directory — we extend and override it in our own src folder, in our package, which is called ploneconf-volto; I think that's the name of the source checkout we're using here.
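To make the shadowing convention concrete: whatever path a file has under omelette/src, the same path under src/customizations in your own package wins after a restart of the front end. For the logo that means, roughly:

```
omelette/src/components/theme/Logo/Logo.svg          <- Volto's original (leave untouched)
src/customizations/components/theme/Logo/Logo.svg    <- your copy, used after a restart
```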
So we go into the frontend folder and use exactly that path and name for the logo: src/customizations/components/theme/Logo/Logo.svg. I already have an src folder with customizations/components, and it's empty, so I add theme and Logo as the path — components, theme, Logo — and inside Logo I just dump the SVG I downloaded. If I open it in the Finder you can see the structure: customizations, components, theme, Logo, and the SVG inside. VS Code doesn't have a preview for SVG, so I could edit it as text if I wanted to — I'm obviously not going to. Now I need to restart Volto. Is anything unclear about that? I guess not. We stop Volto with Ctrl-C and run yarn start, which takes a while. While that happens, one important point: you need to really restart the process running the front end whenever you add a new file as a customization. When you only modify existing code you don't have to — the development mode of the JavaScript application recompiles automatically and even refreshes your browser so it shows the right thing — but when you add a file, you must restart, and compiling takes a moment. Now it's done, and once I reload I can already see it: the logo says Mastering Plone. Yay for us — that part is done. So you just made a change, and since you are professional developers, or at least want to be, you go into the frontend folder, run git status, and it tells you there is a new folder, a new file. You could commit that, and that's what you should do now — not during this training, because everything is already there, but normally you would commit now, say "add logo", and that change is in your Git history. Okay, that was the logo, the easy part. Now let's go to the footer — no, we're not changing the footer; maybe the heading in the training should go away. Let's change the news item view. Why do we want to change the news item view? That's something that has been bothering me forever: you visit a website, go to the press release or news section, click on a news item, read it and think, that's really nice and exciting, there's a new board member or a new release — but what's missing?
When was that? Was it 2015, and the site is hopelessly out of date, or was it a release made just yesterday? What's missing by default — and I still don't know why; probably so we don't have to change this training and can keep editing news items in Plone Classic and in Volto — is the date. So what we want to display here is the date this news item was actually published. Let's do it. First we need to find out where the news item is rendered. That's a bit tricky and I won't show the whole hunt, so take my word for it: it's in omelette — where Volto lives — under src/components/theme/View/NewsItemView.jsx; it's the same kind of folder structure. Since you know the content type is called News Item, you could just search for "news item" — hang on, why doesn't the search find anything? Files to include, files to exclude... I seem to have misconfigured my search somehow, but I'll find it by hand: src, components, theme, View, and here are the views for all the content types, including the news item view. So here is my component. It's a React component, and I want to customize it, which means I copy the whole file into a different folder. Let's look at it a little first, because it's your first glance at a Volto view in React. It has a couple of imports, and it uses the object you're looking at — the news item, the thing you can edit, the thing that lives at this spot in the content tree, this leaf on this branch. That object is passed to the news item view as content, and the component produces some HTML — that's JSX for you; I won't go into detail about how it all works, but here's one example: if there is a content.title — if the object has a title attribute — then render the h1 tag. You see className here; this is not plain HTML, it's React, and you can't use class because that's a reserved word in JavaScript, so it's className, set to documentFirstHeading, and the content of the h1 is content.title. Then content.subtitle and more, then the description, then the image. The image here is not the HTML tag — remember, in HTML it's called img — it's the Image component from Semantic UI, used to render the image, and you pass it some props: the title, the src, for which a URL is created automatically, and floated="right", which again is not a classic HTML attribute but a prop of that Image component. So that's what the view looks like; it exports NewsItemView and declares propTypes, which you probably know if you do React development. If you're only overriding things you don't need to know all of that yet — you just need to follow the basic logic, which says: if the content has an attribute called text, render it via dangerouslySetInnerHTML — one of my favourite React names — and that then renders content.text.data,
which is the real HTML that is returned from the rest API. So here we have the view, now we want to modify that, so we copy this, the whole thing, into a, yeah, oh, there's, you should probably read that, because that's really interesting, because we should look at that just quickly. I hope I have my developer tools enabled, so why doesn't it, oh, God, I don't have my developer tools enabled, Katja, I guess you will have to do that in your later chapter, I seem to have disabled my React developer toolbar, doesn't really matter, we can redo inspecting an item in a later chapter, but it read that and it's important and it helps you to figure out where that component actually lives and how it works. So copy this file into customizations, components, theme view, so customizations, components, theme view, do we have that already, let's see, so in our, not in omlet, please, so close that, customizations, components, and then in components we need more, we need a folder called theme view, let's here now, so here is theme and view and then there we need the news item and I'll just copy, can I copy the whole file, probably can, if I find it here, where's my news item view, here copy, and I paste it in this folder, paste, here I have it, so again, I need to restart, I'll do that, and I don't get a message that, hey, I found a new customization, you'll just need to see that this happens, and now very important happens to me like every two weeks, don't modify the original, because now you have both items open, modify your copy, yes, it's confusing when you have an editor in the same file open twice, make, if you're uncertain, make sure that it starts with SRC customizations, then you're in the right place. And how do you find out that this is actually the one that is then rendered, once sort of is finished, it just takes a little while, so obviously just add some nonsense to it, and that is visible in the site. So let's see, let's just add a h1 saying hello world. So once I reload, I hope I didn't get an error, still starting up. It has interesting messages, which I will just ignore. And here we already see it, hello world. Yes, I actually already customized that. If I wouldn't make these stupid change like this, it would look exactly like the original, so without hello world. And you see, I just saved and upon saving, this is the message that I get, it does a reload. It reloads the site, the code. So, okay, good. So what was the task we should add a date. We could just type a date there, but that's stupid, we're not gonna do that. So the content I already said that is the item that we're looking at, the news item, submit your talks, and the news item, this item has, okay, I should really enable this feature because this is really important now, where is my React? Oh, I could just use Chrome for that. No, I'll just, developer toolbar, I think that's what it, oh, can't type. This is not it. Come on, React developer tools here, this one. You need this, you need this, no questions asked, you just need that. This is super important when you're developing for Volto or any kind of React stuff. And it's already there, here's my developer toolbar, it says, hey, it's development build, and now I can inspect my components, not the HTML, but actually the React components. I can inspect them, and I can inspect lots of things. I'm not gonna go into too much of the components, I'm not gonna go into too much detail, but this is like the code. 
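While the DevTools tree is open, it may help to recall roughly what a view component like this looks like in code. The following is a simplified sketch, not the verbatim Volto source — the real component handles more cases (image captions, propTypes, URL helpers) than shown here:

```jsx
import React from 'react';
import { Container, Image } from 'semantic-ui-react';

// Simplified sketch of a shadowed news item view.
const NewsItemView = ({ content }) => (
  <Container className="view-wrapper">
    {content.title && (
      <h1 className="documentFirstHeading">{content.title}</h1>
    )}
    {content.description && (
      <p className="documentDescription">{content.description}</p>
    )}
    {content.image && (
      <Image
        className="documentImage"
        alt={content.title}
        src={content.image.download}
        floated="right"
      />
    )}
    {content.text && (
      // content.text.data holds the HTML returned by the REST API.
      <div dangerouslySetInnerHTML={{ __html: content.text.data }} />
    )}
  </Container>
);

export default NewsItemView;
```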
Here is my news item view, and inside this news item view, when I at some point I find it, it's pretty complex tree, but here's the news item view component, and it has a content object, and this is this. This is the news item view, and it has a content attribute property, it's a prop actually, prop in these properties. Now, in content, in the property content, there is a lot of stuff, and we will figure out what we're gonna need from that, and I already found it. So for example, a lot of stuff is used here already, so type is the news item, there somewhere is probably the title, title here, submit your talks. Here's the text, which is a object itself, and has data, you remember I said this is data, that is rendered down here, content text data. So this item doesn't have any text, let's add some text, so it'll actually show up, and make that a bit more interesting, at least adding one HTML tag, and now where's my view component? I lost my component. Hang on, news item, you can search, which is super useful. So here's my news item view, it has content, and here now the text should be more, so we have data actually holds the HTML, that is passed from the backend via the REST API to the React application that's running here. So what do we need from that? We don't need the text, because when I have that, we want effective in this case. This is a date, and it's automatically the date that is used by Plone, no, created, we use created, it's the creation date. There are a couple of different dates, obviously, there's like the date you were born, there's the date you got your driver's license, and the permission to drink alcohol, and these are different dates, and depending on for content, it's also, if you write a news item, but you only publish it two weeks later, you need to talk to your client, what should be visible there, because every client has a different idea of what should be visible there, and what should be visible if the item is not yet published and stuff like, so there's huge complexity, and there's also the modification date. So if you have a news item from 2015, and someone changes the typo, is it suddenly a super new news item if you use the modified date? Yeah, so you see that's pretty complex. So created is the date we want here, so let's just use this thing, and add that here, content.created, and save, and I don't need to do anything else, because after I pressed save, you see this automatically happened, and this automatically happened, even didn't have to reload my browser, so that's modern JavaScript development for you. The tool chain is really, really good. The result obviously is total crap, because this is nothing a human can read, but it is a date, it's a visual representation of a date. We'll use JavaScript to fix that, we could do some nifty string replacement and changing, but we're not gonna do that. Instead, we use the power of the vast ecosystem of JavaScript libraries, and one is the moment library, we'll import moment from moment at the top here, and if you have a good editor, tells you, hey, you imported something, but you're not using that, please get rid of it, we'll use it in a second, be calm, and then pass that attribute that, where did we put it, actually after the image, not before the image, pass it here and move it after the image before the text, and don't, do we need a paragraph, oh, we want a paragraph around that. Okay, save, and again, if you have a good editor, it automatically takes care of indentation and form a code formatting. 
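For reference, the change being made here is roughly this — a fragment, not a complete file; the import goes at the top of the shadowed NewsItemView.jsx and the paragraph goes after the image and before the text:

```jsx
import moment from 'moment'; // added at the top of the shadowed file

// ...inside the returned JSX, after the image and before the text:
<p>{moment(content.created).format('LL')}</p>
```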
So we're passing this date — it's actually a string — to the moment library, together with a format; LL is one of the default formats you can use. You can read all about it in the documentation of the moment.js library; we won't. I didn't even have to reload, and we already have a readable date here. Excellent, we're done — we have the date. There is a whole discussion in the training about which date to display, and we'll use the solution given there: use effective if it is set, otherwise fall back to created. That's not really a try/except — it's a simple logical-or fallback, sketched below. Save, and a good editor even adds these funny whitespace markers, which are ugly — I still hate JavaScript with a vengeance, but it is a lot of fun to develop with. So that's it: we did our first and second customizations, the logo and the news item view, and we have a date. Now you visit your website, look at the list of news, and you notice something — why is it blue? That's just selection nonsense because I clicked there; let me unclick it. So I have my news listing, and on the item I have my date, but in this listing view I don't have a date. Darn — one more view to customize, and another task for you. In this case we're customizing the summary view, which is not a view for one content type but a view that can be assigned to folders. What I want to say is: there is more than one view for folders; you can go to a folder and say, I want to look at this as a listing, or as a summary. Here we'll use the summary view, otherwise our changes won't show up. Again we can use the inspector to find the component that is used — summary view, here it is — and, wow, there's the code. What happens here?
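To wrap up the news item view before turning to the summary view: the fallback mentioned above is a plain logical-or, roughly this — moment happily formats whichever of the two date strings is set:

```jsx
// Prefer the publishing date, fall back to the creation date:
<p>{moment(content.effective || content.created).format('LL')}</p>
```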
Nothing happens if I click here, interesting, so summary view is the name of the game, and I can use my search, which is not working again, why is it not, yesterday it worked, oh, now it works, you, no, it doesn't work anymore, I have no idea what's wrong there, probably I configured it to not follow symbolic links or something like that, my editor, so I'll look for the summary, if you please fix your search, if you have the same issue as I do, it's in the same folder actually, so here's my summary view, I'm just gonna copy the whole file and drop it into the same folder, paste, paste, paste, where's paste, paste here, summary view, so now I added this file, and again, I need to start up, gonna skip through this part, but yeah, no, I'm not gonna skip through that because that's important, while I explain something, I'll restart, so you remember I said that in the news item view, the view component was passed the argument content, which is the content object that we're looking at, the instance of a news item that we're looking at directly, in news, in summary views, we're also looking at a content object, but the content object is actually not the news item, but a folder that contains these news items, and inside this summary view, the logic is iterating over the argument items or the property items of content, so content, which is the folder, obviously has something called items, and we're doing like four items in items, I always try to read JavaScript as if it was Python, because I really, really prefer Python to that, and then we have item is one of these items in the list of things that are displayed there, and then these again have URL, title, image, description, text and whatnot, so the same applies to item that used to apply to, again, I have two summary views open, don't modify the wrong one, close the original. So after I restart finished, so if I, oh, I can, let's try that, that's gonna be fun. 
I already did that part, so I won't type it again: I just paste my code in here — somewhere under the description, it doesn't matter much — save, and say we're done. It reloads the whole thing and tells me: moment is not defined. True, I didn't import moment. Import it, it reloads again, and it works. Except: imagine next week you add a new item and everything shows the same date. That's the mistake I just made — I used content.effective instead of item.effective. Everyone does that; everyone falls into the pitfall of picking the wrong object once. The solution here is to use item.created and item.effective, with the same display format (the corrected fragment is sketched right after this paragraph). If a news item has different creation and publishing dates, the displayed date changes accordingly. Can you actually set that? Yes. Imagine the conference website already went online last week: you go to the item, and under dates you set the publishing date — we were early, that was on Monday — and save. When I hard-reload the item it shows October 17, which is correct, the date I selected. But in the summary view it doesn't update — maybe some crazy caching, because the code is fixed: it says item.effective or item.created, and item.effective is what we display. I'd say this is a caching issue, and it's always good to kill things by restarting before doing further debugging, because cache invalidation and naming things are the two hardest things in IT — cache invalidation being number one. Hopefully it was that. We'll skip the next part — let's see if it happens here — you can do that at home. There is also a listing block; we really should cover it, but it would cost time we don't have, so in chapter 13.6 you can customize the listing block yourself. You remember we used a listing block on the front page to display news items. Meanwhile, the caching theory is obviously wrong — so what is going wrong here? Did I modify the wrong file? Oh yes, I did exactly what I told you not to do... no, I didn't, sorry, it is the file in customizations. Katja, do you have any idea why that's happening? — I first thought it was a typo, but the spelling is okay. — Yeah, I think so too. Let's add foo to check that I'm editing the right file — foo shows up, so that should be fine. I have no idea; it should work. Just tell your client "it should work" — that's always good enough. No, it's not. Let's move on and pretend it works. In the next chapter we modify the listing block, which we can use — if we edit the front page — to display our news items, and we want the date to show up there as well. The tricky thing is that this is not a view for a content type, so it lives in a different place.
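The corrected summary view fragment referred to above, as a sketch: inside the loop over the folder's contents it is the individual item, not content, that carries the dates:

```jsx
// Inside the shadowed SummaryView, within content.items.map((item) => ...):
<p>{moment(item.effective || item.created).format('LL')}</p>
```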
The listing block template lives in components/manage/Blocks/Listing/DefaultTemplate.jsx, which is a bit harder to figure out. You can do it with the React developer tools: you see there is something called DefaultTemplate, and that's probably what's used here, and then you can customize it. A lot of the knowledge about what to change you simply remember or learn, but debugging it is not a problem: change a file, put a typo in, and see whether something changes in the browser — if it does, you're obviously editing the right file. Then you copy it into your customizations, make the proper change, and make a commit, so everybody can later tell from the Git history that it was you who did it. If you do that, the date will show up in the listing block as well. — A short note: if you want to find out what to customize, for example this DefaultTemplate, the React developer tools are very useful, because everything is a component. Once you locate a component with the DevTools search, you usually also know the file to customize, because component name and file name correspond most of the time. The source view here even says ListingBody.jsx and gives you the line number. — There is more to discover there — block extensions, anonymous default views, injectIntl and so on. One fairly important thing we don't deal with in this training, except for this small part, is internationalization. Here it matters because a date is displayed differently in different languages: in English it says October 23rd, in German the same date is written differently, and so on. Even on an English website, people probably want dates displayed the way they are used to. To do that you can use the react-intl machinery and set the locale used by the site — on a multilingual site that's essential, or the browser sets it — and moment.js then picks it up automatically. We're not doing that now; you can do it at home if you fancy it. Let's go to the next chapter, about Semantic UI. Katja, take it away. — Thank you. Semantic UI — we already heard about it. It's a big helper for developers: it lets you concentrate on the important things and use the components that Semantic UI, or Semantic UI React, provides. These components are things like buttons and labels, and they help structure the page with so-called containers — boxes that already come with sensible padding and margins. Let me skip ahead to the talk chapter, where we will create a view for our talk content type. It will look something like this: we have a label for the audience of the talk, and we take the Label component from Semantic UI, which is something like a themed widget where you can easily choose a variation — in this case the color, or also the shape, and so on; for example, something like the snippet below.
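A hedged sketch of that kind of usage. The component name TalkTeaser and the audience field are assumptions for illustration — the real talk view and its fields are only built later in the training:

```jsx
import React from 'react';
import { Container, Label, Segment } from 'semantic-ui-react';

// Illustrative only: a themed label plus a padded segment around the content.
const TalkTeaser = ({ content }) => (
  <Container>
    <Segment padded>
      <Label color="olive" ribbon>
        {content.audience || 'Beginner'}
      </Label>
      <h2>{content.title}</h2>
      <p>{content.description}</p>
    </Segment>
  </Container>
);

export default TalkTeaser;
```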
So you take a component from Semantic UI, set a few attributes, and it looks fine, and you can concentrate on the main part of your development. Another thing we use very often is the container — that's the bottom part of this page: a box with a border, and the default padding is usually fine — so you put the content you want to display into such a container and you have a structured page. Semantic UI comes with a default theme, and Volto's default theme builds on the corresponding React package, Semantic UI React, so if you use these components your additions fit in well with the default theme, which is called Pastanaga. Now let's step to the next chapter, where we do some small customizations — not writing or shadowing components as in the last chapter, but only modifying some CSS. A typical example is changing the font, the font size, or, in my example here, the letter spacing. For that we write our own CSS rule, and we can use predefined variables — in this case fontName. To see where we do that, I should... — sorry for interrupting, I think you're clicking on the browser inside the Zoom window, not on the real browser; minimize the Zoom window and then you can use your own browser. — Okay, thank you, now you should see my editor. The theming works like this: we've seen the omelette folder with the Volto code, and besides the source code with components, actions, reducers and so on, there is a folder called theme. In there Volto ships the Pastanaga theme and also the default theme. Default is what Semantic UI itself provides; Pastanaga is the theme of Volto, of Plone 6, and it comes with its own variables and CSS rules. If you want to customize the font, for example, you'll find a variable for it. It's not one big CSS file — it's organized by topic, and something global like the font lives in the globals folder. Most of the files there come in pairs: a variables file and an overrides file. In globals/site.variables we find our fontName variable, which we can modify to change the font, and the other half of the pair, site.overrides, is empty here — normally you would find CSS rules in it. That's what ships with Volto. To override something ourselves, we create a file with the same name under the same path in our own theme folder, taking the path from the theme name on: globals/site.variables if we want to change a variable, or, if there is no variable for what we need, a custom CSS rule in the matching overrides file — globals/site.overrides for global things, or something more specific like collections, menu, table, or the very important form overrides. We pick the matching file and note its path.
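The next steps create exactly this kind of override pair. As a sketch — two separate files shown together here, with the Lato font from this chapter and an illustrative letter-spacing value:

```less
/* theme/globals/site.variables — change a predefined variable */
@fontName: 'Lato';

/* theme/globals/site.overrides — or place your own CSS rule */
h1.documentFirstHeading {
  letter-spacing: 0.05em; /* illustrative value */
}
```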
We are in collections, file form override and then we go to our app code. I close the omlet and I close the source part of our app. At the same level as the source folder, we have a theme folder and we change to this. I already prepared two folders, Globals folder where you find the customized site variables. Here I have my font name changed to Lato. This is a font name of Google font. If we use Google font, we can just modify the value and Volto cares about getting the source from Google Google font website. This would be the way to change the font. The other one would be, for example, to customize the letter spacing. Then I create the other part of the site modifications, site overrides and I place my CSS rule here. Then as we created new files, we have to start Volto again and then the modifications can be seen. I think it's easy. Questions? If not, I would hand over to Philippa then. Thank you. Wow, that was quick, the seeming part. Let me share my screen. I found out that we have a bug. We have a bug in Volto. We should have realized that a bit sooner because when we changed the view to summary view here from list, summary view, the picture is here. The images didn't show up. There are no images. In the template, there is an image tag. The objects obviously have an image, but they're not showing up. What's going wrong here? The reason is I didn't have time to find the change log message, but at some point in Volto, it's not that long ago, the default way that the content of folders is returned by the rest API to the front end was changed to not contain all information about the objects. How did I figure that out? That's important for you. Inspect components, look for the... Just use inspect to find the wrapper. Then we have the container. Where was that? Summary view. Here is the summary view. That's what we're using. It has a property called content. In the template, you remember I showed you this, we're iterating over item for item in items, for item in items of content, for item in content items. Here's the summary view. Here's content. Here is items. If I expand this, if I expand the first item here, it should become obvious that this is not what we saw when we looked at the news item earlier, where we saw lots of information. Here, we only see a tiny piece of information that is returned by the rest API, because this is the data that is available to the view component here. There is no effective, there is no image, and this is why image is not rendered. There is an error in the way that the summary view is constructed. It's not getting all the data from the items in this folder. This is why it's not possible to display a date. Why does it show a date then? Here comes the next crazy thing, the moments library that we're using to render the date. Let me go back to the original way. If you don't pass anything to the moments library, it returns the current date. It's similar to Python, date time, if you call date time, it returns the current date, at least the update time. I don't think Python date time does that. Maybe it does, I'm not sure. This returns today's date, and five years from now, this news item would still be from today, which is super stupid. I have no way of fixing that right now, but it's certainly something both creating a ticket for. Just to show you that, there is here, when I save it, you see this actually changes. Now we have a different visualization, because I changed the format, but the date is still, a date is still there. 
So, yeah, it's not our fault, it's Volto's fault: it makes no sense for that moment call to be in the summary view if the item isn't passed to it. How do you create a ticket? We haven't talked about this yet: Plone is an open source system, maintained by a huge community of volunteers — there is not a single person paid for any of it. Katja and I don't get anything for giving this training — well, we do, but it's a symbolic gesture — and we even had to buy our own tickets for the conference, so we're doing this out of the sheer goodness of our hearts. The same is true for Volto and all the development that goes on in Plone: it's all open source, it's all out there. Volto is a repository on GitHub with a list of issues, and I could simply create one explaining what's going on here: images don't show up in the summary view because the items passed to it don't contain all the necessary information. There is actually a later chapter with a small part where we discuss this behaviour, which is really a feature, but I wasn't aware that it effectively breaks the summary view. — And you're also invited to file an issue if you find something unclear in the training documentation. — Very true. The training itself is open source as well — not GPL, but Creative Commons Attribution 4.0, whatever that means exactly; I'm not a lawyer, but it roughly means you can do whatever you want with it as long as you credit it. You can create a ticket saying something is wonky, or make a pull request and change it. All the trainings on training.plone.org live there, Mastering Plone is one of them, and we are right now in the volto-theming part — that's the file we're looking at — so feel free to make a pull request. I'm not going to fix that summary view issue right now. Okay: we already saw that we can customize Plone by overriding components in React using component shadowing, and I briefly mentioned that we can do something similar in Plone Classic by putting a copy of a file into a certain folder — basically the same approach. The point of this chapter is that Plone is built on the idea of being customized and extended, not forked and branched: you can take almost every part and aspect of Plone and adapt it to your requirements. There are a couple of extension technologies discussed here; I won't go into detail — we'll meet them soon and see how they hook into Plone in different ways — but essentially there is a whole system built for customizing Plone. Why is that? Let me quickly go into the back end of Plone, the original back end, the Zope Management Interface. It is not only for looking at items: Zope was originally built as a development environment in which you can write code in the browser.
So there's this select type to add and then there is this script Python thingy where you can actually write Python code in the browser. And when you go to the URL of the idea of the object that you created, this Python code is executed in a safe way. So there is actually a whole huge package called how's it called? Restricted Python that makes sure that you're not deleting your hard drive by writing a Python script because obviously in Python you could do that, can delete your whole system. But you don't want to do that. So this is, it's basically, it's in a sandbox. It's a safe subset of Python commands that you can use in there. So Zope, when it was invented, was the whole idea is to give the user the power to customize it and extend it through the web in the browser. And the same approach is true for file system development. There's a lot of hooks and crannies that you can plug into to make Plone do what you actually wanted to do. So that's, and that is what we're doing. That's why in the whole training, especially in the, we're not going to write a lot of code. Most of the code that you write like lines of code wise is copying templates from one place to another. And it's not writing, it's copying and pasting. And the things that we actually write ourselves are not that much. So, and to do that, to allow Plone to hold our own code in the back end in this case, we need to write a Python package. And there is a lot of existing add-ons already, Python packages that are connecting to Plone that you can use to connect to Plone and extend Plone. And this chapter mentions a couple not going to go through that because we're not doing classic Plone. Most of them are using classic Plone. There's a later chapter about Volto add-ons that also exist. There are add-ons that change Plone back end that provide new content types, for example, that you can also use in the front end. And there are add-ons that provide functionality that is used visible in the browser that make no sense in Volto. So there you need a Volto React component to talk to the back end to that. So, in some cases, you actually have a Plone add-on that comes in two, in two, comes as two parts. And that is what we're doing in this training. We have one Plone add-on called Ploneconf site, which is a Python package that we create to provide extensions to Plone and overrides to Plone the back end. And we have Ploneconf-Volto, which is a front-end package that extends the front end of Plone. And both are not forks or branches. They override the minimum time that is necessary to make that possible. So we're going to skip this whole thingy and we're going to quickly... Oh, God. I could talk for hours about that, but I'm not going to do that. So this is to customize, to configure Plone in the front end. You have this package JSON that tells you it requires, I told you that before, this Ploneconf site package, Ploneconf Volto package, JavaScript package, NPM package, depends on Volto. So it extends that. In Python, we have something similar. And I haven't opened my editor for the back end yet, so I'm going to do that right now for the very, very first time. So let's add another terminal. Training Plone 6 back end code. So here is my VS code instance. Last year, that was actually convenient to have two different editors for two different things, because my laptop screen is pretty small, so it's easy to get confused. But a little confusion is just required. So this is the back end folder that we're looking at now. 
It holds the configuration for the Plone back end — Plone Classic — and the most important part is the buildout.cfg file. Buildout is a configuration and orchestration system that builds, configures and extends Plone, and it can do much more: you could make buildout compile Apache or Solr for you, which is kind of stupid to do, but possible. Here is the main file for buildout, and the documentation shows a minimum example of a buildout that builds a Plone site (reconstructed below, after this section). It has only two parts — buildout is organized into parts — and uses INI syntax, which you probably know: square brackets, a [buildout] section, and one part called instance. It's pluggable and extendable, and it gets increasingly complex when you use it in real life. Parts use recipes, which tell a part what to do; in this case the instance part uses plone.recipe.zope2instance, the default recipe for building a Plone site. If you run the buildout executable against that — the same way you learned when installing Plone — it reads the file and installs and builds Plone. There's a lot in here we won't cover in detail, so let's look at the real-life example for this training. The [buildout] section is always the main section. It extends version pins from the freshly released Plone 6.0.0a1: that's a URL which, opened in the browser, contains nothing but Python package version pins — lots of them — and it in turn extends the Zope version pins, which extend further still; there's a whole long trail of versions you can follow there. It also pulls in our own versions.cfg file with pins for the packages we added ourselves. Then there are a couple of defaults, and the most important option is eggs. Python packages — the requests library, Plone itself, Pillow the Python imaging library — are called eggs; that, by the way, is why the packages folder here used to be called omelette, and in Volto it's still called omelette even though npm packages are not called eggs. If that name survives another five years, nobody will remember that Python packages were once called eggs, because nobody uses the word anymore, and everyone will wonder why Volto and its dependencies live in a folder called omelette. That's history for you. So the eggs option declares which Python packages should be installed for the system we're building, and the Plone egg pulls in everything else itself. It's a Python package that lives on PyPI and on GitHub, and it's a meta package: it has no code of its own, it just depends on more packages. In its setup.cfg, under install_requires, you find setuptools, plone.restapi and Products.CMFPlone — and CMFPlone is Plone itself, the package where the actual code of Plone lives on GitHub. There's a shitload of code in there.
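For reference, the minimal buildout mentioned above, reconstructed as a sketch. The option values are typical defaults, not necessarily the training's exact file:

```ini
[buildout]
# A real project additionally extends the Plone version pins, e.g.
# extends = https://dist.plone.org/release/6.0.0a1/versions.cfg
parts = instance

[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
http-address = 8080
eggs =
    Plone
```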
Products.CMFPlone in turn pulls in many more packages — this time the dependencies are in its setup.py — and a lot of additional packages come with those. So when you install Plone the way you did, you end up with an executable called bin/instance, and it has all the Python packages needed to run Plone — more than 260 of them in this setup. Buildout is there to make it manageable to pin the right version for each of those 260 packages without going crazy, and on top of that to extend the Plone you build with your own add-ons: here are some development add-ons we'll discuss later, and the one you're writing yourself, ploneconf.site. That one will also be tested, because there's a test setup included. And ploneconf.site is actually not downloaded from PyPI, the Python package index, but checked out from GitHub using a buildout extension called mr.developer. There is a lot to learn and to know there; if you don't know it yet, you'll learn it through practice — it would take us far too long to discuss all of it now, so let's jump to the end. There is a lot of buildout documentation, and I beg you to read it; it has examples for very small buildouts and for really big, complex ones, and explains how to work with them. So what are we going to do now — or rather, what has already happened? When you followed the installation instructions you got Plone, but not your own package yet. You can create your own package with plonecli, the Plone command-line helper. I'll go into my backend folder and call the new package ploneconf.site2, just so I can throw it away afterwards — no, please don't autocomplete that, otherwise it would override the existing one. I already installed plonecli, and when I create the package it asks a couple of questions, including your GitHub username and what you want to do with the package; I just answer them as instructed here. You don't have to, because you already have the real package — I only want to show that it works. There is still a small issue: I get an error from isort. I don't know why it needs to isort everything automatically — I wouldn't, but I'm not maintaining that package anymore — so we can ignore this problem. Now, when I open my editor in src, I have a freshly generated Plone Python package. If you know other systems, this is like cookiecutter — cookiecutter is more popular elsewhere, but here it's done with something called mr.bob, whoever Bob is; probably Bob the Builder — names happen at Plone sprints. So mr.bob and bobtemplates.plone create this structure, and that is a Python package that extends Plone. Why does it extend Plone? Because that's simply what it's built for. It doesn't pull Plone in — you need to pull the package into your Plone — but it is preconfigured as a sane approach to extending Plone. I'll delete this test folder again, move it to the trash, and get to the next item.
We already did the same thing for Volto add-ons during installation, when you used create-volto-app — or didn't we switch to yeoman, yo, to create the add-on? Maybe that's not the right name; never mind. Let's have a look at the Python package that was created. I'll use the one I checked out, but it's exactly the same — I generated it this morning around nine o'clock and it's on GitHub. There is a pretty complex folder structure, and first of all a lot of files you will probably have no idea about: what's .coveragerc, what's the bobtemplate.cfg, a license file, a MANIFEST.in, a setup.py — what is all this stuff? You can ignore most of it, because it is mostly boilerplate. One thing you see here is a buildout.cfg, and now you get super confused: didn't that guy just say we have our own buildout for the training, in the backend folder, a file called buildout.cfg that pulls in the other package, ploneconf.site? And now, inside ploneconf.site in src, there is a buildout.cfg again — why? The simple reason: you can ignore it for development. The default when you create such a new package is that it comes with everything needed to be a self-contained package that is testable and publishable on PyPI. To be testable it obviously needs Plone, a test runner, and all the nifty bits to run tests automatically — with GitHub Actions, for example, or Travis CI; here it's GitHub Actions: run the tests against Plone 6 and Plone 5 and so on. All of that is already included in the boilerplate template, and you can ignore it. The only thing you really need is inside src/ploneconf/site — and there we are again: why such a complex folder structure? ploneconf.site is a namespace package, so when I look at the tree of what I just created, I get this deep structure, with all the files you don't need and then src, ploneconf and site. All of it exists only to make the package self-contained, self-testable and self-buildable, and you can ignore it. Every time in this training we say "add a file in such-and-such", that path starts here, inside this innermost folder — the one with setuphandlers, the __init__, configure.zcml and so on. This is where we actually add our stuff; I will repeat that a couple of times. It's all explained in the documentation, so let's just go through the most important bits. configure.zcml is an XML file that registers components and other pieces for your Plone site. If you use FastAPI or other frameworks, you're probably used to decorators to register routes, for example. In Plone you do that sometimes, but more often you use the generic Zope Configuration Markup Language, ZCML, to register, say, a view for a content type, an interface, or any code you want to hook in. We're not going to do much of that.
So I'll skip ahead to the next part, where we have this empty shell that looks horrible and complex. The good thing is: you run one command to create the package, then commit everything that's in there. If you look at the Git history of the ploneconf.site package — I often use Sourcetree for Git because it has a nice user interface — you see that the initial commit has zero features but adds a ton of files; they are all auto-generated, so just add them. The important part is what you do yourself afterwards: your own commits, the small ones where you say "here's a new content type". Those 500 generated files — don't think about them, or at least don't be irritated by them. Okay. I hope that part wasn't too horrible, but it is what it is: Python is a complex ecosystem, especially if you want your stuff tested and automated and you have many dependencies. If you ever tried to install Sentry yourself, you know it's horrible — so many dependencies, so many databases and containers. If you have a complex system, you have to deal with the complexity of building it, and with plonecli, buildout and the bob templates we try to make that as approachable as possible. In the future, with Plone 6, you will be able to just say pip install Plone and get everything — we're not 100% there yet, so installation is still a bit involved. Okay, next chapter: content types. It says "Dexterity I: content types" — why Dexterity? Because in Plone 4 we also had content types, obviously, since it's a content management system, but there were two different frameworks: the older one was called Archetypes and the newer one Dexterity. You can forget the name Archetypes right away; it was only important to know that Dexterity exists as long as Archetypes was still around, and migrating from Archetypes to Dexterity is a complex task. If you start with Plone 5 or 6 you only have Dexterity, so what we're really talking about is just content types. From Plone's perspective, a content type is an instantiated object of a class defined in Python and stored in the ZODB database. From a user's perspective, a content type is something you pick from the add menu, and it gets created for you. So it has to be registered somewhere in Plone that a content type — like "Page", which is called Document in the back end — exists, and the schema of that content type has to be registered too: remember that the news item has a different schema than the document, with a lead image above the rich text and several other fields. And for the visitor, a content type is just a page to look at. So there are three parts: first the factory information, which tells Plone that this thing can be created by a factory — remember, the factory; second the schema, which defines which fields are available; and third the visual representation.
So each content type usually comes with some kind of visual representation to the user. It doesn't have to have one, but in our case, we want one. So that is the important things about a content type already said some things about that. So in clone, as I said, as so is very configurable, it even goes a long way to make sure you can modify existing content types and add new content types through the web. So the first thing we'll do, we'll add a new feature to a existing content type. The news item, let's look at if you go to the control panel. Okay, I'll go back. So you might have missed that. If you go to site setup, this is the control panel here to the content type control panel. And there you have all these types that are available, including the news item, click on news item. And there you see the news item and the name and all this information. We're not going to change the name of the news item. We're going to go to the schema of the news item. We're going to not going to not adding the FTI but the schema, not the factory type information, but the schema and you actually get a schema editor in Volta, but you can add new fields. So what we're going to do is we're going to add a field of the type yes, no, so it's a Boolean field, you can check stuff and say, is this hot news or is this, I don't know, cold news. So let's do that. I'll add a new field. No, I'm not going to add a new field set. I'm going to add a new field here below that. And it's a yes, no field if I can find that. Where is my yes, no field? Come on. Type too many options here. Okay, there is a scroll bar and at the very end is the why. Yeah, maybe a small UI issue with nested scroll bars. Okay, the title is hot news for this field. So is this hot news? Yes and no. No description. It's not required, although if it would be required, everything would have to be a hot news, which doesn't kind of defeats the purpose. And here's my hot news field. And now, so what do I have to do? Yeah, I have to save. And then I'll open a the site in a new tab. And I go to my news listing here, and say the conference website online, and click on edit and guess what, it's already there. Isn't that super dark magic? So I can in I can without writing a single line of code without restarting the instance. I can add a new field to the schema to or model if Django if your Django developer, for example, that would be a model. And then you can add a hot news field and store this data. And it is stored in the database. So, okay, let me take you a little bit down the rabbit hole. So here is my news item. Let's go to the back end quickly. So the classic thingy here, we have news conference website online, I can edit that, I can see my button here, I can view that. And I was always saying, okay, this is some is the instance of a Python object. Let's make sure that it actually is PDB, if you don't know PDB, that's the Python debugger. And we have a add on in our build out that allows us to call PDB on any object. What happens, I don't see anything, why don't I see anything? Because I need to go to my back end, which runs in the foreground. And here I have a PDB. So self context is my news item. And my news item can be inspected. And my news item has a new field that I'm having programmed, hang on, that I added in the editor here. And it's called hot news. So there's a space in between, you can't have a Python, Python attribute with a space in between. Well, let's look at hot news self context. Hot news, no, minus wouldn't work as well. 
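The attribute hunt going on in this debugger session can be shortened with a bit of introspection. A small helper, assuming nothing beyond standard Python; the fragment to search for is whatever part of the field title you remember:

```python
def find_field_attributes(obj, fragment="news"):
    """Return every attribute on a content object whose name contains
    the given fragment, together with its stored value.

    Handy in the pdb session shown above, where guessing the exact
    spelling of a through-the-web field is error prone.
    """
    names = [name for name in dir(obj) if fragment in name.lower()]
    return {name: getattr(obj, name, None) for name in names}


# In the debugger:  find_field_attributes(self.context)
# The exact attribute id depends on how the schema editor normalized the
# field title (spaces become underscores, casing may be preserved).
```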
So maybe underscore. No, not that. This wouldn't work again. So how can we see that? Can we see that? No, that doesn't seem to work. Where is my hot news? Why can't I see that? Hang on. Am I on the right object conference website online? Yes. And it has hot news as a field. How is that called in this case? Let's just use the capital H. Who came up with that idea? Self context. Here. That's, yeah. Here it is. Hot news, too. It's a, it replaces the space with underscore. And this is my attribute. And it's actually stored persistently in the database. So, yes. We not only magically created a field, it is actually stored in the same way in the database. I'm not a huge fan that in Volto, new fields have uppercase characters that seem stupid, because in the back end, these are all lowercase automatically. But yeah, that's tiny, tiny issues. Okay. We did that. We can edit that. We save it. We can look at plenty of stuff here. We can even create complete new content types through the web, not only modify existing ones, we can just say add talk and have a talk content type there. And do I need to restart my site? Do I need to do anything? No, it's already there. So it's beyond imaginable what dexterity or the content types in Plone allow you to do. You can create custom types through the web, but we're not going to do that. I'm going to delete this thing because we are not mouse pushers. We are developers. We want to do stuff and have people check the Git history in two years from now and say, hey, what did you do there? That was horrible. Why did you do that? So we're going to write a content type in Python and register that in a Python package as a dependency of Plone. So why do we do that? Not only because we want to show off what we know, but because our projects tend to have a long lifespan. We're doing sites for universities. I don't know. Plone runs the Brazilian government website and a lot of UN sites and stuff like that. So it's not your neighbor bakery. These projects have a lifespan not of years, but sometimes of decades. And they have tens of thousands to millions of content and a lot of custom code. So they need to fulfill certain requirements for stability. And if you just create your content type through the web and click around a little until you achieved your goal, if you leave this project and five years from now a new developer comes in, he will be completely lost and he will hate you, which may not interest you, but his life is then miserable and it should be in your interest to make the life of other developers great. So we're going to do that in Python and not through the web. Oh, God. It's already half past six. Katja, what do you think? We're way behind schedule. Should we make a short break before we do another half hour to actually create these talks? Last year we were quicker. I had the feeling. What do you think? You want to make a break? We have only half an hour now from now. I think people are going to break. Yes, but maybe just a short bathroom break, five minutes. Maybe I don't know what time it is for you. You can get a beer depending on your time zone. Five minutes? Good. Let's have a five minute break. Let's actually really meet in five minutes so we can get the most out of the next 30 minutes. Again, if you have questions, please put them in the Slack. We'll try to discuss them. Am I too? Someone said I was not loud enough. Sorry. I didn't see that. See you in five. I pull. Maybe you need to restart your browser or reload the page. 
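The field we just added through the web (and then dug up again in the debugger) corresponds to a plain Boolean field in a file-system schema, which is exactly the "in Python, not through the web" approach argued for above. A minimal sketch; the interface and field names are chosen here for illustration, and wiring it to the News Item would be done via a behavior, which the training covers later:

```python
from plone.supermodel import model
from zope import schema


class INewsExtras(model.Schema):
    """File-system equivalent of the 'Hot news' field added through the web."""

    hot_news = schema.Bool(
        title="Hot news",
        description="Check to highlight this item.",
        required=False,  # a required Bool would make every item hot news
        default=False,
    )
```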
The developer tools, when they're freshly installed, may need a little kicking. Here are components and then you can inspect. There's two inspect buttons. This is for HTML and this is for React components. I always click the wrong one. This should then show up. You need, for some reason, this scrolls very far to the right. Then you can find that. Let's continue with the talks. We are way behind schedule, but that's just what it is. I don't know where we lost so much time, but it is, yeah. Okay. We will register a custom type. Why do we do that? Excellent, Paul. Good. So since we're organizing a conference and we could have talks as documents and say, okay, it's topic, speaker, whatever, information is just in HTML. But if you want to sort your content by speaker or link to that specifically or order that by date or day or room or stuff like that, it is good to have structured content. Data is just, it's perfect as structured content. Talks is perfect for that. So first, I said we need three parts for a content type. First is the type factory information. So we tell Plone that there is a content type called talk that should be available in Plone. So to do that, we go into Plonkconf site and that means SRC, Plonkconf.site, SRC, Plonkconf site, and then in profiles. So from now on, we say go into profiles or Plonkconf site profiles. This is where you end up because the Python namespace actually ends up here also in use the default profile, not the uninstalled profiles. The default profile is where configuration ends up that you put configuration in there that is applied upon installation of your add-on. And here you create a new file called types.xml and you paste the content from this snippet here. So, oh, God, why is this not Python? Can't we do this in Python? This is XML. This is something called generic setup. It's one of the extension approaches for Plone. And everything you have something in XML and you make a change in XML, this is only applied upon installation of an add-on. So after I save that and I reinstall or install my add-on, this code is then read. Restarting, reloading the browser won't help you. You need to install this package because then the generic setup profile is applied. And this generic setup profile will raise an error because it tells there should be a type called talk in my profile. And it's not. There's only document for some reason, for a very good reason. But we need a talk XML as well. And that lower case for custom content types, I always use lower case. You can do what you want. But I think lower case is a good best practice. You copy and paste the whole FTI here. So I haven't learned anything right now. That's what you probably say. That's not true. You've learned something very, very important because you can copy and paste code. And you can copy and paste code from the location where it is always updated. Not every month. But it uses these same defaults. And it is what I do at least twice or three times a week when I write my custom client projects. I go to the mastering plan training documentation, go to the chapter that is close to what I'm doing and copy the defaults that are in there and make the changes that are required for my client project. Because I'm just way too lazy to type all this stuff. And I don't want to memorize that. And nobody wants to. There is a lot of documentation about the mastering plan training is certainly the most up to date and the most comprehensive one if you want to deal with creating content and modifying content types. 
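Once the profile containing types.xml and talk.xml has been applied (that happens when the add-on is installed, a step shown a bit further down), the factory type information lives in the portal_types tool and can be inspected from Python. A rough sketch; the attribute names follow the Dexterity FTI object, and the type id "talk" is the one registered by the copied XML:

```python
from plone import api

types_tool = api.portal.get_tool(name="portal_types")
fti = types_tool.get("talk")

if fti is not None:
    # These mirror the entries you copy-pasted into the FTI XML.
    print(fti.title)      # human-readable name, e.g. "Talk"
    print(fti.klass)      # dotted path of the Python class to instantiate
    print(fti.schema)     # dotted path of the schema interface
    print(fti.behaviors)  # behaviors enabled for this type
```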
And it's basically the same approach as like, plan does it. You don't fork plan. You also don't fork the mastering plan training. You just take the bits and pieces that you want for your projects and you use them and then you change what is required in your projects. This time we don't have to change anything obviously because we specify a content type called talk, which has a title with an uppercase t. It has an icon. It has a factory and ad view and it has a class, a Python class that is used to instantiate the object that then lives in the database and it has a schema that is both not yet in existence. And it has a couple of behaviors. We'll get to these later. So this is your FTI and it's basically the first part is done. You registered a content type for PLOM. The second part is the schema. You need a schema and the schema can be done in multiple ways. Again, we can use the browser to add fields until we have our schema and that is absolutely okay to do that if your project has a limited lifespan or if you do rapid prototyping. But if you have a longer running project, it would be better to do that in the file system. And again, there's two ways to do that. One is again in XML, but I hate XML. XML is written for robots. I'm not a robot. I'm a human being. I want to read Python. I want to write Python because Python is meant for humans. So we're going to create a Python module inside our package. A module is a folder with an initpy file if you don't know that. So here content, that folder obviously already exists. For me, for you, it should not initpy. So now we have, there is a module called content. It doesn't contain anything yet. And in there, we create a file called talkpy. That then is a, talkpy. And that will hold the schema. I'll just copy and paste the whole thing and save it. And then I'll go through that step by step. So here we define the schema. It is basically a Python representation of what we saw. Not here, but here. This is the user facing representation of the same thing that we're looking at now for a different type, obviously. So we define a schema, which is an interface. I'm not going to go into interface theory, but it defines a contract, basically. The contract that talks have these attributes and can do these and such and such a thing. So this is the interface and it's defined, it is referenced here in the class. So this here, this is the interface with the schema. And then there's class again with a k because c is, would be reserved, reserved word at some point when you load that and you get error messages. So that's why it has a k. And then there, hang on, here at the very bottom is a talk. This is the instance. So when you create a talk then or a news item, this is the object or the news item in that, in the other case, would be instantiated and it implements this interface so that it can store these, the values that are added to the fields in the, the, the widgets in the browser and can store these, these values as attributes persistently in the database. So, okay. So this always, again, you don't have to memorize all of that. When you write your own content type, you go to the mastering, the training and you copy and paste this thing here, the whole thing, if you want, and you throw away what you don't need and you add what you need. I'll show you how you, how to find what else you need in the next chapter. So here we have a couple of fields type of talk and they follow a, they have a certain, each field has a certain type. 
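Boiled down, talk.py pairs a schema interface with the class referenced in the FTI. Here is a shortened sketch with a few of the fields that are discussed next; the titles and vocabulary values are illustrative, and the full version lives in the training chapter itself:

```python
from plone.app.textfield import RichText
from plone.dexterity.content import Container
from plone.namedfile.field import NamedBlobImage
from plone.supermodel import model
from zope import schema
from zope.interface import implementer


class ITalk(model.Schema):
    """The contract: which fields a talk has and stores."""

    type_of_talk = schema.Choice(
        title="Type of talk",
        values=["Talk", "Training", "Keynote"],  # stored as plain strings
        required=True,
    )

    details = RichText(
        title="Details",
        description="Abstract of the talk",
        required=True,
    )

    speaker = schema.TextLine(
        title="Speaker",
        description="Name (or names) of the speaker",
        required=False,
    )

    image = NamedBlobImage(
        title="Image",
        description="Portrait of the speaker",
        required=False,  # the binary data is stored as a blob, not in the ZODB
    )


@implementer(ITalk)
class Talk(Container):
    """Instances of this class are what end up in the database; the FTI's
    klass entry points here and its schema entry points at ITalk."""
```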
You remember, we added the hot news field, which was a yes, no field or Boolean would be like this. In this case, it's a choice field which stores strings. So text here, one of these values. Here we have a rich text field. It's in, it's imported from a different place, the definition of rich text field. But again, this is something you don't have to memorize. You can copy and paste that from dexterity documentation. That is actually the next chapter is the dexterity documentation itself. And each field has a title and what is the title? It's not super unexpected. This is the title of the field here. The title of the field is summary. Here the title of the field is hot news. A description of a field is, for example, this here, this is some help text. This is the description. You can configure various fields as required. So there has to be something in there. Either a box needs to be checked, we need to add some text and so on. Here's a text line field. That's a very simple one. Speaker, where you can just add a line of text as title description is not required. And so on and so forth. And there's an interesting one. There's block image, name block image, which makes sure that the file is actually stored not in the database, but only a pointer to that file stored in the database. And the file itself ends up in the file system where binary data is best stored. So after I save this file, I need to, what do you think would I need to do? So this is, we extended blown in two ways. We added a factory type information for talk here. We referenced that factory type information in the portal types tool. That's how it's called. That we extend here with type. And then we added a Python module. Python modules are only read on startup. So reloading the browser wouldn't help us. So at least we need to restart our instance. Just stop it with control C and restart it with instance FG. Remember that you need to do that for the back end, not the front end in this case. And then we, as I said, the XML files are only read upon installation of the add-on. So here we have a problem because the add-on, Plurcon site that we created here is already installed. Remember when we created the site, we checked the box at the very bottom of the list of add-ons that are available. So it's already installed. How do we reinstall that? Actually, the way to reinstall that is to uninstall it and install it again. And at this point, it is actually still safe to do so. It will at some point no longer be safe, but we'll see that. So where are my add-ons? Here are add-ons. So in the list of installed add-ons here, five installed add-ons, there's Plurcon site. And I can say, just drop that. Remove it. And Plurcon site, just reinstall it. Install it again. So I clicked my button here. And a couple of things happened in the back end. So you see, it was applying the profile, Plurcon site default. Remember the folder called default. Held this. Then it installed roles permissions. Why is that? Did we do anything there? No, we didn't do anything there. But there is a empty roll map XML that can hold custom roles and permissions. And that file was read and applied. So nothing changed, but it said, hey, there's a file. I'm going to read that. Catalog was imported because there is a catalog XML. Again, empty. But there is more types tool imported, document type import and more, and talk in type info imported. So it obviously saw that we have this. And it saw that we have these files, these two files. And then it applied these profiles. 
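Uninstalling and reinstalling through the add-ons control panel is one way to re-apply the changed profile; the same import can also be triggered from code or a debug session via the portal_setup tool. A sketch, assuming the package and profile are named ploneconf.site:default:

```python
from plone import api

setup_tool = api.portal.get_tool(name="portal_setup")

# Re-runs every import step of the add-on's "default" GenericSetup profile,
# re-reading types.xml, talk.xml, rolemap.xml, catalog.xml and so on --
# the programmatic equivalent of the uninstall/install clicks shown above.
setup_tool.runAllImportStepsFromProfile("profile-ploneconf.site:default")
```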
So this configuration is now somehow in the database. And it's, yes, it is visible in the front end. When we move this thing, when we move to the to the dexterity control panel, we can see our new type here. Talk. Here it is. Similar to when we would have added that. And we can actually see the schema that we defined. So this is the schema. It's all there. And it's not only visible in Volto as a admin where you can modify that. But again, in the back end, in clone has, clone has a, the classic has a equivalent of that user interface. Not going to show you that. But the types tool that stores that configuration actually has a user interface in this soap management interface, portal types. And here is my talk content type. And here is my reference to the schema and the behaviors and the class and whatnot. So a lot of stuff is in multiple places. And that is because clone wasn't built last year. It has quite some history. And this is where you see that various layers on top of each other. Okay, to make, to finish that, let's go to schedule and add a talk. Here it is. Dexterity. Some Latin nonsense, please. A type of talk. Here we have, I said it's a choice field with strings text, pick from here you have rich text field, you can add some rich text. Add that and yeah, edit that. Add stuff. Audience again, a choice field. In this case, a multiple choice field. Beginner advanced professional dexterity is for everyone. A field for the speaker. Or let's go back into history. Martin has barely one of the brains that dreamt up dexterity. No idea where he works at the moment. Let's pretend he works for me. That would be fun. Martin had clone.org. Probably shouldn't have a space. Let's see. Yay. So this email field, obviously, isn't a text field because otherwise it wouldn't have complained that it's not a valid email address. So having the schemata is not only cool to show off that you know how to do that, but with certain fields you get data validation, input validation. And that is obviously super important. So you have valid data in your database and that you don't end up with invalid data. So if, I don't know, clone, if you later try to send an email to all these people who registered as speakers, you don't get error messages. Philip, that's a question. Yes. Let's see. Paul, it is stored in the DB, but you should have the schema. Oh, the first one. If you create the schema through the web, it gets stored in the database. Yes, is that why you say it's not good for long projects? Yes. The schema is, if you create it through the web, it is stored as XML in the database. And there is a way to export that. You can export a schema into as XML and import that into your Python package to put it into version control. So that is a totally valid approach. And the earlier versions of this trainings actually did this. So we created the content type talk through the web, exported the XML into your Git repository and committed that to Git. But as I, it's a matter of choice. I much prefer Python to XML. It is feature equivalent a schema, but there are tiny things that you, sometimes you need to use in more complex projects that you can't do in XML, for example, default context aware defaults. So a method that runs to define a default value for some field that takes into account where this object is created. And so on. A couple more examples like that. So in Python, you can basically do everything. So that's why I much prefer that. And Paul had another question. He answered that question. That is actually nice. 
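One of the Python-only features just mentioned, a context-aware default, looks roughly like this. The field and the logic are invented purely to show the mechanism:

```python
from zope import schema
from zope.interface import provider
from zope.schema.interfaces import IContextAwareDefaultFactory


@provider(IContextAwareDefaultFactory)
def default_room(context):
    """`context` is the container the new object is being created in."""
    # Invented logic: reuse a preferred room set on the container, if any.
    return getattr(context, "preferred_room", "Main hall")


# Inside a schema, the field then points at the factory:
room = schema.TextLine(
    title="Room",
    defaultFactory=default_room,  # evaluated per container at creation time
    required=False,
)
```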
Okay, let's add some website here. I think like that. Okay, I'm not Martin. I should not pretend to be Martin. He's so much smarter than me. So let's pretend I will do that. Talk. I can upload an image of biography. You see, obviously. And the third part that is, that I meant said a content type is made out a view representation. You get that automatically. But, and that's a big but. This is, we don't see any of the exciting data that we just added. Only a Plon classic so far has that feature. So when we go to schedule and see the Dix 30 for the wind talk, I can see a default rendering of the values in that schema, which is often good enough, especially for, I don't know, internal use as intranet stuff like that. But as a for visual representation to client or conference website, this is not like this is like Excel. Just dump me a Excel sheet. What do you think you're doing? That's not okay. So don't do that. We in the next chapter, we're going to much improve that. So we already created this type. We tested this type. You see the user interface is slightly is slightly different. But if we change this to a keynote, for example, and save it. And we added that again in Volto. We have the same data. So here's the keynote now. So this is, again, this is the react front and talks to the same back end. As Katja already said, at some point, you need to decide which user interface you need to take. And you can't really switch back and forth in like one folder in Volto and the other in classic. But the data is in both places. We're going to take care of the visual representation of the data in react only. So what else is there to say? Yeah. Ignore that. We created a first content type custom one in Python. So it's readable. It's great to test and to repeat the process and change it for the next iteration. You can control all kinds of data that is stored in the database for these talks. You could extend the schema now in the later stage. We can do that. And if I said this is what you learned was not how to actually write a schema, but you learned how to copy and paste the best parts for writing a schema. And the next chapter, we have five more minutes to discuss that, is the dexterity reference documentation about all schemas that are fields that are available for you when you create your content types. And this is where you need to go when you have a requirement and you need to fulfill that. And this is how do I do that? How do I do this? These are all types of fields that are available. And they are all which is kind of crazy. They're all in one content type. So there is one content type called example in this schema. And it has various field sets for various field types. So these are field sets here, different field sets. And you can, so they're sorted by, you see these sort dates, date fields have one field set, text fields have one field set and stuff like that. And here are, I'm not going to go through them. Don't worry. Here are examples for all kinds of fields that you will probably ever need. And there are plenty of those. And to make things even better, there is a screenshot for how these field sets, also also these fields look like in Plone Classic, all of them. And also how they look like in Volto in Modern Plone. Six. So you can check them. They, some look a bit different, handle a bit different. That's all discussed here. And on, to make things even better, there is a Python package that has this content type and it's installable. And it is, where the heck is it? 
I should have added a link to that. Example. Come on. Example.con, I obviously forgot to add the link. It's called example.content type. It comes in two branches. One is called Volto and one is called master. Because some fields don't work in Volto. They are fields that you don't usually use. For example, time delta, which stores the difference of time, which is why would you need a field for that? Yeah. So, and so this is where you can, you can install that. You can check that and look at the, here's the schema for Plone Classic and here's the schema for Volto, which has a couple of fields are commented out because they don't work. For example, the data grid field, you can use that, install that and play around with these packages or you just copy and paste the schema from here. There is more. There are third party packages in here. The data grid field, which is in Plone Classic and the excellent addition to Plone to add an arbitrary number of lines of, again, schema defined data. So you have, for example, the field would be talks, but the talks are not content types itself, but a simplified schema with title, speaker and type of talk. And you have an infinite, can add an infinite number of rows of that data to that. And something very similar is the mixed field that Katja, I think, created where you have something, again, a data grid where you can add rows of data that are defined in a schema. And the code for all of this is here. You can have that. You need the object list widget to display to be able to import that data. And then you have these kinds of items. You will actually see these at a later stage in some of the blocks in Plone. For example, no, I'm not going to jump ahead there. There's a search block or actually the collection. The collection has, you can have defined criteria and each criterion, you can have infinite number of criteria is similar to a data grid field or these kinds of blocks. And that's the code for that. Here's some links to widgets. And there is also information in this chapter. This is why you really need to keep this bookmark, because it has all the information what you can do on top of that with the schema, because I went over that. Here's a directive to change the widget of the choice field to a radio field widget to make that radio buttons and not a drop-down list. So you might wonder, yeah, why did I see a drop-down list then? Because this radio field widget doesn't work in Volto yet. There is only a radio field widget for Plone Classic. So when you edit a content type, this content type in Plone Classic, you get a different widget. This one instead of a drop-down widget, because I think it is stupid to have a drop-down if you only have three options. If it's 50 options, obviously you want drop-down and not radio buttons, because you'll have a full page of radio buttons, which is horrible. So you can define the widgets here. And there is more. There's validation. You can constrain the data. You can have methods that are then called upon entering the data automatically. They're called. And they can raise an exception if the date, for example, of an event is after the start date of the event. Or you can't start after you already stopped, obviously. Then a default factory or default values. What else is there? There's an invariant to validate data, a context aware. That's it. You can do basically everything with these couple of snippets. So we've reached the end of today's training. 
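Before the wrap-up, two of the extras just listed, sketched briefly: forcing a radio widget (which only affects the Plone Classic rendering, as noted above) and an invariant that validates across fields. The schema and field names here are made up for illustration:

```python
from plone.autoform import directives
from plone.supermodel import model
from z3c.form.browser.radio import RadioFieldWidget
from zope import schema
from zope.interface import Invalid, invariant


class IExampleExtras(model.Schema):
    # Radio buttons instead of a drop-down in Plone Classic; Volto
    # currently keeps its own default widget for this field.
    directives.widget(audience=RadioFieldWidget)
    audience = schema.Choice(
        title="Audience",
        values=["Beginner", "Advanced", "Professional"],
        required=False,
    )

    start = schema.Datetime(title="Start", required=False)
    end = schema.Datetime(title="End", required=False)

    @invariant
    def end_after_start(data):
        """Cross-field validation: you cannot stop before you start."""
        if data.start and data.end and data.end < data.start:
            raise Invalid("The end date must be after the start date.")
```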
And next up, tomorrow morning, no, not tomorrow morning, tomorrow afternoon, at least for us Europeans, we will realize that our schedule is, the talk is nice, but it looks terrible. We want to display the data. So Katja will walk you through creating a view for this talk in Plone in Volto with the new front end. And if you're so inclined, you could also look at the old version of that, where we discuss creating view for the same content type in Plone Classic, which obviously works completely different. You write page templates in Chameleon, while here you write your page templates, which are not page templates as view components in React. So that's an exciting chapter for tomorrow. If you want, we can, if you have any questions, we can discuss them now. We don't need to go right away. Obviously, there at some point, there is dinner in Sorrento, I think at 8 for Katja. But if you have questions or feedback, ask it now or in the chat and we will answer that tomorrow morning before Katja starts off with this chapter 23. Yeah, I think we should start tomorrow with 10 minutes questions and answers. But if you have no questions, that it would be good, then we would be better prepared tomorrow. Yeah. I guess we drown you in content. Don't be shy. If you have questions, you can also ask us by email or in the Slack. We'll see that tomorrow morning. Slack will be open all day, all night. Obviously, stop sharing my screen. Maybe for those who are still there, a sneak preview as a motivator for what's coming tomorrow, we're probably not going to get as far as we did last year. But after we created this, the visual representation for talks, you will probably learn, unless you already know all of that, a lot about React components and what nice tools you can use there to make beautiful pages, renders. We will learn about behaviors to extend the content type. The content type is there to extend clone with a new content type, obviously, and behavior can be there to extend that content type. It's like Lego. You put these little Legos. It's even better than Lego. I don't have a reference right now. But we can extend this piece with something that already exists to give it more features and more functionality without writing a single line of code, which is extremely powerful. The best code that is created in this training is the one that is not written, but is reused by just plugging the right components on top or together, just connecting the right dots. That's what we're going to do in the chapter 24, also 25. We're not going to write a lot of code there. More complex code will be the listing view for talks. That's more React for you. We'll discuss, I love chapter 30, where we do a small change to give a lot of power to existing content types again. If we can get there, we'll add customizable functionality, a control panel where you can configure the site. Here in clone, you can go here to the control panel and configure all kinds of things. Obviously, you can do that with your own add-ons to say, I don't know what types should be listed in what kind of listing block and stuff like that. You can do whatever you like to extend flow. That's super powerful. That's the whole idea of this training to teach you about these little components that you can hook into to reuse them. 
Because Plone is already so big, for most of the requirements you get from your projects or your clients you won't have to write a single line of code, because they either exist as an add-on, or as a feature that you don't know yet, or as a combination of the two. You just combine this add-on with that feature, and that's the magic. I don't see any questions. Katja, what's for dinner in Sorrento? I don't know. Sorry. The restaurant in Sorrento is excellent. If you ever get a chance to go to the Plone Open Garden, which usually happens in spring and is now not the Plone Open Garden but the Plone Conference Fan Zone, the rooftop restaurant with a beautiful view of Mount Vesuvius has gorgeous food. It's just terrible: you come back a couple of pounds heavier than when you went there, probably. I always do. Okay. Any last words for today? Thanks for bearing with us. Thanks for listening. I hope we didn't bore you to death or overload you with information. The good thing is you can always re-watch the video on YouTube, make it slower or faster, or skip chapters. The second good thing is you can read all of that, or most of it at least, in the online training. Thanks for tuning in, have a great evening, and talk to you tomorrow. Good appetite, Katja. Thanks for attending and hope to see you tomorrow.
What you will learn: The core technologies involved in Plone 6 programming and how to write your own add-on package and customize your Plone site by writing Python code and React components. Prerequisites: Basic Python and JavaScript knowledge. 0:00 Introduction 9:25 The Case Study 13:18 What is Plone? 22:12 Installing Plone for the Training 25:58 The Features of Plone 1:08:05 The Anatomy of Plone 1:13:58 Volto Basics 1:21:52 Configuring and Customizing Plone "Through the Web" 1:27:36 Break 1 1:37:10 Customizing Volto Components 2:20:50 Semantic UI 2:26:59 Theming in Volto 2:40:35 Extending Plone 2:44:18 Extending Plone With Add-on Packages 2:46:21 Buildout I 2:54:57 Write Your Own Python Add-On to Customize Plone 3:03:25 Dexterity I: Content types 3:19:15 Break 2 3:20:33 Dexterity II: Talks 3:45:00 Dexterity: Reference 3:52:28 Preview of Day 2
10.5446/55895 (DOI)
All right, hi everyone. It's time for the final talk slot for today. For this one we have Victor Fernandez de Alba joining us from Sorrento. Victor is the CTO at KIT Concept and has been around the Plone community since 2006. Victor and I have served together on the board for the last few years. He has done a lot of work that has gone into core Plone, including Volto, Plone 6, and helped build the current PloneComp.org. So Victor is going to be showing off the new Plone style bike called Quanta. So go ahead, Victor. Thank you, Chrissy. Yeah, let's start with a little bit of history. For those who don't know him, this is Albert Casado. He's a former colleague of mine back in my university times in the Barcelona Tech University. And he's a long, long time, long friend of mine as well. He's been contributing to the Plone community since I dragged him into it also back in 2014. We both implemented together the Plone 5 Barcelona theme here in these premises when we are now in Sorento in the Plone Open Garden in 2015. So one day he called me because he wanted to show me one nice thing that he done one day. So we met in Barcelona and we went to have an evening snack together involving Javogo Ham, of course. Yeah, we always do that when you go together in Barcelona. And he showed me what he did in the free time for the next generation of Plone. And it was Pasta Naga UI and Pasta Naga icons. I was blown away. I mean, it was amazing. He showed that he did no more, no less than 250 icons back in the day. And then the whole Pasta Naga UI was amazing. Right? So it was not until the late 2017 that we managed to implement Pasta Naga UI back in the original Plone React code. Initially Plone React was born in 2016 and it initially had and used the semantic UI before CSS. It was a little bit ugly back in the day, but yeah, I said late 2017 we managed to sneak Pasta Naga in Plone React. And lately when Plone React eventually became Volto, then Pasta Naga UI was finished and was delivered with Volto itself. This happened, as I said, just in time for the Tokyo Plone conference. And oh my God, yes, Albert has a passion for designing logos. And he delivered through the years. So the first logo that he made us was for the Plone Mosaic Spring in Barcelona that was also in 2014. And to be honest, it's one of the nicest logos that he ever done and is probably one that I like the most. Although sorry, it doesn't follow the logo code of conduct or guidelines, right? But still, it's one of my favorites. Oh, magnificent. So and then he followed with the Guillotine logo one. And again, the Plone conference logo of 2017, which I also loved. And he came to me not long ago and he told me. So the truth is that I only wanted to update Volto's logos because I didn't like this mass thing that much. And I always not 100% sure of it. And somehow I ended up by creating Quanta as well. This is the Volto updated logo that we had from that time. And we not only get that, but we also got Quanta. So what's Quanta? Quanta aims to be an evolution of Pasta Naga UI. Albert created it to keep up with the design trends and the modern UX perspective and also taking into account the latest feedback that we had from clients. So and it became a reality and this is how it looks like. So after the story, welcome to my talk. I'm Victor Fernando de Alva. I work at the concept and today I will present to you Quanta, the new Plone style guide. So what's a style guide? 
So we all know and we heard that and we also probably took a look at different style guides, but probably without any noticing or. But the style guide is a document, is a place, is a place of reference, a single source of truth that developers and designers alike used to check out as a design baseline and look at field for a given project. So yes, a style guide is documentation. And the main goal of this documentation is to ensure the communication because between teams and these and these actors in a team, especially developers, designers and UX experts. Why a style guide matters? It allows us to deliver faster. So not only projects, but also tasks also. I don't know, small pieces of our content or pages, right? It ensures design consistency through all our pages and all our the elements that form that pages. It's a one stop guide of reference, especially for new team members that doesn't know anything about the project and they have this point, this entry point for look into how the project look like and how is the relationship between all these small elements of the project. The code and the design are linked, right? So you have a reference on how the element should be coded and then transferred to the code to achieve what the element promises. Also it allows us to standardize everything that we do. So every time that we got a requirement for the client, we go there and we say, okay, this requirement, this is element that the client is, want me to implement in their project. So I already have something that looks like it or can I made up the thing that the client is requiring me up to from other basic elements that my design guide have, right? So I can grab these and these and these and these basic elements, put them all together and form the thing that the client is expecting without me having to reinvent the wheel and came up with new things, right? And it also allows us not to repeat ourselves. So it allows us to be DRI and use code, reuse code through not only in the same project but probably amongst different projects. Then you all will also heard about design systems, right? And you could ask, yeah, how about them? So a design system is a single source of truth that holds record of the code standards and the details of various of different basic blocks, components and elements and patterns of a site or an application. And this collection of things are meant to be reusable and combine themselves in a way that they play well together. So if I put two elements, one decides another, they are not going to cringe and they're going to look fine, right? So I have this bunch of bricks. We could think about bricks and then we can put them together to build our sites. So this collection is a collection of pre-built blocks and ranging from basic to complex, right? Because we can have an element that is as small as a button or as a link, right? To bigger ones that are composed of different of the basic ones to, for example, form a bolt-to-block, right? They play well with each other, as I said. So if I put one after another, they look fine. And they also allows you to standardize code and don't replete ourselves. So these approach, the blocks approach works extremely well with Plon6 blocks model, of course, because Plon6 is made of blocks, at least the page, but not only the page, the pages, but also the elements that compound the frame, like the hider, the footer, another helper element or support element, right? 
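To make the mapping to the Plone 6 blocks model a little more concrete: a page edited in Volto stores its blocks as plain data, roughly like the sketch below. The ids are shortened and the block types and fields are simplified compared to what Volto actually writes, so treat this as an illustration of the shape, not a specification:

```python
# Conceptual shape of the `blocks` / `blocks_layout` fields on a Plone 6
# page -- each entry is one modular building block, ordered by the layout.
blocks = {
    "uid-title": {"@type": "title"},
    "uid-intro": {"@type": "slate", "plaintext": "Welcome to the conference"},
    "uid-talks": {"@type": "listing", "querystring": {"query": []}},
}
blocks_layout = {"items": ["uid-title", "uid-intro", "uid-talks"]}
```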
And nowadays, websites are designed precisely following this approach, the modular approach. So the websites are modular, so we can break the design of a project into these small blocks that are from our design system is composed. And then just after these slides, we only have to put the pieces of the puzzle together, right? So immediately this approach maps directly to how Plon6 behaves, how Plon6 works with the blocks model. So Quanta is a style guide to Oris a design system. And it's both, it aims to be both, because we need it to be a style guide, where the basic approach of how to, the elements behave and how they should look like. And this design system that has this collection of blocks that play well with each other, met, and we can use them as reference for building not only Volto itself, but also on Plon6, but also our projects, right? So yeah, this presentation should have been mid-Quanta, Plonstyle guide and design system, but it was too long and I wanted just to keep it simple, because at the end, people know about Plonstyle guides, sorry, style guides and not design systems, right? But yeah, you're correct. So using this new approach, using a design system, show us that through over time, we had different workflows to communicate with designers and UX experts, right? How it started, it started with PDF files or image files, right? We all been there, where you had to got the color of a specific thing using a color picker and then, yay, yay, this is one. And then grab the X value and then go to the CSS and put it there. Then it bold and then somehow the designers got lazy and they just send you the Photoshop files and then you had to have Photoshop in your machine, of course, and then, yay, with Photoshop, you could use the color picker of Photoshop and then look how the things were made and Adobe Illustrator did that. But right now, there are tools that facilitate and ease this communication like Figma or Sketch or Zeppelin or Adobe XD, right? So the design system fits very well into these new workflows, right? And we have to learn how to adapt to times and start using also these tools, right? So our style guides or our design systems not only become something that is static, but they are living, they are alive. And we have also other kind of tooling like Storybook or Style Guide Dist in the real world that allow us to create this living style guide. They use the actual implementation of the components, of the real components that we are using, where our design system is made of. And by using the real components and the real CSS, we have the ability to have a playground that the users or the members of the team can play with. And you have live components that you can play with and make them behave differently to test them or to look if the client could achieve their requirements with these components, right? And for this, we have quanta.pasta.nabagas.io, which is the first initial implementation of this style guide. So first of all, quanta is not done, is not finished, but we are on it. And while we are on it, we are building this living style guide at the same time that we are building it. It looks a little bit like this. So we have a Storybook set. Storybook is being used by Plon6 right now. And it also allows us to have these components documented in a way that, as I said, you can play with them and you can even try them and see how they behave and look like. 
So right now, I want to show to note that we are at a point that we are using another time of development with another workflow, that is the style guide driven development. Or, but as I said, Storybook driven development in our case since we are using Storybook. So I found very, very useful this workflow where you start like you used to start, so you unit test, then write the code, then make the unit test work, then write the acceptance test, make the acceptance test pass and make the, fix the bugs that could come out of it. And lastly, write the Storybook element of the component that you are writing. And what do you, when you are doing this? So it's amazing how much you learn by isolating your component because you normally use to develop a component inside a context, right? So it has things around, but it could be that by having these things around, you miss things, means bugs, right? And by isolating your component, you realize that you could have wrong margins, wrong paddings, CSS leaking issues. So in the context, it's working well because may, might be that as CSS is leaking and it looks, it makes your component looks fine. But when you isolate it, then you see that that CSS is missing because it was leaking from somewhere else. So it's very, very valuable that you finally do the storybook, not only for having it documented and in your living style guide, but also for yourself or the sake of completion on the component itself. So as I said, in Plon6 front end, we also have a storybook. It's also an ongoing effort because as you might imagine, building a storybook for every component that we have is hard, but the idea from now on is that if you develop something from the core, you should provide always the storybook counterpart. So we are feeling this storybook bit by bit, right? Right now, this is how it looks like. You can check it out in the URL as well. And we have, yeah, for now, some widgets, for example, the object widget, which is this one. Like we have also others like the object browser or some other widgets or basic things like the breadcrumbs, for example. So take a look into it because it's very, very powerful and it gives you some insights that otherwise you won't have. And it's not only useful for Plon6 or for Quanta itself, it could be also useful for your add-ons or for your projects. So if you're building your project and at the same time you build your storybook, it's super valuable for you, your team, and your client that is seeing live how his product evolves and is built with. So we need help. We need help, please. So we have, of course, a lot of made-up components in Plon6 frontend. And yeah, if you want to give it a try, take a look at the existing ones, build the environment on your machine and start creating them. Back to Quanta. Quanta has some basic elements that we will, as you will imagine, and we'll have to build them because, as I said, Quanta is not finished. We barely started. And we start now the effort to accomplish having Quanta finished. For now, we have focused and we know that we should focus in the same fundamentals like Pasternakaui has, which is this one. So we should simplify and focus. Don't make the user think. Let the user go with the flow. The user should also have a happy path action. So the user doesn't have to think more about now what's next. Have a structural visual hierarchy as building blocks. We have to earn the trust of the user every day. So it's not enough that we start well. 
And then all of a sudden, we do a thing that is not consistent with what we did before. And then the user gets like, what? I mean, this is not what it used to be. I used to not have to choose. And now these guys made me choose between hundreds of combinations, right? And make feel the user that he's understood and treat him as a person. So these are the fundamentals that we don't have to miss to get out of sight. We've been working on the strategy for Quanta these days. We is still a working progress, but as a basics, the aim is to have it at some point during... Yeah, I mean, we're not going to promise anything, right? But the first target is that we deliver in packages that doesn't harm the release of Plone 6, right? So we will have some nice things from Quanta in Plone 6, but we will not jeopardize the release of Plone 6 because we are still building Quanta. And that's also a premise that we want to have. So yeah, we will build up, depending on how Quanta develops through the years, through the... Better, through the next month. So I'm going to be optimistic on this. And yeah, and here, the strategy, what I can show is something that, yes, we already decided is that with Quanta, you will be able to build a theme with any framework, CSS, or design system that you choose. And then the CMS UI of Plone 6 or the future of Plones will be Quanta and will be built in this... Using the Quanta design system. So you could basically remove the theme and you still will have Quanta and the management screens working and you could replace the theme with whatever other theme that you want. And Quanta with CMS UI will be still be there. So that's a summary of what we are thinking of. Okay, so now, what we're going to do in the last... In these last minutes is to show you Quanta, right? You are all waiting for this. Of course, since I mostly unprepared, I wanted to make some nice animation. So I'm clicking here, I'm clicking there, then I get this, but I'm going to walk through it, right? And you're going to pretend that this animation happened, right? So this is the main, the homepage. It could change. The reason is a first prototype of what the theme part of Quanta could be, right? Then if we click into the login, we will go to this kind of screen that... We will see a trend in Quanta that Quanta is very clean, very clean, and go straight to the point, right? Without too much cluttered things and analytics. And once we are logged in, then we get the toolbar. This is a slightly revamped toolbar from Pasta Naga White, right? And it has its collapse version, which is a little bit fancier than the one before, right? Is this... So then we have the More button. This is the Remumble More button, where we can change the state, change the view, or accessing to other views, like the sharing view or the history view. Then we have the Add More, the Add Content button, that will show these first two more important elements, right? That probably will be... not probably, but I'm sure of it that it will be configurable, and you could put here the most useful content that your client want to use, then the rest of the contents below. This is the Personal Tools menu, with the usual access to your profile, your settings, and the set setup, and that you can change your profile picture from here, and also log out up in the right corner. These are the mobile views of some of them. As you will see, these... the menus come from down. They used to come from up, but they come from up, from down in a drawer fashion. 
Let's see the folder contents. The main feature of the folder contents is that it resembles an app on itself. It doesn't have the top part, the header, or the footer, and focus on what the folder content should do, which is manage content. We don't need the frame of the site here. We don't need the constraints of the site main content with. We want to use the whole space that is available, both vertical and horizontal. This is some of the things. The folder contents has an add content button as well here on the right, and you can select and then the tools appear on the top following this approach that simplifies things. I won't show the tools if the user cannot use them. Only when you select things, you can have access to the tools. Then a quick mobile view also with some modal in case that you want to delete something, also in a drawer fashion. I will hurry because they give me the... This is a form, normal form with no blocks. We're going to create a content object here, and we have here a document where you can type your title to the description, and there you can create... Sorry. Here in the Cof, you can open the metadata of the site here in the sidebar, and you can close the sidebar. One of the things is that the sidebar by default, it's not present only when the user wants to show it. That could change depending on the content type, but for now it's going to be like this. We have an accordion there for accessing the different tabs, and we have this, which is what we call the quanta toolbar, and it will be a toolbar that is present in every block. From this toolbar, we can access the most useful tools that we can have to act upon over our block. For example, paragraph titles, blah, blah, blah. For the remove block comes in here. Here it will also place the cut, the copy, and things like that. Then there we have the block chooser. This is the block chooser. This is an image. This is the browser content, the object browser content. We can choose the content. We will have different views on the chooser. We can search, then add the image. This is the sidebar of the image. And finally, this is the mockup of the listing, the listing block that will be updated when we will put the criteria here in the middle. And then when we are editing the criteria, we'll have it in this fashion on the sidebar. And this is a little bit the design of it. And one last thing. So Quanta will have a dark mode. Thank you, Victor. Yeah, everything there looks great. I'm sure there will be people that want to be asking questions in Jitsi. So go ahead and hop over there. Thank you, Victor. And join us back in about 10 minutes for the annual meeting.
The web development workflow often includes defining the look and feel of the website, intranet or app you are building. The way designers and developers communicate has changed over the years, and so have the tools they use: from PDF mockups through Sketch and Zeplin to Figma. Nowadays, the most complete way to communicate and model a look and feel is through living style guides. A living style guide is the place of reference where developers and designers alike check out the design baseline and look and feel of a project. It also holds a record of code standards detailing the various basic blocks, components, elements and patterns of a site or application. By having this at hand, both developers and designers know the baseline and can easily build brand-new components of any complexity based on the basic blocks defined by the style guide. Style guides also provide the starting point for new look and feels that different projects might require, without having to reinvent the wheel every time. The story of style guides in Plone started with Pastanaga UI, and it has now evolved to the next level with Quanta.
10.5446/55955 (DOI)
Let me quickly talk about myself and America. I'm working with Plone since 2004. Many, many incarnations nowadays I work for a key concept. I've been deploying Plone with Docker since 2015 with different approaches. At some point in 2018 I started deploying Plone with Docker and Plone. The Docker image being based on PIP. I honestly believe this is a huge advancement when compared to deploying with the Docker image with Buildout. During this training we are going to explain how the new Docker image for Plone 6 works. We are going to implement some of those images, especially the Plone backend one for Plone 5252, 5212, 345, just as a backward compatibility, so you can use it. But in general this whole training is focusing on Plone 6. So we updated everything to run with the latest alpha that was released yesterday and with the Voto 1400 alpha 23. So it's bleeding edge. And I ask again for you to take a look at the training setup. It's on the screen here. And also on Slack channel you have the link. You can follow the first three steps with the documentation in training.plone.org. We are going to update that over the week so you can do it yourself again. But first step, just make sure you have everything working as listed in the training setup documentation. Okay. If you have any questions, write on Slack. Fred and myself, we are on Slack. We can take a look and help you specifically to fix your environment. Okay. So first thing is after you set up also make sure you do have an account on the GitHub. Okay. We are going at some point we are going to push the images we developed to Docker Hub. If we have time by the end of the training, we are going to do that using GitHub actions as well. If not, I can point you to how to implement yourself. It's really fast to do it and it's the preferred way of doing that. Also configure your Docker to be logged in. So you do Docker login with the credentials you have for Docker Hub that help you to get images and then to push. So Alexander, you have a question. Yes. As some organizations like ours working a lot with GitLab, GitLab CI and the internal container registry of GitLab, could you, if you have the time and the knowledge, also talk about that because sometimes organizations don't want to push their images to a lot of registry like Docker Hub but to their local registry. There are some things that I need to set up. I try to make this training as more comprehensive as possible but trying to avoid answering all the possible local questions. So yes, of course, you can use your local repository for the container images and yes, you can use GitLab to be the CI to publish the image and so on and so forth. That does not change a bit. It's kind of the same approach. One important thing in here that I want to leave it clear, this is supposed to work also for very, very, very small setups. So I'll give you an example. The goal of this training was to show how can I get a $5 machine on digital ocean and deploy Voto and Plone and eventually even a database with in this machine without having to worry about building everything and so on and so forth. So we are going to tackle this scenario and growing from this scenario to moot machine is not hard and going from this scenario to a Kubernetes setup, for instance, it's also not hard with Rancher, without Rancher or whatever but we are going to try to do the basics. Also, the second thing is if you see here, there's a code repository, this one. So I'm going to open in a new tab for me here. 
I ask you to please fork it to your own account — I have a bazillion organizations, so let's start by forking it — and then we are going to clone it to our local environment. That is the first step. Then cd into the clone. I am going to use VS Code; you can use whatever editor you are familiar with. It asks if I trust Erico — I hope I do — and that's it. The first thing we need to do, and it is documented in the training setup, is to replace the occurrences of the Docker Hub "change me" placeholder with your Docker Hub username. So search for the placeholder and replace it — in my case with my own Docker Hub username — and then check that it looks right; hopefully the documentation is good. Please write on Slack when you are done so far, so we stay on the same page. Okay, Tony is also done; everybody else besides Kim? The next step is going to be a simple one: just install Plone. I believe you have done that many, many times. But before that, look at the structure of this repository: we have a directory for Ansible, which is the one we are going to use to set up a remote deployment; a backend directory with the configuration for the Plone backend; a dockerfiles directory with a set of example Docker Compose files we are going to use; docs, which right now only has the image used in the readme; and a frontend directory with the basics, a Dockerfile and a Makefile. The repository also has a Makefile at the root that we are going to use to speed some things up, and a readme that explains what we are going to do. We already have the setup, and now we are going to set up the backend. Since we are working with a pip installation of Plone, one of the things we are still missing from the buildout days is the bunch of helper scripts that buildout would give you for free, so right now we are leveraging a Makefile for this. In backend/Makefile we have a clean target, of course, to remove the whole installation, and then a target to create the instance, which depends on the virtual environment; if you run make build, it installs everything we need and then creates the instance with the user admin, password admin. The first thing you need to be aware of is that every pip reference right now uses pip install with the flag --use-deprecated legacy-resolver. Since version 20.3, pip, the package installer for Python, has a new resolver, and for Plone installations, which are really complex — a bunch of things depending on a bunch of other things — it is failing. The Plone release manager already opened an issue and sent a pull request to fix that on pip, but until it is merged we need to make sure we use this flag, otherwise the installation never finishes: it just keeps running and running and you have no idea why.
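To make that pip point concrete, here is roughly what the make build target boils down to — the internals and the instance-creation helper are assumptions on my part, so treat it as a sketch rather than the exact Makefile:

    # Roughly what "make build" does inside the backend folder (helper names are assumptions)
    python3 -m venv .
    ./bin/pip install -U pip wheel
    # The legacy resolver works around the pip >= 20.3 resolver issue described above;
    # drop the flag once the fix lands in pip.
    ./bin/pip install --use-deprecated legacy-resolver -r requirements.txt
    # Create the instance with the admin:admin user (mkwsgiinstance is assumed here;
    # the repository's Makefile may use a different helper).
    ./bin/mkwsgiinstance -d . -u admin:admin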
So, if we go back here, what we are going to do is basically cd into backend and then make build — it is make build, not make setup. This is going to take a while. "Could you please directly adapt the readme of the repository, because there is no make setup target there?" Yes, I am going to fix that on the original one, thank you. And a lot of people are still joining — welcome, Alex; hello, David. The changes are already updated on the source repository, and it is running here. And of course I forgot to push the skeleton directory; it was not in the original repository, so let me fix that — I am opening a new terminal and pushing the missing skel directory. Maybe in the meantime we can learn a bit about your backgrounds: who of you already has experience with containers and Docker? You can write on Slack or just use a reaction in Zoom — my Zoom interface is quite limited, but I see Alexander said yes. We should also ask which operating systems you are working on, because that also matters and limits what your setup can do. I know a few of us are using macOS, the others Linux; I hope there is no one on a Windows machine — nope, we dodged that one. As Kim mentioned, there is the question of the M1 and the newer Apple machines: the training says VirtualBox is required, and VirtualBox does not run on the M1. "Do you have any substitute, or is VirtualBox really required?" I did not try without VirtualBox, to be honest, and that could be a problem when we reach that point, so we can test it. Fred does not have an M1, Timo does not have one, and Kim is going to log into one to test; he knows what is going to happen, so he is going ahead. "Is there something else that could work together with Vagrant, for example libvirt or Parallels?" I did not try. We are using VirtualBox now to run a Docker machine, to have a basic platform — that is the setup I tested for Erico this morning. But what is running in Docker is still just a Plone backend, which is Python, and a Node.js process running the Volto server-side rendering intermediate server, and those things should work perfectly on an ARM-based Linux or on any platform running Linux. So we are setting this up with VirtualBox as a kind of safety net, to have it isolated, but during the training you could also skip the docker build inside the VirtualBox machine and build directly with Docker running natively on your system.
If you are doing this right now with Docker Desktop on an M1, it is probably starting an emulator in the background and getting very slow, so that is something we will have to figure out, maybe after today and in the coming days: how we can make sure the Plone backend and the server for server-side rendering behave decently both on x86 container runtimes and on ARM-based runtimes. Thank you, Fred. Meanwhile I am testing something here and I am going to push an update. Can you see my screen, the three commands? They are really, really small, so I will put them in the chat — or on Slack, even better — and you just run them on your command line. "After rebasing, should make build work?" Let me see: make build fails with "no such file or directory" — that is what is happening to everyone, which is strange. Run make clean and then make build again — the WSGI instance is still not there, it is failing with pip, and it is happening for everybody, so let's look at it. Got it: what we are missing is that the make clean target is not updated. I am not going to fix the whole thing now; let me test whether it works here if I also remove lib64, include and pyvenv.cfg. Now make build — whoa, a different message: "invalid command bdist_wheel", so we probably need to pip install wheel as well. Kim, first make sure the basic thing works: I posted on Slack — remove lib64 and include and then run make build again. "Can you put it in the chat as well, please?" Done. And Kim, if it still fails, run the extra command and then make build again. It is interesting that Kim appears twice for me — yes, he is running Zoom on two machines so he can listen while also looking at the screen. So, is everyone running now? Perfect; it just takes a while. Here we go — meanwhile we will fire off improvements to the build, and that part is done; saving here as well. I am not waiting only for Kim, I am waiting for more people to say the build is done, otherwise this becomes a presentation — please write in the chat or on Slack. Also take a look: there is an rm -rf I still need to add to fix the make clean target. While the build is running I want to show you something. We have a requirements.txt file at the top level; that file only pulls the other pieces together, so the final resolution looks for Plone from the plone requirements file and basically goes to the latest version, and we install it with pip install -r pointing at the requirements, which installs that Plone version. I also follow the approach of putting the development-only libraries in a dev.txt, and I have an unreleased file, which is empty right now.
For instance, for packages that I want to check out from git because there is no stable version yet — say I fixed something and want to test it — I put them in the unreleased file. And here is where we add the add-ons you need; that is going to come in handy in a few moments when we start playing with Docker. Okay, done — many of you are already done, so we can move on to the next step. Pay attention here. Once it is started, we have a Plone backend running on port 8080, so instead of the Docker Hub address I am going to open localhost:8080, and voilà, we have a Plone 6.0.0 alpha 1. Since we want to test this with Volto, I am going to ask you to click on Advanced. It will ask for a user and password, and both are admin — and please do not save them in the browser. To speed up the process, in the advanced form we do not want the example content, and we want to check the two Plone 6 frontend options: the default homepage for the Plone frontend and the Plone frontend (Volto) profile. These are probably going to be renamed — when we have Plone 6.0 alpha 2 they will have different names just to set the order, and we will probably come up with an easy way for you to say "I want a Plone Classic site" or "I want a Plone Volto site". I will leave this on screen for a while so you can see the options I selected; they are also shown in the image in the readme of this package, so you can check there. Then click Create Plone Site, and that's it — that is the backend for our Volto site. Please tell me when you are at this point with me. Meanwhile I am going to leave the backend running here and open a new tab, going back to the root of the project. So now I have two tabs: one running Plone on port 8080 and another one at the root of the package. "Can you please show the add-on options again, or do you have a link?" Which option, sorry? "When you created the site." Oh, sorry — I will link to an image: you want to check the two Plone 6 frontend choices. There is the image, and Tom shared it as well, so here it is.
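Going back to the backend requirements layout described a moment ago, it boils down to something like this — the exact include-file names are assumptions, so adapt them to the repository you forked:

    # requirements.txt -- the top-level file pulled in by "make build"
    -r requirements/plone.txt       # resolves Plone itself, currently the latest 6.0 alpha
    -r requirements/dev.txt         # development-only tools
    -r requirements/unreleased.txt  # git checkouts of not-yet-released packages (empty for now)
    # project add-ons go here as well, e.g.:
    # collective.someaddon==1.2.3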
Now, at the root of the project, we are going to run the Volto generator — I will paste the command on Slack so everybody can follow. This is the moment we would normally have Victor around; if something fails now, we have the experts to point out what we did wrong. It takes a long time to start, so while it runs: we are basically creating a new Volto project inside the frontend folder, which is why you need to be at the root of the project. The first question the Volto generator asks is the name of the project, and we are going to name it frontend, to follow the same folder structure. Then the generator asks whether you want additional add-ons, and we are going to install just one: volto-slate, which is going to be the default text editor for Plone 6. So: first question, instead of the default name, answer frontend. Next question, "Would you like to add add-ons?" — instead of false we answer true, and then we add the add-on value I am pasting on Slack: @plone/volto-slate:asDefault is the only add-on we are going to use now. Then answer false to finish. Someone is complaining about the npx step — let me check the Node version; it should be 14. "Do I add add-ons when it asks?" Yes, the only add-on is volto-slate as default, and then false. If you go to the readme at the root of the project, we are basically following the steps there: frontend, then true, the add-on, then false, and then on to the next step. Tom, your issue is probably that you unselected Barceloneta; I can try to look at your site, but don't worry, that is not important right now. And it seems Bernd has Node 14.18, so let's check npm — could you find out which version of npm you have? Yes, npm is really old there, but even so it works; and anyway, in the next steps, when we use Docker, we are going to catch this kind of issue. The only thing I ask now is to go to the next step, which is getting a newer Volto version — and you will have a problem if you did not fix npm, because you do not have all the versions. Timo is going to help with this. Here you basically go to frontend/package.json: the current Volto version is 13.15.1 and we want it to be 14.0.0-alpha.23, so change it and save. "Can you show again what we have to change?" Yes, of course: the @plone/volto value needs to be 14.0.0-alpha.23 — the readme of the project already has these steps, and as Timo mentioned, you could also just run a command line to do it. After this, run yarn install — not yarn start yet — and then you can run it. Perfect, Bernd is up and running again; just follow the readme steps for the frontend and you will reach this point: Volto running on localhost:3000 and already talking to the Plone backend.
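Putting those generator steps together, the whole sequence looks roughly like this — the exact generator invocation and flags may differ between versions, so treat it as a sketch:

    # One way to run the Volto project generator
    npm install -g yo @plone/generator-volto
    yo @plone/volto frontend --addon volto-slate:asDefault
    cd frontend
    # Pin the Volto alpha in package.json: "@plone/volto": "14.0.0-alpha.23"
    yarn install
    yarn start   # development server on http://localhost:3000, talking to the backend on 8080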
"I'm sorry, but even if I type false, it keeps asking if I want to add another package, and Ctrl-C quits." What you should do is simply press Enter instead of typing false — or type False with a capital F. That's interesting; Fred is threatening me here. The moment you arrive at this point, you have a Volto app running and talking with Plone. One thing I want to mention: this is what you probably do on a daily basis for development — you start the backend and then keep changing the frontend, adding a new logo, changing the layout and so on, based on the project you created. Then you want to test how this is going to behave and look in production. For that, this project provides you with Docker files: in the dockerfiles directory there is a file called local-dev.yml. That compose file will, first of all, build the frontend, build the backend, and create a data volume for the backend. I am not going to go deep into data volumes here, so I will point you to the official documentation: you can either use a pure data volume or persist the data on the host file system, it is just a matter of how you set the values here; this example has a named volume called data that you can work with. The important point is that, by default, we use what Volto 14 calls seamless mode: we just need to tell the frontend to look internally for the host named backend, because otherwise it would look for localhost, and inside the container localhost is the container itself. By pointing at backend — which is the service name here — you are pointing to the other container, and Volto does that magic internally. Inside the frontend folder there is a Dockerfile I want to show you. We start with Node 14 as the base; it is a multi-stage build. In the builder stage we create the work directory, install Python 3 and the compilation support needed on Debian to build our code, copy package.json and yarn.lock, and run the installation in production mode — meaning dev dependencies, your linters and such, are not installed. Then you copy your entire source code and run the build; everything happens inside /usr/src/app, so you are building everything there. Then you start another image based on the same base image, create the work directory, copy from the builder everything that was built there, set the working directory, expose port 3000, and start the project in production mode. This image is, I believe, around 200 megabytes slimmer than if we did everything in one image, and with the multi-stage approach we also do not carry build-time stuff into production, where it would never be used anyway. For most projects your Dockerfile is going to look exactly like this; it is not a big change.
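For reference, a minimal sketch of that multi-stage frontend Dockerfile, following the description above — the base tag, paths and install flags are illustrative, not the exact file from the repository:

    # Builder stage: install dependencies and build the Volto bundle
    FROM node:14-slim AS builder
    RUN apt-get update && apt-get install -y --no-install-recommends python3 build-essential
    WORKDIR /usr/src/app
    COPY package.json yarn.lock ./
    RUN yarn install --production --frozen-lockfile
    COPY . .
    RUN yarn build

    # Final stage: only the built application
    FROM node:14-slim
    WORKDIR /usr/src/app
    COPY --from=builder /usr/src/app .
    EXPOSE 3000
    CMD ["yarn", "start:prod"]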
The only thing you are going to update here is the label information; everything else stays pretty much the same. Of course, if you want to use a different Node version — instead of starting from 14 you start from 16, which is going to be the new LTS by the end of this month, if I am not wrong — you change the base image. Now for the backend, the Dockerfile is not much more complex: we start from the plone/plone-backend image at the version we want, copy the local directory, and install the requirements file — again using the pip legacy-resolver flag — and that's it. This is how you extend the base image with your own code: if you had a policy package inside src, the moment you copy everything it is there and you work with it. So, what I want you to do now is stop Volto, stop the frontend and the backend, go up to the root folder of your repository, and run make start-images. This is a helper that builds the images — there is also a make start-images target at the root — and it calls docker-compose, which will build both images and then start them on port 3000 and port 8080. The moment you run make start-images, it starts building the images. Vicente has a problem here with yarn and Volto — can you share your log? You must have something wrong in your package.json; I am going to point you to my repository so you can compare your code to mine. Let me push my latest changes first — I am pasting in the chat the link to my GitHub repository; go to frontend and compare the package.json. Here it is still building the frontend: it installed everything and now it is going to copy the code and then build. I hope I did not forget to add node_modules to .dockerignore — otherwise we are going to learn a valuable lesson. Let me check: yeah, there is no .dockerignore; I will fix that later. It is compiling the client. Okay, it finished compiling everything and it started, so let's go back to the login. Important: I am going to log out, because even though it is on the same port, it is a different server, so the login will not carry over. If I refresh — why is it starting with content already? Oh, sorry, I already had content in my local volume, that was probably it; I am going to go and check. If it is your first time you will not have any volume, so you need to go to port 8080 again, Advanced, same thing, create the site with the same content options, and then when you access port 3000 you should have Volto again. That is strange, Kim — "package not found"? When you ran make start-images, what is the name of the folder where it is installed? "I think it is solved now: there was an error in one line of the package.json — something was written like volto-addon instead of volto-slate. I really don't know where I copied that from, but now it seems to work."
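For reference, the backend Dockerfile described at the start of this step boils down to roughly this — the tag and the paths inside the image are assumptions for illustration:

    # Extend the official backend image with your project code and requirements
    FROM plone/plone-backend:6.0.0a1
    COPY requirements.txt /app/requirements-project.txt
    COPY src /app/src
    RUN ./bin/pip install --use-deprecated legacy-resolver -r requirements-project.txt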
Let me just double-check something here. When you do everything we have done so far, you end up with a Docker Compose file with two services, one for the frontend and one for the backend, and it runs Volto; you can test that it works, and this is the way to guarantee that the moment you deploy, it is going to work. Kim, can you show us the contents you have inside the training project? Do a directory listing so we can see what is there. Ah — you installed everything at the root: when the generator asked for the project name you did not provide frontend, so everything went to the root folder. You need to move package.json, yarn.lock, node_modules, build — everything that was not in the package before — inside frontend; whatever got added to the root should not be there, it should be inside frontend. "So I should have cd'ed into frontend first?" No — during the generation, when it asked for the name of the project, you probably just hit Enter; you should have typed frontend. "Okay, got it, thanks." "I still have a weird issue where sometimes the generator, when creating a Volto project, asks right after the add-ons whether I want to install an extra workspace, and sometimes it doesn't; has anybody else seen that? I think it is a local fluke." I have seen that before; we are going to have a pause at some point and we can crash the other training, where Victor is, and ask him. The workspaces are only for local development: if you want to develop add-ons locally you provide the workspaces, generally as src/addons with a glob, so that is for local work — I don't think we need it for the image build. I am going to stop the containers here. We also need to update the Node versions, which should actually be 14 or 16, the main versions. So git fetch again — in my case git fetch upstream — then the merge prompt opens and I confirm. And now there is a .dockerignore at the root of frontend, and then we get back to the build process. I am waiting for you; leave a message telling me where you are. "My backend running with Docker is also missing some styling, that's strange." Oh, Tom — do not access it with 127.0.0.1; that is the thing where, if you use 127.0.0.1 instead of localhost, it will not show anything properly. It is fixed on my GitHub; I am pasting the link here again. Perfect. Remember that when you rebuild, you also need to manually pin the latest Volto version again. Tom, yes, this training is being recorded and we are going to post it later on YouTube — not right now, but it will be there. "Good, because my dog is asking to go out." I am going to do one more thing and then we are going to stop for twenty minutes, so people can take the dog out and such.
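On the .dockerignore that was added to the frontend above: a few entries are enough to keep node_modules and build output out of the Docker build context — the entries below are illustrative:

    # frontend/.dockerignore
    node_modules
    build
    .git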
And now I see a lot of people are done — except for Kim, of course, because he is running on an M1 and something had to go wrong. Importantly, though, once we take that break we are going to come back and ask who took the dog out. So, if you are logged in with Docker, the next natural step is make release-images. It takes a while. You could also use the Makefile inside each folder: in backend and in frontend there is a make build-image target that does the same thing and lets you change the image name; if you look at the Makefile you will see exactly the name of each one. And if you run make start at the root, what it does is start a Docker Compose that uses the built images for you. So run make release-images. There was an error for me — registry not reachable? Let me try again; maybe it is because I already pushed mine. Okay, it is going, and it is there: on Docker Hub I now have two additional images, project-backend and project-frontend. The old one in there is the pip-based image I used a few years ago, but these two are the ones from this training. "How did you upload your Docker image?" With make release-images. If you look at the Makefiles, both at the root and in each folder, you will see what happens behind the scenes, but it is basically docker build, docker tag and then docker push. Yes, I am going to keep it running because this background is beautiful — please cut this part from the recording. So we are going to take a twenty-minute break now; we are back at ten past five Central European Summer Time. Meanwhile, please try to reach this point, and I am going to make sure all the content is updated both on the original repository and on my fork, so you can compare with mine. "No other group should have run make release-images yet, though — is that the expected point? Do we all have to be at make release-images?" You basically run it and it builds and pushes both the frontend and the backend, because the root target does both. The approach here is: I have one image for the frontend and one image for the backend, and at the root there is a Makefile that calls both projects. I will look later at doing it in one shot, but for now, if you go to the frontend folder you have a Makefile with a make release-image target — at the root it is plural because it does both, in each folder it is singular — so running make release-image in frontend and in backend also solves the problem. Any more questions before the break? Wait a second, someone had a problem — oh, it's me again, of course. Come on.
But yeah, if you try again it will probably work, because it looks like a transient issue — it is a Sentry error, which is interesting; I will look at that problem. See you soon, twenty minutes. [break] Yes, I'm here. "One question: my Docker image build from make start-images is still running, so I am still waiting. Also, I have seen your fantastic slides from your presentation — do you know if there is a YouTube recording of that?" Yes — basically go to the link I posted; I am sharing it with you on Slack. And yes, I mentioned you by name and said that you inspired me; honestly, it was a great piece of work. So, is everyone back? We still have a few minutes, but I wanted to check whether all of you reached the point where you have the images on your own Docker Hub account. If possible, just post your Docker Hub username on Slack so we can take a look as well. Still fighting? You're a fighter, not a quitter. Okay, let's start. "I was just trying out something that Erico had suggested, but anyway, let's see" — and then see if someone can help you; otherwise they become your tech support after the training. "You send me a MacBook so I can test it." All right, let's start the second part of this training, which is the real deployment. To simulate the deployment we are going to use Vagrant, to have a virtual machine that is going to host things and behave like a new server. In the repository we have the ansible folder, with a readme as well. First, some disclaimers: we use make here too, and the Makefile is more complex than it should be because of Vagrant — just being 100% clear. We are going to start, as always, with cd ansible, make clean, make setup — this installs a virtual environment with Ansible, nothing big — and then make vagrant-provision. Let's look at the Makefile so I can explain: installing Ansible is quite easy, no big deal; but the vagrant provision target uses sudo. Now, why on earth are we using sudo? Because Vagrant will not forward ports lower than 1024 from the guest to the host, and we are going to set up an nginx server listening on port 80, so it would be impossible to test this without starting Vagrant as root. That adds another complexity: when you run vagrant up / vagrant provision as root, it creates a private key owned by the root user, so we then need to copy that key to the root of the folder under a new name, change the owner of the key to your own user, and also remove any mention of this server from known_hosts — because if we do vagrant destroy and bring it up again later,
the second time around this will bite you: SSHing into the machine will fail because of the stale known_hosts entry. From here on it is Ansible, and we are going to use Docker Compose. So, first things first: go to the ansible folder and run make clean and then make setup — make clean removes the working environment, make setup recreates it. Thanks, Kim; Kim is getting his old Intel Mac, so that part is done. I suggest you also start make vagrant-provision right away, because if this is the first time it is going to take a long time to download everything. It will ask for your password for sudo — and of course I forgot my own password, which is awesome. I already have the box image, so for me it is quite fast. This is the moment it complains that if you are not root the ports are not going to work. It sets up the machine; there is going to be a vagrant user with a private key, and I copy the key so it is named vagrant_private_key, set the permissions, and remove the host from known_hosts — in my case, as you can see, I already had an entry. As I see people arriving back now: again, enter the ansible folder of your repository, make clean, then make setup, and then make vagrant-provision. "One more comment: you use the ubuntu/focal64 Vagrant box; if you switch that to the generic namespace — generic/ubuntu2004 — that would be helpful for users on different systems, because the Roboxes project builds most operating systems for the different hypervisors, and it works exactly the same way, so it could run on libvirt, on Parallels, on Hyper-V." Can you write that on Slack? Yes. And Valentin seems to have this problem — why is the provider libvirt and not VirtualBox? Okay, let me have a check. Please write on the Slack channel when you are finished with the vagrant provision, so I know when to move on. Whoa, Vicente is already done. The error you see happens when you try to remove a known_hosts entry that is not there; it complains, but life goes on, it is not a big deal. Mikael, that is expected — I should change the Makefile to first check whether the entry is there. Also, for those who finished, you can run sudo vagrant status and check that the box is up. Anyone else having problems? "I am still having the problem with my Mac, the i9, so it is very slow." Oh my — it is a known thermal problem with the 16-inch MacBook Pro; I don't know why it is so weird. "But I can catch up, so no problem." "Excuse me — when you are in a production environment, how many Node servers and how many Plone servers do you have?" That is an interesting question, and it really depends on the site. For instance, one of the advantages of Volto is that it is really lightweight compared to Plone when it comes to rendering the frontend, so with a smaller setup you can basically support the same amount of requests for bigger sites.
So it really depends on the setup you have — and there is going to be a talk, unfortunately in Portuguese, about the Brazilian electoral justice, where we had, I believe, four or five frontend servers and fourteen Plone backend servers, each running four or five instances. That is one of the advantages — but we are talking about one of the biggest sites in Brazil: during a few hours on one day every two years it is the biggest site in Brazil, because that is when the election results come out and everybody and their dogs point to that site. You can scale this up easily now. "And you need Node to be running?" Yes, you have a process running Node, and we are encapsulating that with Docker now. "Because many times you only need Node because of search engine positioning." Exactly — SSR, server-side rendering, is exactly why it is there. "Do you still need Node to be running for Volto?" Yes, because everything is built that way — but it depends on the type of site you have. I have seen a different setup: the University of Jyväskylä in Finland, I believe, has one process running Node for the content management, but the site itself is deployed using Gatsby, which is also based on React — you just generate the build with the contents and push it to static hosting. "So for an intranet, for example, where everything is private, it could be a good idea to have just the JavaScript content and Plone serving the content?" Actually, I believe that for an intranet what happens is that you do not care about SEO, but you do care about speed, so you have to weigh it that way. When the community built the frontend with Volto we had to care about both scenarios, and that is why Volto implements SSR. I still believe Alexander looks confused, meaning his computer is still building everything. Anybody else? "Sorry to interrupt — it was just a weird situation with my setup: I reallocated some memory to my Docker setup and something was still failing. If you have a machine with 64 gigabytes of memory, the default of only two gigabytes allocated to Docker is a bit weird. I was really excited about having a machine with 32 gigabytes — and I am running with 16 now." So, basically, if you reached this point, with sudo vagrant status you have your machine up and running, and you can even connect to it using vagrant ssh — and voilà, you are inside the server. The next step is actually running a playbook. But before we start, and I ask you to do it right now to avoid problems, open the setup playbook, setup.yml, and look at the line with my public key: you need to replace it with the public key you use on your system. So open the setup playbook, go to around line 39, and replace that line with your own public key. Coming back here: this playbook will do the following — it installs some base packages, not many, and of course we can keep adding stuff here.
The goal of our team is that by the end of the conference we will have a better version of this playbook — setting up a firewall and the other things we have in the default Plone playbook — but for now it installs the base packages, which are nginx, Docker and Docker Compose, creates a user named plone, adds your key to the plone user so you can SSH into the machine with your configured key, configures the nginx web server to listen on port 80 — you can see the configuration in files/nginx-default, and we will look into it later — and copies the docker-compose file to the server. Based on that, we are going to run the playbook; it creates everything and then we start our stack. "I copied my key in a single file but I couldn't log in — what happened?" Did you run the playbook already? "Yes, I ran it, but I made a mistake: I just echoed the public key, copied it, and pasted it as three lines, so I will copy it as a single line and paste it as a single line." Alexander, I structured it this way on purpose: the idea is that you can add another server that is not Vagrant and the playbook does exactly the same thing — we can spin up a droplet on DigitalOcean, add it to the configuration, say "apply this", and it will do the same on that server. That is the goal. So change the key line to your public key and run make playbook-setup; it installs everything. If you get an error message, post it on Slack so I can see. And welcome, Lucas — better late than never. If everything is okay, you should be able to SSH to the box as the plone user: using my local private key I am able to connect. Alexander, you will see why we used with_items: in some companies you want several keys, so that either I can run the deploy or a colleague can. "It is also possible, with a lookup or a vault, to have no public keys or any keys in the repository at all — you pull them out of a credential store, even the public keys, so you have centralized management: if a key is compromised you revoke it from every machine." Yes, that is a really good point. Whoever got the playbook to run, write on Slack; if it did not run, write what happened so we can help. Go on, Alexander. "I just wanted to say that the security of your setup should always be kept in mind, even if you are still working on a development setup — always look to the future, because if you do not secure your setup, you might have problems." Oh yes, for sure. "We are having a discussion about whether we should standardize the company on Linux laptops." That's scary. Okay, so we come to a point that was something new even for me until a few days ago. Let me just make sure everyone is on the same page and still following the rhythm.
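A condensed sketch of the kind of tasks that setup playbook runs — these are standard Ansible modules, but the task names, package names and paths are assumed for illustration, not copied from the repository:

    - hosts: all
      become: true
      tasks:
        - name: Install base packages
          apt:
            name: [nginx, docker.io, python3-pip]
            state: present
        - name: Create the plone user and add it to the docker group
          user:
            name: plone
            groups: docker
            append: true
        - name: Authorize your SSH key for the plone user
          authorized_key:
            user: plone
            key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        - name: Copy the project docker-compose file
          copy:
            src: files/docker-compose.yml
            dest: /home/plone/project/docker-compose.yml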
We are going to create a new Docker context. A Docker context allows you to run commands from your local machine against different environments. If everything is right on your end, you just need to run the command I am pasting: we create a context named vagrant, add a simple description, and say that Docker is going to be accessible at the host ssh://plone@127.0.0.1 on port 2222, as the user plone. Run it and you get "successfully created context vagrant"; you can then check with docker context ls, which shows your contexts — in my case just the default one and the vagrant one. Quite simple. In other setups you could have many more: a staging environment, a production environment, and so on. This way you can run Docker commands remotely. There is also an environment variable that changes the default context, but I am not pointing you to it, because it could get people killed: you change the default context, forget about it, think "this environment is not running, let me destroy it" — and boom, your production server is gone. So I try to be as explicit as possible here. After this we have make compose-pull, but instead of just mentioning it I want to show you what the command does: it runs docker-compose in the vagrant context. Actually, I did not show the playbook in detail, sorry — let's go back one step. The playbook installed these packages: nginx, Docker and python3-pip; we install the docker and docker-compose Python packages; we create a user plone and add it to the docker group, which allows this user to interact with Docker; we set up the default authorized key; and we create a project directory at /home/plone/project. I do it this way mostly to make the point that you can have many projects running on the same server under the same user — you could have the Plone conference site and the plone.org website and so on, each in its own directory. We copy the docker-compose file into this project folder, copy the nginx configuration — which we will say more about later — and restart nginx. There are many optimizations possible here, and they are going to be in the final version of the playbook; it is just easier to explain this way than with notify handlers and the rest. Now, when we run make compose-pull, what happens is that inside the Vagrant box we ask docker-compose to pull the images mentioned in the compose file: a database, which is Postgres; a backend, which is Plone 6.0.0 alpha 1; and, at this moment, a frontend that is plain Volto. So it downloads everything — let's watch it work. It was way faster when I was the only one doing it, I have no idea why. Okay, the backend is done, and now it is pulling Postgres and Volto. I should point out something I was not expecting to have to say: this is not leveraging anything you have on your local machine, because it is running inside Vagrant — you are running commands on the remote.
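The context commands described above, for reference — a short sketch, with the description text being my own wording:

    # Create a context that sends Docker commands to the Vagrant box over SSH
    docker context create vagrant \
      --description "Vagrant test box" \
      --docker "host=ssh://plone@127.0.0.1:2222"
    docker context ls
    # Then target it explicitly on each command, for example:
    docker --context vagrant ps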
So the images were downloaded there. As an example, I could run my local docker-compose with the vagrant context — and then, what was the project name we used? I don't remember, but I need to point at it with --project-directory project. And docker-compose ps shows that nothing is running. Why is nothing running? Because this process never started — we just pulled the images. This is something I usually do: first pull the images to see whether there is a new version, since that step usually takes longer than actually creating the containers, and then run make compose-up, which is the equivalent of docker-compose up. So let's do make compose-up: it creates the default network, creates the data volume, creates the first container, which is the database, the second container, which is the Plone backend, and the third, the frontend. Again, if I go back and check with docker-compose ps, the processes are running, and you can always say "I want to see the logs for the frontend" and follow them; if I want to enter one of the containers I can do the same with exec, and so on. The moment make compose-up finishes, we go to the browser and type localhost without any port — and there is a problem, oh my God, something is wrong, what did I do wrong? Of course, that was only for your amusement: again, we did not create the site. So, using this same box: on port 8080 there is no site yet, so we go through Advanced again, simple content, the frontend profiles, create — and now if you follow the logs you will see everything working there. Then I come back to localhost, with no port, and we have Volto running on the server. This is it: I want to deploy Volto and a Plone 6 stack in production, and this is how you do it with Ansible on a completely new server. Right — now I want to show you the docker-compose file. It is similar to the one we saw before: here we point to the Plone frontend image and the Plone backend image, and we use Postgres as the database, with the data volume declared here.
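For reference, the deployed docker-compose.yml looks roughly like this — a sketch, not the exact file from the repository; image tags and the environment variable names (RAZZLE_INTERNAL_API_PATH, RELSTORAGE_DSN) should be double-checked against the image documentation:

    version: "3"
    services:
      frontend:
        image: plone/plone-frontend:latest
        environment:
          RAZZLE_INTERNAL_API_PATH: http://backend:8080/Plone
        ports:
          - "3000:3000"
        depends_on:
          - backend
      backend:
        image: plone/plone-backend:6.0.0a1
        environment:
          RELSTORAGE_DSN: "dbname='plone' user='plone' host='db' password='plone'"
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:13
        environment:
          POSTGRES_USER: plone
          POSTGRES_PASSWORD: plone
          POSTGRES_DB: plone
        volumes:
          - data:/var/lib/postgresql/data
    volumes:
      data: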
So far so good — but what if I do not want this stock image, and want the one we created earlier instead? Anyone? How do I do it? We change the image: we put my Docker Hub username and the name of the image, so it becomes my user and project-frontend, and of course, if you do not set any tag, you get the latest version. Remember this configuration lives on my localhost, so we run make playbook-setup again — everything is the same, it copies the configuration, which is now a little bit different, and the box behaves exactly the same because we just copied a new docker-compose file. Then we run make compose-pull, and it checks: do we have a new version? Not for the db, not for the backend, but for the frontend there is a new version, so it downloads the diff between the previous image and this one — and to be honest it is basically an entire new image; the approach we use for the official Plone images is better at reusing layers. Then make compose-up: it sees that the database and backend stay the same, but the frontend needs to be recreated. And then we come back to the browser and it is up and running, mostly the same thing. So if we go back to our project, change something in the frontend folder, create a new version of the image and publish it, we will see the change here. "Just a question — did you docker login on the box?" Good point, let me check. And Alexander, if we tweak the name of the image and log in to our own registry, everything else works as well — I could push to my GitLab registry and so on; the trick is just to have the correct image names and to log in to the correct registry. Unfortunately Peter is giving his training on Plone 6 Classic, but I know for a fact that he is using GitLab plus Kubernetes for Blue Dynamics, so I would be interested in starting to play with these images on Kubernetes and having a default Helm chart, so you can deploy on Kubernetes as well — again, one step at a time. Remember that the Docker image for the Plone backend was recreated earlier today, because Plone 6 alpha 1 was released, and by the end of this week we are going to have to rebuild it again. "I wasn't logged in — now it's working, thank you." One of the points I want to stress here: don't worry, slow network connections happen to all of us. One really, really important thing: let me point you to the conference site, which was done with Plone 5.2.5 and is using the older images. That is a good question, Michael: the repository for it is the public descendant of the one we are playing with now, and every time there are changes on main we generate new images. Here is one thing I really like about GitHub Actions: you can say "run this workflow only if there are changes under /api" — in our case under /backend — and then build everything. I do some old tricks in there: you will see I point to my own repository for the Plone pip setup, back when the old version was not so stable; I set up caching; I pin some versions just to avoid the legacy dependency resolver; I install Plone, the core package and the add-ons, and I can leverage the cache as well. Then I run the equivalent of make test — because even for deployment, even in the development environment, I am using pip — and make lint to lint the code. Then there is a docker job that depends on the previous step: checkout, set up QEMU, set up the Docker buildx builder, log in to Docker Hub — the username and password are controlled by secrets on the GitHub repository or on your organization; remember that if you set them up at the organization level they trickle down to every repository, so you can reuse them — and then build and push, telling it which tags to use. We have the same thing for the frontend, of course. Under the actions tab you can see the runs — tests, pushes to Docker Hub — and then in production I run docker-compose pull and docker-compose up -d. So it is possible to automate the creation of the images.
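A condensed sketch of the image-building workflow described above, using the standard Docker GitHub Actions — the secret names, action versions and image name are illustrative, not the exact workflow from the repository:

    name: Backend image
    on:
      push:
        paths:
          - "backend/**"
    jobs:
      release:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: docker/setup-qemu-action@v1
          - uses: docker/setup-buildx-action@v1
          - uses: docker/login-action@v1
            with:
              username: ${{ secrets.DOCKERHUB_USERNAME }}
              password: ${{ secrets.DOCKERHUB_TOKEN }}
          - uses: docker/build-push-action@v2
            with:
              context: backend
              push: true
              tags: yourdockeruser/project-backend:latest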
There is another thing you could basically do: add another step that also automates the deployment, running docker-compose remotely from the workflow — but then you need to set up the SSH key in the CI environment. "Because what you just showed us looks like the new, GitHub Actions version of our buildout recipes, so you can just say FROM the Docker image." Wait one second — I am going to sound like a prima donna, but I am afraid we could not really hear you. One thing I plan to sprint on during this conference — I am passionate about the idea of leaving buildout in the past; buildout was amazing, it saved us, but everybody else has moved on — is a GitHub Action you can use that already sets up Plone with the testing environment, so in your add-on you basically say "set up these versions for me", it does that, and then you just run the tests for your add-on. The whole story of running Plone with pip right now has two points that need improvement. One: we have really, really awesome buildout recipes that we need to port to something equivalent. Over the years, because I was not able to rely on the code-analysis recipe, I kept a really strict flake8 configuration that would do almost the same job, but it was missing the complaints about plone.api — "you should use plone.api here" and so on. The second thing is that we are missing some of the helpers to run the tests: instead of just bin/test, you have to call the test runner with --auto-color, --auto-progress and --test-path, and then it runs. That is the tricky part — if we start working on that, we can provide everyone who is not a core developer with a decent story about how to write and test add-ons with Plone, and that moves us forward and brings new blood into the community. Yes, and I am rewriting a buildout recipe to keep Jenkins and the rest alive for now. In general, things I strongly recommend after the training — and it is almost over — are to play a bit with other scenarios: have more than one box in the Vagrantfile and say, in one box you set up the frontend, in another just the Plone instances, and a third one with Postgres. In the training documentation — I believe it is the "intro to the Plone stack" section — I start documenting the ideas behind this: we have a web server, a Plone frontend, a Plone backend and a database. That last part is not strictly needed in every case, but I strongly recommend you always set up an external database, because it allows you to do things like running another container just to generate a backup while the instance is running. And I rely a lot on — older Plone developers probably do this on a daily basis as well — starting the instance in debug mode and playing around to find out what the hell is happening.
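The test-runner invocation mentioned earlier in this answer is roughly this — the binary location and test path are assumptions:

    ./bin/zope-testrunner --auto-color --auto-progress --test-path src/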
So, in here, the basic setup would be: web server, Plone frontend, Plone backend, and a database. Also, another thing: we provide a ZEO image, so if you want to test this, it's available here. Or actually, I'm sorry, I'm wasting my typing skills, because for the Plone Docker images right now, for Plone 6, we are recreating the images. And then you have the Plone frontend, you have the Plone backend, and you have Plone ZEO. For the Plone backend, I want to add some of the configuration options here. So we have the old dev version, we have 6.0 alpha 1, and we have versions with Python 3.9, 3.8 and 3.7; the idea is to also add Python 3.10 soon. How do you run one of those? If you want to do a ZEO setup, it's as simple as setting the ZEO address environment variable in your docker compose file; the ZEO service here looks like this. If you want to do that with RelStorage, we do the same thing we are doing in the project: you can extend the image, build your own image, and there's a list of possible configuration variables. And here I'm going to say, I want to test Plone, I want to test my add-on with Plone 6: you have an add-ons variable that takes the names of the add-ons, like a list of strings that is going to be passed to pip install at startup time. And you should use that only to test stuff, never in production, because otherwise, every time you start your image it goes, oh, now I need to install this add-on, I need to install those, and it goes and installs everything. And the beauty of this whole container world is the fact that you can create one image today and say, okay, this image is like this, it has everything, all the dependencies, you can start it; and then, oh, I upgraded one version, I generate another image, and another image, and so on, so you have many versions and you can roll back whenever you want. If during startup time you simply say, oh, now we install the new version of this add-on, you're losing this possibility, you lose one of the biggest selling points of containers. So, many variables: you have debug mode, the security policy implementation, verbose security, the default ZPublisher encoding; for ZEO you have some variables as well, the shared blob dir, read-only client, client cache size, drop cache rather than verify; and for a relational database this image by default supports Postgres. If instead of Postgres I want to support MySQL or Oracle, you extend the base image, do the pip install for the driver, the package that you need, and I would strongly suggest that you get a binary version of it, and then just do the configuration you need, right. This is how you set up RelStorage with Postgres. It's quite simple, it's okay simple now, right. We do not try to cover every possible use case, we do not try to do a lot of configuration on the fly. But if you say, oh, I'm not happy with the amount of options, what you can do is simply override the existing configuration files, right, because for every one of these versions, like 6.0 alpha 1, you come here and you have a skeleton where the configurations are, so you can look at the etc folder and then the ZEO and RelStorage configuration, and they leverage the environment variables that you have. We try to avoid parsing the file and rewriting the file; we just set the environment variables on the Docker image and it should work.
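To make the ZEO variant concrete, a minimal compose sketch; the image tags, the ZEO port and the ZEO_ADDRESS variable follow the plone/plone-backend and plone/plone-zeo documentation as I understand it, so double-check them against the image READMEs.

# Write a minimal docker-compose.yml with a backend plus a ZEO server.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  backend:
    image: plone/plone-backend:6.0.0a1
    environment:
      ZEO_ADDRESS: zeo:8100
    ports:
      - "8080:8080"
    depends_on:
      - zeo
  zeo:
    image: plone/plone-zeo:latest
    volumes:
      - data:/data
volumes:
  data:
EOF

# Start both services in the background.
docker compose up -d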
If this is not working for you, just override the configuration and it's going to work without any problem. Okay, so Plone backend and Plone frontend. If you want to see how the frontend works, you can look at the frontend, the frontend local image, and in here we have an example that's pretty similar to the one we are using, a basic setup for Volto. That's using an older version... yeah, damn it, I need to update that; I fixed it and didn't push, which is even worse. But that's it, that's how everything is set up. Questions? Now is the time. Wait, wait, Fred asked: which container service to use, in three minutes or less. That's the next step; right now this is back at the ground level. Okay, so then, do you go with a vendor, yes, or what, or your own servers? That's, I think, the question for most people with deployment. Yeah, this is still the first layer. Yes. And for this site, they wanted to migrate from Plone 4 to Plone with Volto. And first of all, thanks Philip, thank you Fred, for collective.exportimport, that saved the data, migrating everything from one place to the other. But they run on a $5 droplet on DigitalOcean, they're not going to change that, and they are the ones paying; that's why this setup starts really simple. Now imagine you are a company that hosts many, many services running Plone. First thing is Kubernetes, and which provider you want: if you want Amazon, it's EKS, right, EKS, because ECS is the Elastic Container Service, which is not Kubernetes; or you could go to Google, you could go to DigitalOcean, which has one, or Azure. Right. We do not yet provide a Helm chart for our setup; this is probably something that we should do over the next few weeks. And then you basically deploy Plone the same way you deploy WordPress, right. But first thing: I would not use a containerized version of the database, I would get a managed service from any of those players. Even though we are not discussing this level, in my previous life I had a company that was invested in by the same VC, and they proved to me that running Postgres inside a Kubernetes container had a huge overhead, so they moved to their own setup. Right. Okay, I have no idea what's happening with Kim, I'll go back there to see. So, first thing is, you have your environment, you set up Kubernetes. And I heard a lot of good things about Rancher. So, Peter Holzer is using it in production... oh wait, no, he's using pure Kubernetes, forget that. But I have friends using Rancher and it makes life easier, so it's not a big deal, you can set this up on top of Kubernetes. In the past, I used a solution that doesn't exist anymore because Microsoft bought them; I'm going to see if I can search for it here. That was the one I gave talks about at many Plone conferences. Okay, I accept. So here it is: it was an open source, Heroku-like solution, Deis Workflow. This was the solution I used. And basically what happened was, Microsoft bought the company that developed this, then they went to the Azure team, they implemented a bunch of new stuff on Azure, and this project was forked with a different name that I do not recall. But this was the easiest way for you to start your own Kubernetes cluster a few years ago. And if you go back in time and see other presentations I gave, probably 2017 and 2018, I mention the service that was the way to go at the time.
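Since there is no official Helm chart yet, this is only a hand-rolled sketch of pushing the same images to a Kubernetes cluster; the namespace, names, replica counts and image tags are all placeholders.

# Create a namespace and run the backend and frontend images as deployments.
kubectl create namespace plone

kubectl -n plone create deployment plone-backend \
    --image=plone/plone-backend:6.0.0a1 --replicas=2
kubectl -n plone expose deployment plone-backend --port=8080

kubectl -n plone create deployment plone-frontend \
    --image=myorg/myproject-frontend:latest --replicas=2
kubectl -n plone expose deployment plone-frontend --port=3000

# For the database, prefer a managed Postgres from the provider and point
# the backend's RelStorage configuration at it instead of running it here.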
So, more questions. How do you collect the logs? Because... go on. Yes, usually when I have to work with Docker in production, it's quite difficult to handle logs in real time, or to get the logs the same way you have them with your bin/instance or your ZEO client. Yes. So you always have docker logs; it's not the perfect answer. One thing that the image had, and it was changed a few days ago, was the fact that I was keeping the logs in the same place we used to, in /var/log and so on, but now they are being streamed to the console. Now, going back a few lifetimes, a lifetime ago: I would suggest that you set up some kind of log consolidation service and stream the logs to that service. Of course, when you have just one small client it makes no sense, it's overkill, but the moment you have more than one, or you have many containers running, set up something like a Logstash, Kibana, Elasticsearch solution and stream the Docker logs there. Then you have access to everything and you can tweak whatever you want and so on. Also, I can strongly recommend, if money is not a big issue for you, to take a look at Datadog. Datadog offers a log consolidation solution on top of their APM, on top of their error handling, so they offer the best package that I have worked with so far for container applications; in my previous company we had Datadog collecting the logs for the entire Kubernetes cluster, and we even implemented some code, which I cannot share with you, that's part of my deal with them, in Plone to push additional information to Datadog about the request and so on and so forth. Thank you. Oh, and I see someone shared that Docker also supports sending logs via syslog. Perfect, so that's another way of handling it. Oh, cool. Yeah, we use the Elastic stack for that, or we used to. Yes, I am a fan of their products, and yeah, I am one of those people who say they are the right ones in their fight with Amazon, even though I did not like their change of the licensing and the release model and so on. Also for Elasticsearch: one thing I tested a few months ago was collective.elasticsearch. There are a couple of solutions there; there's one that is a simple replacement for the catalog, and there's another one, developed by Jens Klein and Peter Holzer, where they broke it into two different pieces. As a solution for me, I used the other one, the simple one, and in the same docker compose I was also running Celery and Elasticsearch, to provide the solution of using the catalog search on Elasticsearch. So, yes, for me it's very important to have the same way of working with Plone when developing and also when deploying, because that is the learning curve for my new colleagues. It's something that we currently have with buildout: we develop the same way locally and also in deployment and in testing and some other environments, because we just run the correct config files. Thanks to mr.developer we can put pdbs all over the place so we can debug and so on; with this Docker based setup I would like to have the same story for developing. Take a look at what I'm showing now, it's the Plone conference website, right.
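Picking the logging question back up before going further: the two options mentioned, plain docker logs and the syslog logging driver, look roughly like this; the syslog address is a placeholder for your own log collector.

# Tail the logs of one service from the compose stack.
docker compose logs -f backend

# Or run a container with the syslog logging driver, streaming to a collector.
docker run -d \
    --log-driver syslog \
    --log-opt syslog-address=udp://logs.example.com:514 \
    --log-opt tag=plone-backend \
    plone/plone-backend:6.0.0a1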
And again, about that repository, the problem right now is the fact that I did not yet finish the cookiecutter solution to create the folders and this structure, including the name you see in here, ploneconf.core, which has a setup.py; if you take a look, there's no buildout configuration here. And then going down, that's just an add-on, yeah, it is just an add-on that has the policy, and then the tests are in here. That's something I want to achieve too, because until now, every time I ran Plone with Docker I failed to use mr.developer and things like that because of some permission issues and so on; it was very hard to explain to my colleagues how to work with that, so we did the Docker way of working because of this. So I will definitely have a look at this, and take a look at how it works in here. Yeah, there is no mr.developer, it's just a plain package. Oh, and actually you see in here, this is the default setup that we are going to use inside Docker. And in here we have two things: we have make build, and we have make build-dev, because they behave differently. So going down here, I'm going to increase the font size, because sorry guys, I'm getting old. So make build-dev: same thing, but with requirements-dev.txt. What's different here? You can basically put a pdb, or actually not pdb anymore, you put a breakpoint; it adds what is needed to get the tests working, plus black and isort, and this is where I'm also going to add flake8, and flake8-docstrings and so on, and it's going to work. And when I run this in GitHub, if you check, there was a target that was make lint. Just so you understand that they are separate: for development I install black, isort and friends, and I install the code package in editable mode including the test extras, because then I can run the tests; but for production I can just install the code inside src without adding any extras and without it being editable. Okay, it's a different approach, simply. Here we go, one question. Yes: so with this requirements-dev.txt, that's just doing the pip version of mr.developer; you could also put a git repository in the requirements. This is just... the thing we've learned now is that for policy packages you should maybe just store them in the project and not make an extra add-on that you have to release all the time. And one of the things is, I created src here and then there's the policy package inside; but for other projects, if I had additional things that I'm working on, for instance an additional add-on, I could also put it under src, or I can simply store it as an editable requirement and it's going to be available for me. So I'm pointing out here that I wanted to be explicit: the difference between build and build-dev is the requirements file that's being used. Okay. In the past I had a live and a dev file pointing to different places. And here, make lint is going to do lint-isort and lint-black, checking the code in the path, isort with check-only here; if it fails, the lint fails, and then we basically say that this is not okay. But I also have a make format that makes this beautiful and fixes all the code with black and isort. So, the conference site. I wanted to show you something that, again, lazy, right: at the root there are targets to install, to start the frontend, to build, and to create the site.
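To spell out the difference between the two build targets described above, this is roughly what they boil down to; the policy package name follows the layout described in the talk and the exact pins live in the requirements files.

# Production-style build: pinned requirements plus the policy package from
# src/, installed normally, no test extras.
pip install -r requirements.txt
pip install src/ploneconf.core

# Development build: the same pins plus the dev tools, with the policy
# package editable and its test extras so the test runner and QA tools work.
pip install -r requirements.txt -r requirements-dev.txt
pip install -e "src/ploneconf.core[test]"

# The lint and format targets then come down to:
isort --check-only src && black --check src     # make lint
isort src && black src                          # make format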
And then there is make format: if I run make format at the root, it applies all the settings I want to both the API and the frontend. So if I want to have a pre-commit hook, this is the thing I'm going to run with pre-commit. Just let me check, I had something else I wanted to share. And in here we have a script, create-site, that basically creates a new Plone site already with the profiles I want; and the profiles, as you are aware, require other things and so on and so forth. And also, in this case, I believe if the site exists, I delete it; I'm quite aggressive, especially during development: create the site, delete the existing one, there's a new one, and I can play with it. And I plan to have something similar to that in the official Docker image for the backend, so you can run the creation of a site from a Docker command if you need to. Okay. And in here, for those who are not used to the new way of working with pip: the create-site script goes down to bin/zconsole run, passing the Zope configuration you're using, and then the script. If I wanted to run it in debug mode, it would be zconsole debug here, and I can basically work, it's the debug mode. So, except for the fact that we used to have nicer scripts thanks to the recipes, and the fact that we do not have the code analysis implemented yet, we can work. More questions? Thank you. I would ask you all to stay in this channel during the conference, because I'm going to implement some improvements in the training documentation and the base repo, and I'm going to post here saying, oh, there's a new version, and so on and so forth. I would love your feedback as well; you can write me here, you can send me a direct message too, and any feedback is welcome. Okay, basically that was it. I'm glad to have had you all during this day and I'm looking forward to seeing you all during the Plone conference. Bye bye. Thank you. Thank you.
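As a footnote to that part of the demo, the zconsole calls behind the create-site script and the debug prompt look roughly like this; the paths and the script name are placeholders for the ones in the repository.

# Run a script (for example the site-creation script) against the instance,
# passing the Zope configuration in use.
./bin/zconsole run instance/etc/zope.conf scripts/create_site.py

# Open an interactive debug prompt against the same configuration,
# the pip-era equivalent of "bin/instance debug".
./bin/zconsole debug instance/etc/zope.conf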
Topics: * Create a new project for Volto and Plone * Developing and testing locally * Using a CI to create images for deploy * Web server configuration
10.5446/55965 (DOI)
Okay, welcome back to the conference, the Plone Conference 2021. And it is our fourth day of talks. And with me now is Will Gwynne, who is the manager of web communications at the Purdue College of Engineering, a slightly well known university institution in the US. And Annette Lewis, who is my colleague at Six Feet Up, she's a rock star developer. And today they're going to be talking to us about a Zope to Plone migration for the Purdue College of Engineering. Take it away, Will and Annette. Hello, and welcome to our talk today from Zope to Plone, thinking user first during migration. We already had introductions, but I'm still going to do it again. So once again, my name is Annette. I'm a senior Python developer at Six Feet Up. I got it started with Pone in 2013. And I've been running since wildly with it ever since. So lots of fun. And then today I have a co-presenter with me. Hello, my name is Will Gwynne. So I'm the manager of web communications for Purdue University's College of Engineering. My background is in UX UI design. So I'm definitely used to looking at the front end. But in the last few years, I've gotten more into project management. So I manage the Pelogist team and a lot of our digital marketing and communications initiatives for the college. So I'm really pleased to be here with Annette today presenting on this topic. So getting into it. Go ahead. No, go ahead. Okay, so jumping right in here, we have the, we had this, the Purdue College of Engineering has this massive migration project that really needed to be done really for the last several years. So one thing, you know, you'll, one thing you find out about working in higher ed is that higher ed tends to move a little slower. People get really used to the things that they work in. They resist change actively. So it's always a pretty big undertaking. Anytime you have a big project like this one where you're migrating out of one content management system into a new one. So we have, we have this, we have this current CMS. Go ahead and go to the next slide Annette. We're in a current CMS called Zope and it's, it's Zopes built in, built in Python. It's, it's, it's got a lot of really, really positive good features that we, that we use at the college. It's really our entire college infrastructure is built in Python using Zope as sort of the framework. So one of the big things we need to be able to do was, was maintain that, that Python framework. We really had so much of our web apps, a lot of our, a lot of our, you know, tools and things that our college uses on a day to day basis. We're all Python based web apps. So that was one of the biggest things we had to focus on in moving to a new CMS. So we introduced Python to the college back in 2001. And because of, because of the version of Python we were using, we were seeing a lot of security concerns come up. And that was one of the big drivers for, for change. One of the big motivations we had to get out of Zope and into Plone. And obviously that there were a great deal of modern, modernization issues. So there were a lot of things in Zope that, you know, our current content editors, a lot of the folks that work in the college on websites and manage websites, they were having to utilize, you know, code copy and paste code. These are people who are not web developers, they're not web designers. They have no background. We weren't about to try to teach them all of these different languages. 
So we really needed to provide a way, a really easy way for them to be able to manage and maintain their websites. So that was another really big driver for change for us. Go ahead and go to the next slide there. And that, so kind of looking at the numbers behind this entire project, when you look at the Purdue College of Engineering, it really is kind of a, kind of a mini university in and of itself. It's, it's massive. When you look at all of our sites, we only really had 15 content editors, which, which isn't a lot when you look at how many subsites we have, more than 40 public facing subsites. So there's a lot of, there's a lot of content sharing, a lot of double dipping here. There's a lot of, there's, there's, you know, individuals managing multiple sites. Although sites consist of over 30,000 pages. So, you know, it really is, it's, it's quite a, quite an undertaking when you're, when you're talking about migrating from, from one CMS to another one. Those 30,000 pages really consist of 20, about 25 departments and units. In the college, there's a, there's a slew of, of other research labs and, and faculty websites out there as well. So it's, it's a, it's a massive undertaking. And it's in this one that we, that we really had to be very deliberate about from, from the very beginning. So go ahead and head to the next slide there and that's, so we really, we sat down at the table. We, we just started, we started to discuss, okay, what's the best solution? If we had to start all over again, what will we go with? And that's, you know, that's where a lot of, a lot of time and effort and a lot of discussions were had. So we worked with our IT department that's called ECN, Engineering Computing, Computer Network. They are sort of the, the, the masterminds behind sort of the infrastructure and what, you know, the different servers and the security and everything that goes into what we've been using in Zope for all these years. So we, we all agreed at the table, we really need to look at modernizing our current CMS. So we agreed that, you know, staying in Python was really the best choice because so many of our web apps, like I mentioned earlier, are built in Python. We, we rely on it almost exclusively for, for so many of the, the inner workings of our college, a lot of faculty depend on it to sort of have web apps for research. So we really need to stay in that, in that environment. So we looked at, we looked at Drupal, we looked at WordPress, we looked at some of the PHP based solutions out there and we just decided that that's just going to be, that's going to become too much, too much of an undertaking to try to change our environment like this. So that's when, that's when Plone entered the discussion and that's where we really enlisted the assistance of 6 feet up to help us kind of work through some of those infrastructure issues. How do we actually get this thing up and running? What are some of the concerns? What are some of the challenges going to be? This all started way back in 2018. So, you know, there was a lot of just preparation, a lot of discussion right from the get go. Now that we have a basic understanding of kind of where the project came from, the first question you always ask is how do we move from requirements to actionable items, especially in this case, which is such a large entity, we wanted to minimize the effect on users, we want to improve the user's experiences so we could, and then that means I really have to ask me a question. Who are my users? 
Not just that, but who are the users of the client ultimately down the road? Because I'm trying to make products for Will and his team, but eventually it's going to be the content editors down the line that really need to work with this. So, looking at that, we start by identifying the current challenges and we kind of broke that up into some big broad groups, just like the user editing experience is something that we could hear clearly was a problem for them. And that with the users having to blindly paste code into templates to try and build their pages and using a code editor for formatting text, we knew Plone was a great fit for that, which is rich text editors. Also user management and user permissions, because right now they're managing users almost locally to areas. So if you had to find a user, you have to go find that user in that area. So a decentralized user list is difficult, but Plone has a great user permissions management story. And then just the infrastructure alone, that you're working with an aging system, and you have excellent in-house expertise for what you have right now. And it's really important to us that you can maintain the stack and that you're familiar and you can take ownership with it if you need to. So that introduced a lot of challenges during this whole process. So one of the things we really had to keep in mind, really midway through this migration effort in working with 6 feet up, we had a Purdue University Central Marketing Office initiated a rebrand, really a complete overhaul of our entire brand standards. So we had one look, one front end look that we were going with from really the beginning of 2016 that had to be changed just from stem to stern. So we began the process halfway through this migration, trying to adapt the templates, the front end look of everything we were doing in Plone, to match those brand standards. So there had to be a way to seamlessly blend all of the subsites that were using the old look, the old template, into the new look, which was what our parent site was utilizing. So all new colors, all new typefaces, really everything from top to bottom had to be overhauled. So that was one of the big challenges. It's just how do you deal with a rebrand from the university midway through a migration when you've already established all of your looks of all your templates. Another challenge we faced was converting all of our content types from Zope over to some reciprocal or congruent content types in Plone. There were just, I mean, there were countless content types we were using in Zope over the years. Like I said, we've been in Zope since 2001. And so how do we try to marry those up with what Plone's doing? So there was a lot of discussion and planning around, okay, is this going to be a one-to-one transition? Is it going to be seamless? Is there going to be anything that we have to move on from? Is there legacy stuff that we can bring over with it? So a lot of challenges around that. And then obviously just the user experience. So one of the big drivers I mentioned earlier behind this migration effort was trying to give our content editors a modernized interface to work within. Something that doesn't require them to have to use to no code or to use code. So, you know, a lot of the, every step along the way, we were trying to think about the end user. You know, what is it they're going to do? How are they going to utilize this tool? Is it visual enough for them to be able to know how to use it even without any instruction from us? 
So that's always a challenge. So how do you, you know, when you think through how someone without training is going to utilize your platform, is it going to work? So from here, whenever I'm dealing with such a big project, especially since we've got many living pieces, it's an active production website. There's many stakeholders. The first thing I like to do is distill it down into a few broad categories. So I can look at my requirements and say, okay, this is the essence of the project. So with this scale of a project, I would think that comes down to accessibility, usability, flexibility, and security. And what these broad categories really are is just the purpose and essence of the projects. So once I get down into the technical requirements, I don't lose sight of that when building the system. And I'm really continuing with that user first development style. And of course, best practices are always a given. But I kind of use these as my focal point when I'm starting to develop any solutions. Now, of course, the next thing about that is giving chances. Developers love to craft beautiful systems. We want cutting edge, modern technologies, solid architecture. We want the best of the best. Looking at this especially, the absolute best might not always be the right answer. And that's where collaboration becomes key. We always like to communicate. So like, whenever we saw something, we said, well, here's an idea of some way we could do something. Let's explain it. Let's talk about this. And sometimes we get an idea and we're just like, maybe that's not the best. It's okay to say no, but we always would have an alternative solution ready and just kind of work to get through what's going to be the best solution for everybody involved. Never ignore best practices, though. But that open communication, especially with Purdue and engineering, really helped us, I think, come up with even some better solutions. Still, we always think. And I like to ask myself every time, like, is it a best practice? Is it a bad practice? Is it a dangerous practice? Always going to avoid the last ones, always aiming for the best. And sometimes we land a little bit in the middle there. But if the users are having a great experience and it makes them feel like they can take ownership, then I think we're doing a great job. Yeah, really, I really want to underscore this because, you know, like I mentioned, you're going to see kind of a recurring theme throughout this presentation about, you know, collaboration, discussion, kind of sharing ideas, transferring ideas. And that was such a crucial aspect of this entire process was, you know, really coming out of our silos as a college, especially our IT group and our communications group, and having someone at the table with us to be able to kind of demystify a lot of the things that we didn't know. In a lot of cases, we didn't know what we didn't know. So having six feet up there, having a net around, having Chrissy, having everyone there to be able to say, okay, here's your best bet. This is what people are doing in today's industry. Here's the standard. This is what you need to look at. And so that's such a crucial aspect of this. I can't emphasize that enough. So, on to our actual development goals. Since we're going soap to the clone, it could be really easy to think these are pretty similar, you know, soap is on the back end of the clone, we can just go from here to there and it'll be okay. 
But we really wanted to make sure that we were doing things in a clone-like way for whatever we were transferring. And, you know, technology is always evolving. So the next thing is if there's something that's being done right now, and so how can we bring that to the best practices of now? And always, always, people aren't going to read. If you have a huge document, they're going to try and skim and get to the base of what they need to do quickly. And that could be even more overwhelming for someone who's not technical or code-based. So we tried to make it intuitive, especially because people have their biases. I like to say, if you pick up your toothbrush with your right hand, every day, if it's on the left, you're going to reach for your right first. So when we were making the new systems, especially since they were existing systems that your users were familiar with, we were going to try and make it intuitive so that it makes sense. And then make sure that if anything did have to be changed to be vastly different, we tried to make sure it was definitely going to be an improvement. Like, let's not overturn everything and say, well, here's something you're stuck with, let's try and make it better. So they think, this is a great experience, I like this, I like this new way of doing things. And to get to that stage, as a developer, I often have to think in many roles, not just me as a developer, or even as well as like the facilitator on their end, but really the system administrators, the content editors, the site administrators, and the technical support staff who have to support that stack, we want to give them something that they can all use. So now into actually crafting some of the solutions and some of the things we put in place. And we did so many interesting things with this site. So it was tough, like Will and I went back and forth and tried to pick out, what are some cool things that we did? And even just in that conversation, some of the things I'm like, this is the coolest thing ever, which were just technical. It was amazing, some of the simpler things that I take for granted that really benefited Will and his team. Yeah, you know, in a lot of ways, this process of looking at the solutions before we moved forward with them, it felt almost like going car shopping. In a way, it was like, you know, you get that excitement of going to a dealership and you see all the fancy new cars, you see all the features associated with them, you got, you kind of, your eyes kind of glaze over because you're like, what do I go with? There's so many opportunities here, so many ways I could solve this problem. And so like looking at these solutions, like, you know, Mosaic, Diazo, there's all these things you can do that will solve your problem in infinite different ways. So, yeah, you kind of get that excitement built up. That's when things are to get really interesting, is going through some of these solutions with 6 feet up. So we picked just a couple. We talked, I thought the content migration was really neat. We, of course, Diazo was key to helping to port an existing theme into Pone, especially because it was still changing. Mosaic, which I think was one of the real superstars for the editors and being able to have that through the web, with the experience, and then the sandbox. So we're going to dive a little bit deeper into each one of these, starting with content migration. And the challenge about this is we weren't moving everything at one shot. 
We needed to be able to pull pieces of this, but they were changing. I couldn't just pull one export and then say, just pick up a piece. So we need to be able to move the subsites one by one as needed. So we ended up modifying some of the migration codes so that we could target specific areas and folders of the site and just export that. Then we also had the pipelines to translate the content and the images and everything that they had existing into new clone content types. And then I think one of the really cool things that we did is we had it so that you could re-import content over existing content non-destructively. So if we had to redo all the images or something, we could just delete all the current images in the staging site and just re-import that content, which really did come in handy. And of course, we had to rebuild a couple structures by hand, but that's a given because that was the improvement in the modernization of what we were doing with this. And then theming, which was quite a challenge and that was about retrofitting blown into the existing theme. Thankfully, that's exactly what DIAZO is meant for doing. So we were able to use DIAZO and XSALT to do a lot of transformations on content to make it really look like their current brand. But we also had to think because they did change their brand halfway through with new assets, new colors, new everything. And thankfully, we didn't theme ourselves into a corner. What we do is we have one file that's like the brand prod truth file. And then we've got some other files for like migration fixes, just to mitigate things that need to look more like clone, and then anything else that we had to add. So that keeps these separate. And that's so if engineering ever just had a whole new file, we could at least just drop that in and then still just do smaller adjustments for the clone part. And that really helped us to implement the new features in clone in a clone like way without getting stuck in a corner and saying, oh, we need to go change this file and then having to figure out what differences that we make down the line. Yeah, that's, and that's going to get more into that here in a little bit. But that's, that's very true. We really truly have a living theme. The central Purdue University marketing office is constantly introducing little changes to the web templates that they're requiring all the other colleges to incorporate. So, yeah, we're very much making little tweaks and changes along the way. So this this diazo solution really, really kind of makes that no sweat for us. It's not a problem to sort of change everything globally at once. There's really no headache involved. We also had to answer a great question, which is how do you let 40 plus subsites own their content while staying within visual brand brand guidelines, but make little tweaks here and there. And Purdue had an interesting solution in place, which is their local dot CSS. So each little sub site could have its own local dot CSS file. So they could change some minor things like little fixings and spaces. That is not a plone thing at all to begin with. And that was one of the, do you really need this? Can we change this? Can we mark this? But in talking with Purdue, we found out this was really important for engineering to have that. So we ended up coming up with a really interesting solution. And Chrissy gets this wherever you are. Thanks Chrissy. But we recreated that feature through the web in the ZMI. 
So we added a custom browser view called local dot CSS, which is great because since it's got a dot CSS ending and it's got the right mind type, browsers interpret it as CSS and we'll catch it. And then we have an action in the menu for each page where you can edit this local CSS and it can even inherit CSS from the parents. So what that ends up looking like is this page right here. So we have this local CSS. You can view the inherited CSS. You can type in your new code. And then that makes it really easy for the users locally to make a couple of mile changes that they need to without having to go through an entire release process or trying to specifically get just one section of the site since this is all in one big site. Yeah, this a lot of the big driver behind this, this local CSS solution, this is, you know, this is kind of what you get into working in higher ed. You have a lot of different department heads. They have a lot of people, a lot of different people in charge of subsites. And so, you know, try to keep them all within brand, you have to meet that basic requirement. But then beyond that, there's a lot of little tweaks that not all department heads agree should be the way their site looks. So you need to have that sort of granular control over little aspects of a site, like maybe sometimes, you know, this department head wants a different shade of gold for that heading. So you have the ability using local CSS to be able to make that tweak on a site to site basis. So that's that's been a big help. And another thing that we ended up adding is we have all these subsites and they needed to be able to do some type of control of some of the things, particularly the navigation. As we started going through, there were different types of navigation menus. So we needed a way for them to switch and then site header text and just minor things. So our solution was a sub site settings control panel. And since these are all lineage sites, we were able to use lineage.registry, which gives you a local clone app registry for each site. And then we could have customized the subsites through that with a little panel that we made. So here's an example of the panel. And you can see we have the managed subsites properties, that's right at the base of the site. And it's only at the very base of the site. So you don't get to it by accident. And then once you go into this, you could set things like the navigation menu. So the first example I have up top is the engineering typical menu, what they would have across sites with that also use a dynamically generated menu. So this is just like clones typical menu. And on top of that, because sometimes you need that extra layer of customization, we have the manually managed to soft structure. So someone who's really knows exactly what they want could actually put in their HTML directly for their menu. And that would populate up in the menu bar. And we're using Diazo to make those switches along with that registry value. Another thing we can actually do is set the parent text for the banner. So if you needed to set extra text here, and then you can actually link that to a custom destination if you needed to. So that's all through the web at a site level that site users can manage this and get some more customization in their site. Yeah, this is huge because, you know, before in Zope, our content editors had to use, they had to rely upon code and insert properties into the site using code. 
This took all of the effort out of my team to be able to train content editors who had no prior experience in using code to be able to select a menu, a drop down, pick what they need, and move on, move along. So, you know, there is still that solution for someone who knows code, they can manually adjust their navigation menus if they want. Great, we have that option. But this takes away all of the effort that we used to have to try to manage when setting up a new site. And then the other thing we put in is Mosaic, which was an easy choice once we looked at the structure of how the engineering pages were put together, especially with each little subsite needing its own homepage. This was an easy choice here. And I'll start. Yeah, so Mosaic is something I get really fired up about because, you know, going back to the car dealership metaphor here, it's like that this was like the Ferrari in the showroom, right? This was the thing that got us really excited about this move to Plone. Mosaic solved so many problems. I could probably do an entire 45 minute presentation just on Mosaic and how good of a fit it is for the College of Engineering, but just to put it onto a nutshell. So, the way our content editors were asked to build their landing pages was through sort of a library of code snippets. We call them blocks. And so we would ask them in training to build their landing pages, go to this site that shows all the blocks, copy and paste what they want, and then, you know, that would be sort of the way they build their landing pages. All done using a code editor. Nothing visual at all about it. So what Mosaic allowed us to do now is to actually take away all of these code blocks and just instead put it into a really easy, intuitive, simple editing interface, drag and drop, selecting from contextual menus, having a way to do this without any code at all. And so that's what Mosaic was able to do for us. We can through the web, drag and drop, build out a layout, rapidly prototype something that would have taken several hours in the old system, you know, copying and pasting snippets of code. Now it's not only is it easy, but it's actually kind of fun. So now we have content editors who are just having a blast building out different landing pages, sub-lending pages. It really is a joy to use. And it's something that we really kind of hang our hat on now as we as we've moved into the plone into the plone CMS. That's one of the things that gets our content editors most excited. They can't wait to get in there and use it when we demonstrate it for them. So when we look at the next slide here, this is really kind of a this is an example of one of the code blocks that we would ask them to copy from our block website, our block library into the code editor. And so this is like a basic, you know, homepage banner. This really isn't even the most intimidating looking block. We had some that were pretty gnarly looking that would go, you know, dozens of lines that we expect them to be able to, and they have no idea what they're looking at. They're copying blocks of code, and they're not sure if they're supposed to take the whole thing or just a snippet of it or portion of it. So this would cause a lot of problems for us and raise a lot of questions. And so when you look at the next slide, you kind of get an idea of what it is they're building. What's the actual front end at the end of the day? This is, you know, this is all just Python code and how it translates into a landing page. 
And so, you know, that's the idea of what you ideally get from working in Zope. And then we go to the next slide. The reality is this is what they're working with and it's a code editor. And so there's that's kind of a really good representation of what it looks like to our content editors. And again, these are these are not web developers, these are people who are just trying to do their jobs and, you know, web design development is not a part of that job description. So we get a lot of problems, a lot of requests for help. There's a lot of things that break because they just this is not their day to day. So Mosaic really, you know, it solves this humongous problem of ours just taking away all that code, putting it in a really elegant easy to use intuitive interface and making it actually kind of fun. So I'm going to let Annette get into the demo. Yeah, I'll admit even looking at this page, I've looked at their code before and it takes some reading to figure out what does what even with the guide. So I couldn't only imagine what it felt like for a non technical user to say, well, I'm just putting this here and hopefully this shows up on the page and everything's okay. But that's exactly why we put in something like Mosaic and I have a little demo of a page that I built here. There we go. So here is a Mosaic page that I made up with in the Purdue template and just kind of that the features that I've got. And this is almost a toss that we were able to create them. And so I'll go ahead and edit this. And so this is a banner tile. So now when they need to put in the banner, all they have to do is insert banner image, and they just pick where their image comes from. So I have like an images folder. They can add a title and they can add a description. And it comes in different styles. If they want the links underneath, we have this handy link. So this is like the one place they'll have to know some code, but you can insert banner links. And it actually comes with a starter template. So you can just copy and paste like this is a small chunk of code that you'd have to edit as opposed to a whole page. And then that's the banner links at the bottom. Once again, the rich text tile was really big because, you know, it's so easy to do. And it's like a word processor. They can highlight something. They can bold it. They can italicize. They can center. They can do all the things that maybe me as a developer might be horrified by. But you know what? That's their choice. They can format this in the rich text. And that's a big change from having to type things and format like paragraphs and links and all of those little bits for code editors. And we want them to make content. That's the purpose to make great content. Some other cool blocks that we have is we have the content listing block. And I'll just open this one here. And content listing is already like a default mosaic block. But what we have on these is we have different views. So they can change the view depending on what type of content they're trying to list. So this is curated events view. But let's say I had a conference and I wanted an agenda view. I could change that. And there we go. It's got a whole new look just that easily. So it can really go through and replicate a lot of their existing styles with just a drop down menu. And they can even control what fields are listing in that view, which is great. So you can have news with a date. You can have news with just titles. You can just list files. You have a lot of power. 
You have a lot of flexibility with the different types of content you can list. Of course, for the people who love HTML, we've still got this raw HTML tile where you can just put in your code directly. And this is great if you have embeds from other sites, like Twitter embeds or anything; they have the ability to embed that on their site, and we don't have to go through a code release or go to the back end, so that's a great way to do that. And what's even more interesting, when I cancel out of here in case I did anything crazy, is the display view: they have the full width mosaic view, which is great if you're doing a homepage. But let's say I'm in a regular page and I need access to these tiles; they can just use the typical mosaic layout view. There isn't as much fanciness on a normal page, but it allows them to use news blocks or people listings, all within the context of their page, without having to deal with all these different display views and remember that this does this and that does that other thing. So that's some of the fun stuff we did for them in Mosaic, and I know there's so much more; as we come across different requirements in the site, we add tiles or change layouts so that they can get more and more features. So one of the things I touched on earlier was, you know, just the resistance to change that happens sometimes, really in any context or industry, but especially in higher education. So one of the things we had to come up with was a way to kind of alleviate some of the anxiety around changing CMSs, especially going from a really old CMS like Zope into something new and exciting like Plone. So one of the solutions we came up with in working with Six Feet Up was a sandbox, and really the name kind of implies what it does: it becomes kind of a safe space for our users to try things, demo something, try out Mosaic, demo one of the plugins, try things for the first time without any fear of messing anything up. Because, you know, one of the big problems we have in Zope is that the development environment is the production environment; you don't have any way to try something, because you might mess something up, and so you become fearful, and when you're fearful, you don't learn, and that's a problem. So the sandbox really becomes that place where they can test things, we can train them, and they can stay in that sandbox as long as they want until they're comfortable enough to start working on production and start really editing their site. So that was one of the big things we decided to add to this migration effort. If you go to the next slide, it becomes one of our big outcomes. And so that's where, kind of putting a bow on all of this, we hit on this idea of collaboration and discussion. I mentioned how, just like in a lot of higher ed settings, there's a lot of siloing that goes on, a lot of departments that work and don't talk to one another. So having someone like Six Feet Up, like Annette, like Chrissy, like all the folks at Six Feet Up, who were able to really be that glue that brought us all together, got us all to the table. We started to understand more about what our IT group, ECN, needs.
They started understood they started to understand what the communications group ECO needs, and then six feet up is made sure to translate back and forth. Okay, we have a solution for this. There's a way that that we can make everyone, you know, if fulfills everyone's needs. Now we're talking, you know, now we're actually discussing every single every feature every single sub site that we roll out. There's a discussion that happens and so that that's doing large part to, you know, six feet up ability to to break down those barriers. Go to the next slide, you know, kind of looking at sort of the 32,000 foot view of this entire project I want to kind of touch on some of the milestones we hit. So you're obviously moving from zoop to clone. We had that was a major milestone we're actually in a more modernized infrastructure, we're using the latest version of Python. It resolves a lot of those security concerns that we were having around using the old version of Python. We have a brand compliant front end all of our templates styles everything is right on right and lockstep with the university's brand guidelines. We have a sandbox like a demo environment to alleviate a lot of the anxiety and some of the training concerns we had for our new users. That's been huge for us. And then obviously at the end of the day, you know what you get what comes out of all this are content editors people who are like I said who are not designers and not developers no background. You end up with a trained a team of individuals who can build sites prototype sites rapidly. It gets a lot of that, a lot of that interference a lot of that stuff that would get in the way of them doing their jobs doing what they do best is now no longer concerned. And so when you look at this next slide here in that we now have content editors who feel truly empowered to work on their websites without any fear of messing anything up. They have a visual interface, and they can they can work confidently in a more modernized and secure environment. And that's really what this has been all about. I mean from the from day one this entire migration has been about trying to bring people who are not web designers and developers into this environment in a way that they can they can perform actions, maintain their sites with very little to no oversight from from my team and so now my team can focus on some of the new and exciting initiatives that we're working on. And so when we're talking about what's coming next. Next slide becomes the discussion so really this become this is really an ongoing project we have. I mean we've done so much already, but really this has been the tip of the iceberg. So there are other things we can do utilizing some of the technology and abilities that clone affords us so we have I mentioned it earlier, you know we have over 30,000 pages a lot of those pages are maintained by faculty and their research groups and grad students and so how do we start to really scale the solution so that they can they can now become a part of it and they can start working in mosaic and they can start rapidly prototyping sites of their own. That's where we're going to start, you know we're start looking at our web apps like how can we start building out new and exciting web apps using using the same technology and in clone that solves a lot of those problems that we've been relying on sincerely 2000s so I truly couldn't be more pleased with with how all this is gone. I cannot wait to see what comes with the future holds. 
Obviously working with a net has been a dream come true. And not only that not only her but the six feet up team. So I for anyone who's about to start a migration. There's a lot of options out there. If you I highly encourage anyone to look at blown. Obviously this is the phone conference but it's been, it's been just incredible just to see what this thing is capable of and to see where it's going. And it's backed by such a wonderful community of people who know, you know, what's, what's coming and where we're going to go so. Yeah, I, at this point I guess we're, we're open to any questions. Once again, like I'm, I was very excited to get on this project. It's always fun collaborating with groups I love it when we can collaborate and actually come up with solutions and I think few things make me happier even than making something really cool than seeing a user who is empowered and owns their content and actually is excited to do that so it's been really fun working with you guys I, I'm excited to see what else like I know there's more surprises somewhere in your site but coming up with the solutions and just trying to come up with things that work the best for the users who are really going to be using it. That is a dream goal for me. Yeah, thank you for attending our talk once again I'm in that you can find me and that at six feet up and my covert center will thank you so much. Yes, we'll head over to the jitsie take it away Kim. Thank you so much will and Annette for giving this talk it just looking, listening to you from the perspective of somebody's been using plone for a long time and who cares deeply about the project in the community it's, it's like a, it's like a heartfelt love song so thank you. I know it's for Lamborghini but I mean I'll take it. We do have a question for you from Philip Bauer is the sandbox open for all editors and is the content synced from production regularly. And then a second part is the sandbox. On top of a test and staging environment. Yeah, so the sandbox. We really we utilize it as sort of kind of that first step when we have a unit that is interested in starting to move their site over to clone. We kind of use the sandbox as a way to bring them in so you know anyone who who we know is going to be working in plone, we always kind of make that sandbox we give them access to the sandbox first. We let them play around to do as you might do in a sandbox, and then we sort of transition that into a training like a guided training. And that's kind of how it all begins we don't like not necessarily anybody has access to the sandbox at any given time it's kind of like the sort of like the holding place for people who are kind of the, you know, the the clone padawans if you will they're kind of starting out their, their their clone experience and eventually they get moved over to the workshop environment once they're done in the sandbox. And so it's always like resetting the sandbox this is one thing I forgot to mention. Yeah, it's super simple for us to reset so you know we can actually we know with like three really easy steps we can, we can wipe anything that anyone's done on the sandbox, and revert it back to the way our workshop or production environment looks so if they want to get a really good simulation of what it's like to work, you know, in those environments, we can just, you can just wipe it clean and duplicate everything that's that's on that production environment they can manipulate it they can try things out. 
So hopefully that answers the question. Let me know if there's anything else I can elaborate on with the sandbox. Thank you, Will. I believe Philip will probably be joining. While everybody, please join Will and Annette in the Jitsi. I posted the link in the Slack, in the conference Track 2, sorry, the conference Track 2 Slack channel, but it's the blue button underneath the video frame that you're looking at us in, in LoudSwarm, you know the drill. So, thank you again Will and Annette, and let's head over to the Jitsi for more Q&A and feedback suggestions. Thank you. Thank you.
Migrating a site is always a challenging task, but when you have dozens of subsites with specific brand standards and custom user functionality, the challenge becomes mammoth. Six Feet Up worked hand-in-hand with Purdue’s College of Engineering to migrate their existing Zope site and its subsites into a new Plone instance running on Plone 5.2 / Python 3. Throughout the migration process, we considered the project scale, timelines, and limiting the impact on end users — all while managing the balance between user needs and best practices. During this presentation, you will learn: - why it matters to think user-first during migration, - about creative solutions for translating content and functionality into Plone, and - how to successfully migrate subsites.
10.5446/57249 (DOI)
Yeah. One, two, three. Yeah. Okay. Perfect. Yes. Go ahead. We can hear, we can hear you and see you. Okay. Would you like to share those again? It's just. Use the presentation right now. Yes, we're ready for you to start to give the presentation. If you'd like to share that with us again. Get that back up on stage for you. Thank you so much. So. Here you go. Okay. Please go ahead. So thank you so much. Thank you. My presentation is entitled toward the establishment of a new open source spatial remote sensing virtual research environments for e by diversity ecosystem services and climate change modeling adaptation. Well, it's a little bit long, but you will understand why. So I am talking here not only on my behalf, but also on behalf of the community. And I am talking about the structure of biodiversity and ecosystem research. In fact, we want connections there are a lot of connections with lighting as you know, well with the work with natural is a university master as well with Buenos Aires, many people worldwide. So I will explain you later. And this presentation focusing on the spatial observation will be very interesting from the point of view of how to show open source developments. Many, many developments. Many, many developments that code design code deployment majors. So I'm going to try to move the area. What all of us have clear ideas about that we're living in certain ecosystems, core sharing core system, but species, the factors of the scale of the Earth. And that there are five big, there is a few big five hot ecological topics we are talking about biodiversity, climate change, our environmental laws, invasive and spaces component but also taking into account that all the nature all the system or the system can be considered as a complex system to model taking into account what is either a sphere, lithosphere, atmosphere and the high nonlinear processes happening over there. And this is important to address this complexity because we need to in turn address the environmental challenges and support knowledge basis strategic solutions to environmental preservation in particular the biodiversity laws that we are experiencing in a context of more than evident climate change. So, I will go step by step. I'm going to explain to you introduce what is about the ecosystem services cascade from biodiversity to human well being. What is true, what is very clear is that we need to assess and monitor the ecosystem functions, the ecological processes and the biodiversity of humans well being depends on. We go to biodiversity and ecological processes and we need to understand how the ecosystem functions are working in order to further provide sustainable ecosystem services to guarantee the benefits for humans and preservation of the entire Earth system. You may be now better than me that there is a big sister initiative called I guess, the intruvenimental science policy platform on biodiversity and ecosystem services and there are a new model paradigm of the economics of biodiversity and ecosystem. The so-called TV since 2010. Because we need to understand based on this new paradigm, the environment limits ecosystem resilience and relationships among biodiversity and ecosystem services. We need to understand that to provide, to make a feedback, a proper feedback to preserve and to live in harmony with nature. Based on observations, based on observations of what? Well, as you know well, sustainable development goals are not per se. They are there in order to guarantee the human well being. 
And the human well being is related to the preservation of nature and the consistent with nature. And that consistent with nature depends on the way we understand the ecosystem functions and the services that are working there. And now how we can feedback with the provision of proper sustainable ecosystem services. As you can see here, we have identified our matrix. You know, the one hand and one axis, you can see the identification of certain sustainable development goals. And on the other, the assessment of 12 key ecosystem services by a survival best period identifying a minimum of 231 extra dependencies of sustainable development goals targets on a multiple ecosystem services. Well, see the legend on the right, please. Level support for a contribution. Many of them are not certainly well assessed yet. So we need to be based on fair data, findable, accessible, interoperable, and reasonable, which are the basis of scientific evidence to understand how to guarantee in a proper way this tacit knowledge derived from the fact of how can we provide these sustainable development goals. Why? And these ecosystem based approaches. Must be based through an urgent and concerted force fostering transformative change and transformative change in which policy making institutional processes and regulatory framework. At the same time that society, capacity, and knowledge, economic systems, ecosystem and natural resource management are working all over together. All together. And this is like what Eric about the watch Eric is about a European is science distributed infrastructure on biodiversity and ecosystem research in order to create this regulatory framework, this framework. In order to understand what is going on, what's the impact of global climate change on earth by diversity and ecosystem research as a structure in tour the European research area by means of providing the state of art of ICT, the data, the learning, intelligence, computing, blockchain, money in an organized manner. What do I mean? This that based on the global biodiversity informatics outlooks by my colleague and dear friend, how many Thales in 2012, like what Eric is on the top layer on the understanding, based on the culture, data and scientific evidence on understanding from the point of view of how can we we can build model representations of biodiversity patterns and properties based on any possible evidence and in turn, based on the following components, which is case based modeling trends and predictions, modeling biological system association dissemination, privacy, new data capture on all of this. We can put together all of them into let's say software software systems. So called the select is very funny why we decided to call the select the evolution of a cube, a meta cube is the select. As a new model, new paradigm or with the research environments and life block chain system, our system, not only address to citation, the hesitation provenant anti tampering, but also tokenization, socio economics tokenization of ecosystem services. Now I'm going to the solutions step by step. Why because of this because of that since 17 on March of 2017 we are considered as a European is science European research infrastructure for the city research composed for certain number of member states from European Union, based on the council regulation. 
At the moment, we are Belgium, Bulgaria, Greece, Spain, Italy, the Netherlands, Portugal, Slovenia, and almost almost to be full member Cyprus, Republic of Ireland, Israel, Slovakia, Ukraine, and Romania. We have our headquarters and the main ICT coordination office here in Spain and the Lucía region, Sevilla, Malaga, today we are here in Malaga. The service center is in the region in Puglia, in Italy, the southern Italy, and the virtual labs and innovation center is settled in Amsterdam, the Netherlands without forgetting the regional components of the European Union as you know as well. And we are not walking alone. Of course, we are working with our sisters and brothers from the global biodiversity information facility, Sevilla, the research that aliens, Copernicus, the European Union, the European Open Science Club, the famous AOS group maybe heard about another, there are a lot of things I couldn't stop all night telling you all night all day that in Buenos Aires, all the people. At the same time, now we have been specifically requested by the European commission about the Green Deal chapter and one of the Green Deal chapters in turn is the biodiversity chapter, biodiversity because biodiversity is everywhere. It's about the ecosystems providing food, fresh water, clean air, shelter and mitigating natural disasters best to assist and help regulate the climate. That's clear. At the same time, we are experiencing Europe and throughout the world a new digitization process. This is the case in the industrial 4.0, there was a 5.0 industrialization in Europe where digital innovation hubs, regulations and skills plus from our coming all together, take into account that the quadruple elix of innovation where academy, private companies including small and medium companies, civil society organizations, must come all together in a transdisciplinary way to understand what is going on. It's not about pure research. Research is data cannot be in a box isolated there. Decision makers must take affordable, reliable decision making processes for professional professional services of environment based on data. Entrepreneurs, ICT environmental companies must take decisions on that and citizen scientists must be aware of the importance of preservation environment by including an intergenerational approach for the elderly people, the children. That's clear. So capacity building, capacity building is essential. We need to create an analog catalog, training education and increase volume, mapping strategy research area, competence and skills. And always thinking that the psychology of the people are different because they got different approaches to the problem of biodiversity and ecosystem preservation. It's not the same approach of a researcher, a public administration decision maker and an entrepreneur or citizen scientist. That's more clear. We must satisfy the user's requirement worldwide on the fact of that indigenous knowledge from one side or the globalization are affecting different ways from the knowledge, organizational knowledge way, the way we tackle with the problem of preservation by diversity and ecosystem services. That's the reason that we are having identified as one of the first priority research infrastructure from the EULAQ, the European Union, Latin America and Caribbean working group. These are some nice pictures in Doniana, here in Andalusia, there in Costa Rica. 
And at the same time we are leaders, let's say in the subgroup from Green Delizus to foster the creation of a partnership from Green Transition and Energy Act between the European Union and the African Union. In fact, the seven agenda conclusions and the forthcoming COP 27 illustrate the biodiversity as a vital and critical component of the European Union public policy agenda in Europe and globally in Europe for the future. What does it mean? Together we are strong. We cannot face global challenges that can't be changed alone. And here we go. This is a conceptual paper signed by myself, and I am from the United States of Korea, Walter Adink, Tim Hitch, Christopher Ben Thittis, Tony Jose, Saint Alvarez, Peter Schork. They are wiser than me in the field of e-biodiversity, facing e-biodiversity challenges together as a real framework based on how JVF, Disco, Ida Dio, Setup, TdwG, and like what's Eric, we have agreed to deal with the challenge of preservation of biodiversity based on the best state of art of ICT. And step by step, my friends, I will go to the observations. A little bit of patience. Let's go. Again, the select and the block. Why? Because you can, this is key on the right, on the left. I don't know if you can see with enough resolution. You go there, citizens, government, the pre-nursing researchers, that's right. On the other hand, you got essential biodiversity variables, environmental ecology, environmental economic domains, and then you got different ICT here components. And you need to find a way, a user-friendly way to provide to the users on the top. Researchers, decision makers, entrepreneurs, citizens, what do they need? One of those components is related to remote sensing, is related to observations. And how through those tools, we can compose the proper workflow services through the semantic composition mechanisms, taking into account that on one side we got a blockchain system to have the flexibility of the resources that we are engaging in a distributed manner, databases, software, publication, media, on a certain topic. And at the same time, we need to guarantee the connection to the bit-file of the cloud computing, to the European Open Science Cloud, or either the European Open Science Cloud, or either the EuroHPC, the supercomputing, when they need to launch any process for modeling any supercomputing requirement. So, resources layer, infrastructure layer, composition layer, application layer, and finally, user layer, step by step. This is how it's set up. But let's go a little bit step forward. We need to deal, tackle with the problems, the perspective of the scale and the heterogeneity challenges. What does it mean? We are talking the scale. Micro, meso, macro scale, biodiversity is not the same to measure with an nano or microsensor, something that is a little bit, as you know, well, the 96% of the biodiversity in the world is microbial, bacteria, virus, fungi, etc., many fungi. The mesoscale is the world we are living in, the macro we are approaching to the geo scale, to the observations. And then the heterogeneity measure of data, metadata resources involved. It's not the same, analyzing with remote sensing we are doing every day, and how to connect to create synergies with environmental on field samples, or annotations in a notebook on the field. 
Damage change monitoring, environmental observation networks, well-working data, grid data, and agroecology, this Latin American Caribbean and Africa context by promoting those eight tools based on two paradigms, green and blue initiatives. And this is a very conceptual scheme based on a Mediterranean forest on how we can do this from the perspective of a nano sensor, micro sensor, till the mesoscale, the coordination of the edge computing, based on edge computing techniques, and the combination finally with the remote sensing. And the remote sensing, why? Because for the perspective of the Earth observation, we are talking about oceans, land, air. We're talking about how to measure in oceans, chlorophyll, salinity, surface temperature, how in the land we can measure net primary productivity, the MPP, from the air perspective, air source, rain forest, no, night lights, light pollution, more important than the people think, believe me, my friends. And how from the under the umbrella of United Nations, we can foster the collaboration through the Space for Sustainable Development Goals objective. Because as the former Secretary-General and the current Antonio Gutierrez Secretary-General of the United Nations said, we must challenge climate change skeptics who deny the facts. We need to guarantee that safe operation in space for humanity, planetiwonderies. And just saying remote observations are essential. The people today here, you are essential to understand in a global way how it's affecting every part locally. Why? Because the remote sensing and net observations from the geopolitical scope allows to compare different geopolitical scenarios in different parts of the world. Facilitates the development of Europe to collaborate with other countries. That's very important. Taking consideration the successful developments of GeoVon and YuVon initiatives, the group of Earth observations by the rest of the Sub-Energetic Network, we are partnered with, of the United Nations Outer Office Space Agency, UNOS initiative, Copernicus, NASA, and last but not least, the one which is gathering all of them, the Space for Sustainable Development Goals objectives. This is an example of remote sensing across borders on this observation and now open source scaling, explained by this. This is the south of Europe, the north of Africa, we're talking about Spain, territory, Morocco, territory. We are now facing the same problem of an invasive species with a one-dice, pandemic-dice, which is called Phytotoracinamomium. Phytotoracinamomium is the acor-tree drawback. It's destroying all of our ecosystems. In addition to Sierra Fasteosa with the olive trees, it's a disaster. We're sharing it with different geopolitical, but similar ecological forces, these projects. This is the big issue, the big thing of using open source, big to recess standards, from which we can provide the proper remote sensing tools. Let's go a little bit further. Taking into account the importance of the metadata catalogs, how we guarantee for the ecological point of view, and we're talking about ethnologies, hydrologies, etaphologies, different chapters of biodiversity and preservation, the citation, the metadata catalogs, the training catalog for the people, the capacity building to the people who are supposed to use this. 
I remind you, Kali reminds you, researchers, decision makers, the citizen scientists and the partners, the semantic approach of what we are referring to, what about the person who identifies to identify the data, and to have a clear perspective of the data we're talking about, in a unibook way without any depth of what we're considering. That's why we're a key partners of European Open Science and both, and also recessed that alliance. And then when we want everything and we're composing the services, how we can use artificial intelligence and data tools. For instance, with FAO, we're collaborating in all what is related to forest, with a accuracy of almost 70% for remote sensing studies. Our colleagues from Brazil are from Granada here in Andalusia. How we can use this remote sensing, this open source tools, to provide faster identification of cetaceans, whales, these special detections. And last but not least, for instance, one of the key issues we're dealing with is an internal joint initiative of non-indigenous and alien species. We're talking about the detection of artificial reformatory systems, and Lantus Altissima, that is a bad wits, we used to say, but wits affecting northern ecosystems, and a lactic ecosystem in Europe. On the levels of invasive crustaceans, we are talking about the blue crustaceans, the blue crab. The macroalchoatecifolia and the cilandreocumurae, which is an exotic algae coming from the asianic areas, affecting South America and Europe, and changing drastically and not for good, and basically the ecosystem there, the biotopes in general terms, and the Mediterranean forest, in the case of Phytotracinamomy, as I could tell you before. These are results, open results, free open developments that you can access through our web page, web portal, of course, there's a mechanism of accession, you send us an email, we'll be grant you, you will have this access to these services, and we'll be able to access them. So, we're, remote sensing is essential, for instance, here, this is an example, some snap captures of the screens, which we are talking about crustacean problems in the Mediterranean, in basin, or how they be in basin in the cities of Sarbatores, citizens signs, survival, DNA, and remote sensing primary observations, we are identifying previously being basin, and it's very valuable, my friends, the EVBs, in particular in collaboration with our colleagues from the University of Asterham, and the University from Yuba, and our colleagues from Leiden, and the reports units in countries and eco-regions, in order to define the future possible scenarios as a way to help biodiversity change indicators. So, we are forecasting what is going on to happen in that ecosystem services, and of course, remote sensing and output understanding of remote sensing, and Earth observation access, and essential to get these objectives. And we are using blockchain systems in all of this, in order to guarantee traceability, anti-tempering, and provide the proper socioeconomic tokenization of those ecosystem services. 
This is a video, I will not play the video because it took two minutes, how the blockchain is working, but I think you know better than me how it's working, but for instance, one of the samples we are using is about the socioeconomic valorization, jointly with the EVBs, or the Sodor Asbora, as we try to open it as well, Sodor Asbora Parva, as this is a basic, very specific, came in from Asia, you can see here the map throughout the European continent, and how we can, and this is, we are working right now for information. So, I will start this one. Well, this is making a strange thing as a comment, because I have a bad video within the PowerPoint, and I hope it's not crashing. I don't think we're going to have time for a video, I'm afraid, when we get this, but we're getting close to the end of the session, and I want to make sure we've got a little bit of time. Yeah, I'm finished, yeah, I'm in trouble. I'm going to be scared because, ah, video, Microsoft, don't tell a lot of, ah, so I'm going to, I'm going to wait, so I will open again, no problem. Microsoft is not responding as usual, wow. I'm an open source defender, and I'm using today, because this is our office desktop laptop for the traveling, you know, PowerPoint, but I'm finished, now I'm going to again open, no problem, I'm almost finishing, because I will go to the PowerPoint, and I wanted to have an enthusiastic presentation in order to get the people, users, yes, you to collaborate with us, because... Yes, you've shared a lot, you've, you've referenced some really big global collaborations, in European collaborations. We believe, we believe in collaboration, that's the only hope for our, to save our planet. But I, we are, this knowledge, open knowledge, is the only way forward. So if we zoom in on the open source aspects of what you're trying to establish, open source for geospatial theory, what's the best way for people to engage with you? I will tell you, because I'm one of the founders of the open source movement here in Andalusia, in the so-called Linux, the Guadalinix, maybe you have heard about that, we have given service to three different million children, people worldwide, during the last years, facing open standards. Well, I will go to the United Nations, you know, there were several, after the video, by diversity and ecosystem, perhaps one who we're collaborating, you know, the UNOSA, United Nations, linking space and life community in Latin America and Caribbean Africa, but in particular, because it's very important to take it on the multi-component approach to natural conservation and management in times of rapid change, where scientists, natural managers, and policymakers must be done. I invite you to go to a co-potential project that you, one of the Light Watch Eric related projects, which results, we are integrating, open source, my friend research, they are available there, but we're integrating research, our winter research environments, the co-potential project, but by the way, I will share for free openly my presentation, of course. Well, no, of course, and I would like to contact you from now on, all my team, your disposal, that's our work for pressure. All the relationships between land-surface temperature, density, and normalized difference vegetation index, with some spectacular, they'll say, NDVRI results for spatial and seasonal patterns in vegetation, they are limiting in Europe. This can be extrapolated or transferred worldwide as well. This is our remote sense, with open standards. 
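As one very small illustration of the kind of open-source building block behind such NDVI analyses (a generic sketch of mine, not LifeWatch ERIC code), the index is just an arithmetic combination of the red and near-infrared bands:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

# toy example with two 2x2 'bands'; real inputs would be satellite rasters
nir = np.array([[0.6, 0.7], [0.5, 0.8]])
red = np.array([[0.2, 0.1], [0.3, 0.2]])
print(ndvi(nir, red))   # values near +1 indicate dense, healthy vegetation
```

In practice the bands would come from Sentinel-2 or similar Copernicus products read with open geospatial libraries; the computation itself stays this simple.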
We are working with a critical zone, is there the skin living where rock meets life, because it's very important to understand that this open development are dealing with the place where physics, chemistry, hydrology, eco-hydrology, geology, and biology closely interact. And this must be open source as well. We're applying the deep learning and sending NDV to protect habitats under the weather framework direct to European. Here is one of our publications. We're talking about how we use due to very high resolution images, plus convolutional neural networks to establish precise map maps in Africa with a high cost. Thank you, Amiga. We have a question for you from one of our audience members in Argentina about how they can get in touch with you to collaborate. Of course, these are our coordinates in touch. Why? Because Argentina is very important from the European Union that can even cooperate. It's very important. We're now collaborating with what is called the Alianza del Pastital, Rio de la Plata, in collaboration also with Ruguay, Bolivia Paraguay, all these countries Paraguay and Chile. And we are eager for cooperation. In fact, I would like to invite you to the United Nations General Assembly, 76th edition, General Assembly edition, dealing with sustainable development goals, 14 and 15, that is life underwater, life on earth, that it's going to take place the day after tomorrow, October 1st at 1.00 or 7.00 PM. And I would like to start you a key message. We have right now the best generation ever with the best skills on ICT, on open source developments, we are training there. We must foster cooperation. Please, be free of contact us to start new projects. It's a new MDC, Neighborhood Development Cooperation instrument to be granted, very well granted by the European Union. And now we have an excellent opportunity to foster collaboration on open source development to preserve biodiversity and ecosystem research. I like what Sherry has, Johnny Disco, JV, from all the research infrastructure, sister research infrastructure, we got the mission to do that. And we need you. We need you. On that call for we need you. Sadly, we need to wrap up our session, but for me, can people contact you via the website? Is that the best way for people to reach you at the Erech, live watch Erech website? Yes. And we need to connect with all of these amazing initiatives. We are not optimistic. We are enthusiastic. There is a difference. Thank you. We believe in people, we believe in data, we believe in interoperability between the people, the data, the researches and the psychology, emotional psychology of the people. Because after this lockdown, what we have learned is that the enthusiasm and the reasons to live in this direction is the best ever we've got. I think you've got people in the audience going here, here, yes, yes. Because I am the Lucian from the south. I must go down to another meeting. But this is taking into account that this is not the blah, blah, blah to have a good justification in this meeting. No. The good thing is to take into account, please take my coordinates. Let's work together. In research collaboration is essential. So they will have a collaboration. Policy makers in Brazil policy makers in the United Nations, blah, blah, blah. And now we go to the open source community with the best ever developments we've got to that. Of course, I can employ only half an hour to explain about all of the things we do. Our Github resources we collaborate based on creative commons license. 
Of course, that's not. I think if we if we had given you a full day session, you still would not have managed to cover all of the things that you could share with us. I'm very pleased that you could make it to the end of our session. Sadly, we need to wrap up. But I hope that you're around for more of the conference and people can continue to connect to connect with you and look for ways to collaborate together. Of course, I'd like to thank once again, all of the speakers that joined us for this session. It's been a really fantastically interesting session. I hope that all of you watching from home or off the spaces have enjoyed it as much as we have here backstage. I'll ask the speakers once again to wave a goodbye to you all. Thank you so much and have a great rest of your conference. Thank you very much. Thank you very much.
Towards the establishment of a new opensource geospatial remote sensing VRE for e-Biodiversity Ecosystem Services and Climate Change modelling and adaptation LifeWatch ERIC e-Science panEuropean Infrastructure for Biodiversity and Ecosystem Research "lifewatch" is mainly aimed to facilitate the access to their distributed data, information and knowledge resources and services, also providing modelling capabilities for understanding the complexity of associated Climate Change processes for research and adaptation purposes, as well as addressed to decision makers and citizen scientists. One of our VREs focuses on the historical time-series study and climate change projections at high resolution, which will be generated by dynamical downscaling of the General Circulation Models (GCMs). To this purpose, the regional climate model Weather Research and Forecasting (WRF) will be used to simulate high-mountains areas climate scenarios, and thus, solving the limitations of the vast spatial resolution of GCMs. The observational database will validate present WRF models-based simulations. This will create high-res regionalized projections in high mountains areas using the state-of-art of open-source geospatial tools.
10.5446/50078 (DOI)
I decided to give my topic in four lectures. So we'll have three breaks. Each lecture is about 35 to 40 minutes long. And I can also remove some subjects if I notice that we all get too tired and want to go to the calanque beach. So this is about time parallel time integration methods. And the first part is shooting type methods, multiple shooting type methods. So this will occupy us for the first 35, 40 minutes. But before I start with this topic, I want to explain what this is, this time parallel time integration. So it's time-dependent problems. Here is an example. There is a heat equation. You see the derivative of u with respect to t equals a Laplacian of u plus some source function. An initial condition is given, and we're supposed to solve this problem over a certain time interval. Now, in general, to solve such a problem on a computer, you discretize the Laplacian, like we've seen in the key talk, with a finite difference method, for example, in space. Then in time you do a discretization like backward Euler, an implicit discretization. And then you do time-stepping. So you start with your initial condition here, given at time t0 in all of space. And then you do a time step forward to get an approximate solution at the next time step, t1. And once you know this approximation, you do a time step to get to the next time point t2, and you advance like this. So you see this process is completely sequential. You cannot calculate the solution here before you know what the solution here is, because it depends on this solution. There is a causality in this equation. This is a completely sequential process. This becomes even more clear if I take a simpler equation. Here is an ODE: du/dt equals f of u, a scalar ODE, just one unknown, initial condition given. Now, I only have to discretize the time direction. Suppose I discretize by forward Euler. So I get: the new value is the old value plus delta t times f evaluated at the old value. So if I know u0, I can put it here and here. I get u1. And this u1 I can put here and here, and I get u2, and so on. This is completely sequential. There is nothing that you can parallelize in this process if you look at it like this. If the problem is linear, now I have a linear ODE, u prime equals a u plus f. I do the same Euler discretization. Then I can write this as a system with all the unknowns at each time point, and I get a lower bi-diagonal matrix. So from the first equation, I can determine u1, where the initial condition is here in the right-hand side. From the second equation, once I know u1, I can determine u2, and so on. This is completely sequential. It's a triangular matrix. So time parallel time integration methods are methods which allow you to solve this in parallel, using many processors. And it doesn't seem possible if you look at the process, because it's completely sequential. How can you possibly solve u100 if you don't know u99? Now this has a long history, this subject area. It's about 50 years. And here is an overview of what I found. To make this presentation, I decided to search in the literature the papers that had a key new idea. And I will explain this key new idea for all the papers I found. This took about two years, and I didn't only do this for this presentation. I worked on this over the last about two and a half, three years. And what you see in this graph is time. There is a time axis. It's cut off here. There should be 2010, 2001, 1990. So you see time is advancing from 1960 to 2010 here. 
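To make the sequential nature of this time-stepping concrete before going on with the overview graph, here is a minimal forward Euler sketch in Python; the right-hand side and the number of steps are my own example choices, not from the lecture.

```python
import numpy as np

def forward_euler(f, u0, t0, T, N):
    """Sequential forward Euler: each step needs the previous one."""
    dt = (T - t0) / N
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(N):                  # inherently sequential loop
        u[n + 1] = u[n] + dt * f(u[n])
    return u

# example: u' = -2u, u(0) = 1, on [0, 1]
u = forward_euler(lambda u: -2.0 * u, 1.0, 0.0, 1.0, 100)
print(u[-1])   # approximately exp(-2)
```

Each pass through the loop needs the value produced by the previous pass, which is exactly the sequential dependence described above.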
And there is another axis here. On top, in the middle, it says large scale. And on the sides, it says small scale. This means the methods that are described in these papers, if they're closer to the middle, they allow you to use many processors to do this in parallel. And the methods towards the boundary, they only allow you to use a few multi-core architectures to do it in parallel. And then there are four colors. And this, in two tracks, one track is here, the other track is here. This is direct methods. These don't iterate. They solve this problem over a long time interval without iteration. These are the black methods here. And then in this track here, there are iterative methods, three more colors. So you have four colors on this transparency. And these four colors correspond to the four lectures I will give. So there's going to be a lecture about the black methods, a lecture about the red methods, a lecture about the magenta methods, and a lecture about the blue methods here. Now I said there are four colors. It's not quite true. There is a fifth color. Did you identify the fifth color? The fifth color is review papers on this subject. There was a review paper by Gear. There is a book by Burrage. This is a book I read when I was a grad student. It was a fairly recent book at that time. I didn't like it. Somehow there were no theorems. There were explanations but no theorems. But it's a good overview of what was available then. And here's the review I wrote. And this paper, you can download it from my web page. In it you find most of what I explain today. So you can go and read it. And it's explained like I explain it in the talk. You should remember what I said when you read this review paper. So I start with the multiple shooting type methods. These are the magenta methods on this graph. And the first idea goes back to a paper by Nievergelt in 1964, which was not really an iterative method. It was a direct method. But the idea led to iterative methods here. And it was this paper by Lions, Maday and Turinici which sparked a lot of activity in this field again. Before, people had worked on it. But a lot of activity was sparked by this method here. So I will explain now the key ideas of these five magenta papers in the next half hour. So the first was this Nievergelt paper, 1964. Nievergelt was to become a professor of mathematics in Zurich at ETH during the time I was a student, but I never had a course of his. I don't know why it didn't happen. It just didn't happen. He was a postdoc in the US at that time. And he published a paper where he says: for the last 20 years, one has tried to speed up numerical computations mainly by providing ever faster computers. Today, as it appears that one is getting closer to the maximal speed of electronic components, emphasis is put on allowing operations to be performed in parallel. In the near future, much of numerical analysis will have to be recast in a more parallel form. That was in 1964. You have seen in transparencies during the CEMRACS school: this point has arrived. But 40 years later. So in this graph, you see time here that goes from 1985 to 2010. And here you see a log scale of powers of 10. And then you see the number of transistors that can be put on a chip, which is going up, is going up, is going up. There seems to be no bound. You can put more and more. 
But then all the other curves have a kink that comes in 2004, 40 years after the vision of Nievergelt. And you see, for example, the curve that says how many cores you put on a chip. It used to be one core. And suddenly, you have multi-core, multi-core, multi-core, it's going up. Whereas the other curves flatten out. For example, the curve of the clock speed suddenly flattens out. It cannot go faster. It's a physical boundary. It just cannot go faster. So the only way to go faster now, as we heard this morning, is to go parallel. And so he, 50 years ago, thought about how we could do a parallel algorithm for a problem that somehow is not parallelizable. So that was the first idea. So here is this example. He takes an ODE, y prime equals f of y, with an initial condition. And he says, as an example, a method is proposed for parallelizing the numerical integration of an ordinary differential equation, which, processed by all standard methods, is entirely serial. He assumes he has an infinite number of processors, as many as you want. Processors to waste. Is it possible to get this solution faster than just time stepping through? And here is his algorithm. The idea is as follows. It's an ODE, so we have one quantity to calculate at each point. So the trajectory starts here, and he needs to know the solution along all these points. What he does first, he does a coarse approximation. He starts at this point, and he produces a coarse approximation, which is not going to be very close to the true solution. But this gives him an area where the solution should go through, approximately. Then he forms a cloud of points around this area, just some distribution of points of starting values. And he takes for each of those points a processor to calculate a very accurate trajectory. So here you see he is really burning processing power. He doesn't need any of these trajectories, really, because he doesn't know yet where it's going to go through exactly. But he calculates all these accurate trajectories, and then he does interpolation. Once he knows where this trajectory arrives, he interpolates among these points at the point where the trajectory arrives. And he also interpolates the accurate solutions to know where the point should go through here, and he continues like this. So all this interpolation is much cheaper than the accurate trajectory that he has to calculate, and so he can get the solution faster than if he had calculated sequentially an expensive trajectory. It's an amazing idea. There is a round-off error analysis. There is also a truncation error analysis. It's not a very useful algorithm like this, but it's a groundbreaking idea. If you have a lot of processing power, just waste it. Now, if you're in higher dimensions, there is not much chance for such an algorithm, because you would have to have a cloud of solutions around the solution of a heat equation. This is not feasible. But the one-dimensional idea is very clever, very clever. Then a lot later, there was a new idea by Bellen and Zennaro in 1989, and they have in their abstract the following saying: in addition to the two types of parallelism mentioned above, we wish to isolate a third which, analogous to what Gear has more recently called (Gear wrote this review paper I mentioned), is parallelism across the time. You see, that's exactly what we try to do. We try to parallelize the computation along the time axis. 
Here it is more appropriately called parallelism across the steps. In fact, the algorithm we propose is a realization of this kind of parallelism. Without discussing it in detail here, we want to point out that the idea is indeed that of multiple shooting, and parallelism is introduced at the cost of redundancy of computation. We've also heard redundancy this morning. It might be worthwhile to do more calculations than necessary to save communication, for example. So, here is the term multiple shooting, and this is the key ingredient of these methods. In Nievergelt's paper, multiple shooting was not developed yet. He did not have a means to use multiple shooting. It did not exist yet. So, here is the idea of Bellen and Zennaro. They don't use an ODE. They use a recurrence relation, which you get from a discretized ODE. So you just have a law that gives you, when you have a Y0, Y1 from Y0, and then Y2 from Y1, and so on. And this mapping is called F. And you have to go sequentially through to find this trajectory, this great trajectory. But you can also collect all these values in a vector Y. So now the vector Y contains Y0, Y1, and so on. And then you write this recurrence relation simultaneously for all values. This you can do. So here now the vector Y, the solution you want, is a function of Y, which you could evaluate sequentially, but you just write it for all values simultaneously. So this function phi, capital phi here, in the first component, it's just Y0, because Y0 must equal Y0. But then in the second component, F1 of Y0, because you must get Y1, and then F2 of Y1, because you must get Y2, and so on. So this is a non-linear system of equations that you can try to solve for Y. You see Y equals phi of Y; you can solve this system for Y. For example, using Newton's method. Or what they do in this publication, they use Steffensen's method. Steffensen's method is an iterative method for this non-linear system. Does actually anybody know Steffensen's method? I didn't know it either. It was the first time this method occurred to me; I will explain it on the next transparency. You get an iteration like this: the new approximation is phi of the old approximation plus a Jacobian, like in Newton's method, or an approximation thereof, times the difference of the two approximations. And they start, because you need a vector to start, they just start with the constant initial Y0, and then they iterate this thing. Now, you see this iteration calculates at each step the whole trajectory at once, an approximation of the whole trajectory. So what's this Steffensen's method? It's a really clever method, I didn't know it. It's to solve a non-linear equation, so f of x is zero, and it's almost like Newton's method. So you see you get a new xk plus one from the old xk minus a Jacobian inverse times f, but it's not the Jacobian, it's a finite difference approximation, and it's a really funny one, you see? Instead of using an h step to do a finite difference and choosing a small h, you use f of x instead of h. So this f, if your method converges, becomes smaller, and you can prove this also converges quadratically, like Newton's method. I didn't know that; I think that's a really clever, clever method. Instead of having an h and then dividing by h, you take the f, which is becoming smaller as you iterate, and you divide by f. And they prove the following results. They prove that at each iteration of this method, you get one more value of your trajectory exactly. You just get it. 
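As a small aside, here is the scalar Steffensen iteration just described, in Python; the test equation is my own example, not from the lecture.

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's iteration for a scalar equation f(x) = 0.

    The derivative in Newton's method is replaced by the finite
    difference (f(x + f(x)) - f(x)) / f(x): the 'step' h is f(x) itself,
    which shrinks as the iteration converges.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        slope = (f(x + fx) - fx) / fx      # derivative-free slope estimate
        x = x - fx / slope                 # Newton-like update
    return x

# example: solve x = cos(x), i.e. f(x) = x - cos(x) = 0
print(steffensen(lambda x: x - math.cos(x), 1.0))   # about 0.739085
```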
So there is some sequential integration that happens in this method. And this means, if you've iterated as many times as you have time points, you will have the solution. This method can never fail. It will always get the solution. But it's going to be too late, because it gets the solution the latest when you've sequentially gone through. So there's no gain as well, but at least it cannot fail. They prove, because it's a Newton type method, that convergence is quadratic. So if you're close to the solution, this whole trajectory is going to converge, at the early time points and at the late time points. It's just going to converge onto the solution. Sorry, Mark. Yes? Can I ask you a very stupid question? Yes. So your f is a vector. In my case here, I took a scalar. There is a vector-valued example as well, but it would have been a bit harder to explain. So I just chose a scalar. But there is a vector variant, with a Jacobian matrix, of Steffensen's method as well. So I just chose a scalar to make this very transparent and to remember what the method is. So each of these corrections you can calculate in parallel, because you remember the evaluation of f in this right hand side. You just evaluate. There's no connection in between. And they measure speed-ups of 29 to 53 for about 400 steps, which means you could use like 400 processors. So that's not a brilliant speed-up. If you use 400 processors, you would hope to get 400 times faster, because you put in 400 times more resources. But you will see throughout all these lectures, time parallelization is more something along these lines. There are a few methods that do much better than this. So you waste resources. Yes, there was a question. This is written for the nonlinear case. So it's a full Newton, I mean, this Steffensen's method, which is used. So here we have a systematic method that sort of seems to be able to do this. And then there was a very fundamental contribution by Philippe Chartier and Bernard Philippe in 1993, a parallel shooting technique for solving dissipative ODEs. Here we have now the name in the title. It's a parallel shooting method. And they have in their abstract: In this paper, we study different modifications of a class of parallel algorithms initially designed by Bellen and Zennaro (so you see clearly these are the two people I discussed before) for difference equations, and called 'across the steps' method by their authors, for the purpose of solving initial value problems in ordinary differential equations on a massively parallel computer. It is indeed generally admitted that the integration of a system of ordinary differential equations is a step-by-step process and is inherently sequential. So again, you see somehow there is no parallelism in this type of problem. The funny thing is this article appeared in a journal where they have a German abstract. It's a former German journal which still had German abstracts. So what's their method? Here we have now an ODE. So there's not just a recurrence relation, it's an ODE: u prime equals f of u, u of 0 equals u0. We want to solve this on the interval 0, 1, and we want to use multiple shooting. Now shooting methods were not developed for initial value problems. Shooting methods were developed for boundary value problems. So in general, for a shooting method, what you would do is you would have a second-order problem. You would have an initial condition. You would have a final condition. This is u, this is x. We go from 0 to 1. 
We know the solution should start at a. We know the solution should end at b. And the solution needs to satisfy this second-order equation. Now a shooting method would say, I don't know how to calculate this trajectory, but I know how to calculate the trajectory with a second condition at 0, a slope. Because that's now an initial value problem. For initial value problems, there were many methods, Euler or Runge-Kutta. But you don't know what s is. S is the slope here. So you just start with some s and you shoot. Oops, that was too high. So the angle was too large. So you do a smaller angle. You try like this. Oops, that's too low. So you should go somewhere in between. So you shoot here. Now you see why it's called a shooting method. Because you really shoot at a target and you try to hit this target. Now the problem is, in our case, we have an initial value problem. We don't have any target. We just have a condition on this end. There is nothing to shoot at. In our case, the problem is actually this. If I write this as a system, that's what we try to solve. So how can you shoot in that case? You can shoot if you try to do multiple shooting. So you introduce time subintervals. In my example, I introduced 3. I want to solve this problem on the first interval with a given initial condition, which I here call capital U0. I assume I don't know what it is. I want to solve it on the second interval with an initial condition I call U1, which I don't know what it is. And I also want to solve it on the last interval with an initial condition that I don't know what it is. So these are the shooting parameters. And because I have now three intervals, I have three conditions I have to satisfy. The first shooting parameter should be the initial condition of the problem. The second shooting parameter should have the value where the trajectory arrives on the first interval after an interval of one third. And the last shooting parameter should have the value where the solution of the middle interval arrives after having passed through its interval. So you see, I have three equations for three unknowns, capital U0, capital U1, and capital U2. So I can collect these three equations as a vector equation, F of U, which is this, this, and this, equals 0, for the three unknown values. Now I have a nonlinear system of equations, which I would solve using Newton's method. So what happens if I apply Newton to this system? That's now a multiple shooting method for an initial value problem. So there is a copy of the system up there that I want to solve. Newton's method calculates a new approximation at step k plus 1 from the old approximation at step k minus the Jacobian inverse. You can see this is the Jacobian. This is the derivative of this function up there with respect to U0, U1, and U2. So you get 1 on the diagonal. And you get the derivative of the solution with respect to the initial value at the end of the interval in the subdiagonal. And then here you have the evaluation of F at iteration k. So that's a plain Newton method to solve this nonlinear system. Now to see what this looks like, I just multiply this matrix here through on the other side. So this term here will not have a matrix anymore. And these terms here will get the matrix. And I use the bi-diagonal form. Then I get a recurrence relation that looks like this. The first shooting parameter is always U0. That makes sense because I know the initial condition. The first one should never change. 
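For reference, one way to write the recurrence just described, using U_n^k for the n-th shooting parameter at Newton iteration k and u_n for the exact solution of the subinterval problem; this notation is my own shorthand for what the slides denote.

```latex
\[
U_0^{k+1} = u^0, \qquad
U_{n+1}^{k+1} = u_n\big(t_{n+1};\,U_n^{k}\big)
  + \frac{\partial u_n}{\partial U_n}\big(t_{n+1};\,U_n^{k}\big)\,
    \big(U_n^{k+1} - U_n^{k}\big),
\]
where $u_n(\,\cdot\,;U_n)$ solves $u' = f(u)$ on $[t_n, t_{n+1}]$ with $u(t_n) = U_n$.
\]
```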
Now the second shooting parameter equals, in this Newton iteration, the solution on the first interval at the end, given a shooting parameter U0, plus the Jacobian term, times the difference between two iterates. And it continues like this. So this method I can also write for n intervals, where this is now the interval count, and this is the Newton iteration count. The new value I get from an exact solution on each interval, plus the Jacobian term evaluated at the old shooting parameter, times the difference between new and old. That's a multiple shooting method. Now maybe I can point out something important here. This here is this exact solution on each interval. But because I know the initial condition from the previous iteration, this can be all done in parallel. Here there is also the new iteration of Newton. The new iteration is here and here. So this cannot really be done in parallel. This sort of depends on each other. But this can all be done in parallel. So how would that work? Here's a cartoon. We've seen that the initial condition is always the same. It's the correct one. So no matter what they do as the first iteration, this is going to be the exact solution. This will start somewhere at an initial guess. This will start somewhere at some other initial guess. But the first interval is going to be exact. Now in the next step, this endpoint is exact. So the second interval necessarily has to become exact, because it starts with the correct value. And in the next iteration here, we have the correct value. So necessarily, even if you don't do Newton, any reasonable fixed point iteration would give you the solution. So in three steps, we'll have the solution. That's like in the method we've seen of Bellen and Zennaro. These types of methods, they just give you the solution the latest after n iterations. But can they do better? Because there's no interest if you use three processors and if you iterate three times, you have done three integrations sequentially. So there's no point. So it must be faster than this. Otherwise, there's no interest. Now, Chartier and Philippe, they showed that for certain problems, it can go faster. But for certain other problems, it might not go faster. And here is an example from their paper. So here is the time axis. The exact solution is a solid line. It's the solid line you see. It's an oscillatory solution. It's a non-dissipative example here. It's an ODE. And the first iteration of this multiple shooting is the dotted line, which is pretty close. The second iteration is further away from the solution. So in that case, the method does not seem to work. It will eventually, as you see, it gets the first correct, the second correct. So if you march through, it will get the solution. But it's not contracting. Even though there is a Newton method running. So there are properties of the system that make it not work. But if the system is a dissipative problem, and they prove this, then you get convergence quadratically on the whole length. You get the solution much faster than going sequentially. And the much faster depends on your problem. So here is a problem where the much faster is not too bad. If you use 100 processors, you get about 14 times as fast. But then eventually, it's not getting any faster. Here is an example where with 100 processors, you get about three times faster, three and a half times faster. You still get faster. If somebody had asked you before my lecture, can you go faster than sequentially, most of you would have said no. 
You can. But not for all problems. So that's a very groundbreaking contribution. And the next very important contribution is by Saha, Stadel, and Tremaine. They come from the application area of solar system dynamics and planetary dynamics. And they say: we describe how long term solar system orbit integration could be implemented on a parallel computer. The interesting feature of our algorithm is that each processor is assigned not to a planet or a pair of planets, but to a time interval. Thus, the first week, second week, up to the 1,000th week of an orbit are computed concurrently. The problem of matching the input to the n plus first processor with the output of the nth processor can be solved efficiently by an iterative procedure. And then they say: our work is related to the so-called waveform relaxation methods. So this is a type of method that I will discuss in the next lecture. But here, they already point out a relation. I will not explain the quadrature formula, the way they got their idea. But I show you what they do. They solve a Hamiltonian problem, because planetary motion is a Hamiltonian system. There is energy conservation. There is symplecticity as well. Hamiltonian systems you can write with a Hamiltonian h. And there are two components: p dot equals minus the derivative of h with respect to q, and q dot equals the derivative of h with respect to p. That's a Hamiltonian system. So the Hamiltonian for planetary motion can be written as I wrote it above. There is a major part, h0. And h0 depends on the sun attracting the planets. That's the strongest force that makes planets go around the sun. And then there is a small second term in this Hamiltonian. And this term describes interaction between planets. Planets also attract each other. But the influence is very small. The main force comes from the sun. So in the solar system, this is a typical Hamiltonian. Now if we collect the unknowns p and q in a y, and the right-hand sides h_q and h_p in an f, then this is just a system of ODEs, like the ODEs we've seen before. So in particular, one can write a multiple shooting method, like the one of Chartier and Philippe. So here is Newton's method applied to this problem, after multiplying through by the Jacobian. We get: the new approximation at Newton step k plus 1, at time interval n plus 1, equals the exact solution on the previous interval, plus a Jacobian term, times the difference. Now there is the key new idea that comes here. This Jacobian term is a derivative of the original Hamiltonian system with respect to the initial condition. And the original Hamiltonian has two components, an important one and a small one, depending on epsilon. And what they do now is they say: we do not calculate the exact Jacobian here. We only use the Hamiltonian for the sun, because that's an integrable system. We can evaluate this exactly. And we don't even calculate the derivative. We say a derivative applied to the difference can be approximated by the difference of two trajectories. This just comes from Taylor expansion. If you expand this term here about y nk, then you get first this term, and then next you get this term. So that's a key new idea. They don't calculate the Jacobian of the exact problem. They calculate an approximate Jacobian, and they don't even calculate the Jacobian. They make it a difference of trajectories. And they prove that now this method does not converge quadratically anymore. 
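In symbols (my notation, not theirs), writing F for the accurate propagator of the full Hamiltonian H = H0 + epsilon H1 over one subinterval and G for the cheap propagator that uses only the Kepler part H0, the two approximations just described replace the Newton update by:

```latex
\[
U_{n+1}^{k+1}
 = F\big(t_{n+1}, t_n, U_n^{k}\big)
   + \frac{\partial u_n}{\partial U_n}\big(U_n^{k+1}-U_n^{k}\big)
 \;\approx\;
   F\big(t_{n+1}, t_n, U_n^{k}\big)
   + G\big(t_{n+1}, t_n, U_n^{k+1}\big)
   - G\big(t_{n+1}, t_n, U_n^{k}\big).
\]
```

Dropping the epsilon part of the Hamiltonian in G is what costs the quadratic convergence and gives instead the gain of a factor epsilon per iteration mentioned next.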
But at each iteration they gain an accuracy of a factor of that ε which they dropped in the approximation. Here is an example where they verify that they get accurate trajectories, and here is an example showing how many processors they used and how many iterations they need — and you see they do not need many: with 10 processors they need four, with 100 processors five or six, with 1,000 processors seven or eight.
And now comes the seminal paper by Lions, Maday, and Turinici in 2001. I saw this paper very early on: Laurence Halpern asked me to read it and tell her what I thought of it. So I read it, and I did not think much of it, because I saw it converges step by step, like all the other methods we have seen; I did not see anything else that would be interesting. Here is the description of the method. They describe it for a scalar problem, and they explain that the name comes from the main motivation, real-time problems: hence the proposed terminology "pararéel" — parareal, a parallel-in-time method for problems in real time. Fifteen years ago, if you searched "parareal" on Google, it asked you, "did you mean parallel?" Now if you search parareal, it gives you hundreds, thousands of references.
So what is this method? For an ODE ẏ = −ay on the interval (0,T), with a given initial condition, you first use a backward Euler method to get a rough approximation of the solution, like in Nievergelt. Then the recipe says: compute exact solutions of the underlying problem on each time interval, starting from these approximate values. And then a complicated recipe follows: you have to compute jumps, which are the differences between the exact values and the approximate values you had; then you do a coarse Euler propagation of the jumps, with the jump on the right-hand side; and then you update the values you got. The method was invented in this form using the idea of virtual control of Jacques-Louis Lions. But in this form it is not clear what the method does; it is very difficult to understand.
So what do they do in the paper? "C'est alors un exercice que de montrer la proposition" — it is then just an exercise to show the proposition: the parareal scheme is of order k, which means there exists a constant C, depending on k, such that the error after k iterations of the algorithm is bounded by this constant times ΔT to the power k, where ΔT is the size of the time intervals of these shooting steps. Remember, you start with an Euler method, which is order one: one iteration gives an order-two method, another iteration an order-three method, and so on. But there is an unspecified constant in front. So they do not study how the method converges as an iterative method; they only show that after k iterations you have a method of order k — from an Euler method of order one, four iterations give you fourth order.
Now, this method can be written in another form, which established itself in the years following this publication, because there was a lot of interest in it. The best way to write it is as follows. I go back to my notation u' = f(u). It needs two ingredients. It needs a coarse propagator G: an integrator that takes an initial condition u1 at t1 and gives an approximate solution at t2.
It must be very, very cheap — one step of forward Euler, say. And it needs a fine propagator F. This does exactly the same: it takes an initial condition at t1 and gives a solution at t2, but it is very accurate and very expensive. And then, if you rewrite the method with the propagation of jumps — you work, you work — you find this recurrence relation. Now you see this looks very much like what we have seen before: you get the new approximation at iteration k+1 from an accurate solve plus the difference of two coarse approximations. That was the first result we proved. It looks completely trivial now, but it took us a while to figure it out: this is just a multiple shooting method where, instead of applying Newton exactly, you approximate the Jacobian term in Newton on a coarse grid. Not like in Saha et al., where they removed part of the Hamiltonian to make it cheap — here it is simply approximated on a coarse grid. So the parareal algorithm is a multiple shooting method for an initial value problem where the Jacobian is approximated on a coarse grid. That's it.
Now here is a convergence result that I like a lot, because it describes all the properties of this algorithm in one estimate. I worked on this quite hard. I first had it for a linear problem, published with Stefan Vandewalle; then I got it going for nonlinear problems and presented it in our seminar in Geneva. At the end, Ernst Hairer said, "I think you can simplify this proof," and he showed me how. The proof is now about a page long; it is very elegant, it uses generating functions, and I learned a lot doing it.
So what does the result say? Under some technical assumptions, the error after k iterations, measured as the maximum over all time intervals, is related to the initial error by three factors. Look first at this product, which runs from 1 up to the number of iterations. At the first iteration it multiplies the error by n−1 — that is very bad. After the next iteration it is multiplied by (n−1)(n−2), which is even worse. So this part of the bound grows like crazy. Why is it important? Because if I iterate n times — if k equals n — the last factor in this product is zero. So this term tells you that you converge at the latest after n iterations; the result contains that property of the algorithm. But it contains more. There is a benign factor, one plus a small term to the power n, and then the most interesting term: ΔT to the power k(p+1), for a coarse method of order p — if it is Euler, p is 1; for a second-order method, p is 2. So at each iteration you gain an order, as in the result of Maday. But in addition you divide by k factorial, and this k! grows faster than any constant to the power k, so even if there is a big constant to the power k, eventually the factorial dominates. This shows superlinear convergence, and that is a waveform relaxation type convergence result, as we will see later. If I rewrite this, I can also recover exactly the result of Lions, Maday, and Turinici, but for the general case of a method of order p, and with the constant C depending on k written out explicitly in terms of the constants in the assumptions. It is a very sharp result.
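Here is a minimal sketch, in the G/F notation just introduced, of how one might implement the parareal recurrence — an illustration of mine, not the authors' code. The choices of forward Euler for G, RK4 for F, and the step counts are assumptions made only for the sketch.

```python
# Minimal parareal sketch: U_{n+1}^{k+1} = F(U_n^k) + G(U_n^{k+1}) - G(U_n^k).
# The expensive F evaluations are independent across n, so they could run in parallel;
# the G-correction sweep is cheap and sequential.
import numpy as np

def parareal(f, u0, T, N, K, fine_steps=200):
    t = np.linspace(0.0, T, N + 1)
    dT = T / N
    G = lambda u: u + dT * f(u)                       # coarse: one forward Euler step

    def F(u):                                         # fine: many RK4 steps per interval
        h = dT / fine_steps
        for _ in range(fine_steps):
            k1 = f(u); k2 = f(u + 0.5*h*k1); k3 = f(u + 0.5*h*k2); k4 = f(u + h*k3)
            u = u + h/6*(k1 + 2*k2 + 2*k3 + k4)
        return u

    U = [np.array(u0, float)]
    for n in range(N):                                # initial coarse sweep
        U.append(G(U[-1]))
    for k in range(K):
        Fu = [F(U[n]) for n in range(N)]              # parallel across the intervals
        Unew = [np.array(u0, float)]
        for n in range(N):                            # sequential correction
            Unew.append(Fu[n] + G(Unew[n]) - G(U[n]))
        U = Unew
    return t, np.array(U)
```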
Now, does this work? Here is an example that was suggested by Jean-Pierre Eckmann in 2004, in a seminar I gave in Geneva at that time. I told the audience, this is to compute problems in real time, for example weather prediction — because if your weather prediction only finishes computing tomorrow, it is not a prediction anymore. And Jean-Pierre Eckmann said: but you don't have any example; you should at least do the Lorenz equations, because the Lorenz equations were the first weather prediction model. Very, very simple: a three-dimensional problem, just three unknowns, nonlinear, and it has the butterfly attractor. So I did this just to see if it works, and this is an interesting cartoon of how the iterates converge. There are two panels. The top panel shows the three coordinates; you see the oscillations, which correspond to the oscillations in the wings of the butterfly. The solid line is the exact solution, which you can only compute up to about t = 10 in double precision. The dashed line is my first approximation from this method on the coarse grid, a fourth-order Runge–Kutta method. At the beginning it is quite good, but with the chaotic nature it very quickly goes wrong; it even ends up in the wrong wing of the butterfly. The errors are very large — I plot them in the other panel, and they are basically of order one over the whole time interval. I also added a red line: it shows how far I would have gotten with one processor in the same time. I use about 180 processors here, and if I do one iteration with these 180 in parallel, with a single processor I could only have integrated up to the red line; the dots up there show the same.
Now I start iterating this parareal method. You see: it is still in the wrong wing of the butterfly, still in the wrong wing, now it is in the right wing. Sequentially I could have gotten up to here in the same time; in parallel I have computed up to here, but the error is still of order one. And then this interesting thing starts happening: you start to see contraction, uniform over the whole time interval. Here it goes down. Sequentially I could have gotten up to here, and with this algorithm I have an accuracy of 10^-5 up to here. So this does work: I can get the solution faster than sequentially, quite a bit faster — but I am burning processors, about 180 of them. Still, you see this is a working method for this problem.
Here is a second example, the Arenstorf orbit. It is also a very fun example: if you were on the moon and shot a satellite at exactly the right angle and with the right speed to try to get back to Earth, and you get it slightly wrong, the satellite misses the Earth, does a really strange loop, another loop, another loop, and comes back to the moon. That is an Arenstorf orbit. So I compute this Arenstorf orbit with the parareal algorithm. The initial Euler approximation is completely wrong — just a spiral upwards, error of order one. But already the first parareal correction gives you an approximation that is quite good, then even better, and you see the error go down: 10^-8, 10^-10, 10^-12.
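In the same spirit, one could feed the Lorenz right-hand side into the sketch above. The parameter values and the initial point below are the standard ones and are my assumption, not the lecture's exact setup; note also that the lecture used a fourth-order Runge–Kutta coarse propagator, whereas the sketch uses forward Euler for G, so for this chaotic problem one may need shorter coarse intervals or should swap in an RK step for G.

```python
# Lorenz right-hand side (standard parameters sigma=10, rho=28, beta=8/3 -- an assumption),
# plugged into the parareal sketch defined above.
import numpy as np

def lorenz(u, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = u
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

u0 = np.array([20.0, 5.0, -5.0])          # some starting point near the attractor (assumption)
t, U = parareal(lorenz, u0, T=10.0, N=500, K=8)   # the lecture used ~180 intervals and an RK4 coarse G
```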
So here is a summary of this first lecture: we have seen the methods I put on this transparency, and given the time I have spent, I would say we should definitely start again at 3. So you have about an eight-minute break. Thank you.
So I will start the second lecture. The second lecture is about waveform relaxation and domain decomposition. You have already seen in the first lecture that waveform relaxation appeared as a name; this lecture follows the red track, waveform relaxation and domain decomposition. These are iterative methods, and the first versions go much, much further back in time than Nievergelt, to Picard and Lindelöf. Then there was a very important engineering contribution. Then I worked a lot on this, alone in my thesis and then with Frédéric Nataf and Laurence Halpern, and there are more recent contributions from the last one or two years.
So what are these waveform relaxation methods? One of them you must have seen in an ODE course: it is a way of proving existence and uniqueness of solutions of ODEs, and it is Picard who invented it, in the paper "Sur l'application des méthodes d'approximations successives à l'étude de certaines équations différentielles ordinaires". Here we have v' = f(v) with a given initial condition. You first write this as an integral equation: v(t) = v(0) + ∫₀ᵗ f(v(τ)) dτ. And Picard, because he could not show directly that this equation has a solution, wrote it in this integral form and said: since I still cannot solve the integral equation, I just do a relaxation. I start with a v⁰, which I suppose I know; then the integration is easy; I add the initial condition and find a new approximation. With this new approximation I go into the integral again — integration is easy — I add the initial condition, I get a new approximation, and I iterate like this. This was Picard's méthode des approximations successives for finding the solution of this equation. Picard proved convergence of this iteration to a limit that satisfies the ODE, and with this he proved existence of the solution of the ODE on an interval (0,T). A year later, Lindelöf gave a convergence estimate for this iteration, and it is a very, very famous estimate: if you iterate the Picard iteration n times, the distance between the solution of your ODE and your n-th approximation is bounded by a constant times T to the power n, where n is the number of iterations, divided by n factorial, times the distance of the initial guess from the true solution. Now, if you remember the previous lecture: this factorial I had as well, with k the iteration count, and this power of T I had as well. So parareal is somehow a method that has inherited this type of convergence behavior, with some additional terms. This is the typical convergence of a fixed-point iteration applied to initial value problems. But this is a theoretical tool, not a method designed to solve the equation: in his textbook from 1952, Milne says explicitly that he would never recommend such an iteration to actually solve an ODE — it is not an efficient way to do it.
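As a small numerical illustration of the Picard iteration just described — my own sketch, and deliberately not an efficient solver, which is exactly the point being made here — one can discretize the integral with the trapezoidal rule:

```python
# Picard iteration u^{k+1}(t) = u(0) + int_0^t f(u^k(tau)) dtau on a fixed grid,
# with the integral evaluated by the cumulative trapezoidal rule.
import numpy as np

def picard(f, u0, T, n_grid=200, iters=10):
    t = np.linspace(0.0, T, n_grid)
    u = np.full_like(t, u0)                 # initial guess: the constant waveform u0
    for _ in range(iters):
        fu = f(u)
        integral = np.concatenate(([0.0], np.cumsum(0.5*(fu[1:] + fu[:-1])*np.diff(t))))
        u = u0 + integral
    return t, u

# u' = -u: the iterates converge quickly on a short time interval, slowly on a long one
t, u = picard(lambda u: -u, 1.0, T=2.0)
print(np.max(np.abs(u - np.exp(-t))))
```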
Nevertheless, this idea kept coming back in the engineering community. In 1982 there was a major invention for VLSI design, by Lelarasmee, Ruehli, and Sangiovanni-Vincentelli. And it is nice, these IEEE papers have pictures, so you can see what they look like: this is Lelarasmee, this is Ruehli, this is Sangiovanni-Vincentelli. Ruehli was Swiss: he worked for IBM Rüschlikon, then emigrated to the US and worked at the IBM lab in Yorktown Heights until he retired. I got to know him when I started working on these methods, and since I knew he was originally from Switzerland, I started to babble Swiss German to him — and from deep down, Swiss German came back. He had not spoken it for 30 years, but he still knew it. I still meet him occasionally; he had a skiing accident this spring, but he is fine. He has been retired for about eight or ten years now; a very nice guy, and a very good skier.
Here is why they tried such a waveform relaxation method, in their own words: "The spectacular growth in the scale of integrated circuits being designed in the VLSI era has generated the need for new methods of circuit simulation. Standard circuit simulators, such as SPICE and ASTAP, simply take too much CPU time and too much storage to analyze a VLSI circuit." The concrete problem was that the next generation of processors had so many transistors on it that they could not simulate the whole chip on the current generation of processors — there was not enough memory and not enough computing power. So they tried to find a method to simulate the too-big new generation on the current architecture.
Here is the original example they used, a MOS ring oscillator from 1982. The circuit has three voltages that are measured, V1, V2, and V3; there is ground here, here, and here; there is a feedback loop from the last voltage to the gate of the first transistor; and there is a voltage source here that drives the ring oscillator. Now, why is this an oscillating circuit? Suppose V1 is at 5 volts. V1 connects to the gate of this transistor; with a voltage on the gate, the transistor opens, which means there is a wire connection to ground, so V2 must be at zero. If V1 is at 5, V2 must be at 0. Then there is a zero at the gate of the next transistor, which therefore closes: no wire connection to ground, and V3 is pulled up to 5 volts. Now 5 volts is fed back into the gate of the first transistor, which opens it, so V1, which was at 5 volts, is connected to ground and goes down to 0. And now you can see how it oscillates: the zero closes the next transistor, which pulls its output up to 5, which opens the next one, which pulls its output down to zero, and so on. That is why it is a ring oscillator.
They wanted to simulate such a circuit — not this one, but huge circuits with millions of voltage nodes; they used this one to explain their method. Using Kirchhoff's and Ohm's laws you can write a system of ordinary differential equations for these three voltages, with some initial condition; v contains the three unknowns, and the system simulates the up and down of the three voltage values. So their idea of waveform relaxation is: suppose this circuit is too big to be simulated as a whole — let us partition it into smaller subcircuits. Here is one subcircuit, and this wire was chopped: there was a wire connection to this gate, and it was cut. Then there is a second subcircuit, and the connection was again cut, and also the feedback loop was cut. But then this certainly cannot have the correct solution anymore.
It is not possible that this gives the same solution. So, as an engineer, you add voltage sources along each wire that you chopped, which should feed in the signal that is missing because you cut the wire — and what you feed in is simply the value you get from the previous iteration of the algorithm. If I do this for the system of ODEs, you see the system written for the three voltage values: on the first subsystem I solve the ODE as it was, but these two values, where the wires have been chopped, I just take from the previous iteration. Initially I can start with 0, or 5 volts, or anything. This I can solve simultaneously with the middle subsystem, which I solve for the middle voltage, feeding in the neighboring values where the wires were chopped — at the first iteration just some guess — and the last subsystem likewise. Now, because the signals that travel along wires are called waveforms in electrical engineering, and because instead of the true waveform we take a relaxed one from the previous iteration, these methods are called waveform relaxation. That is the name they gave to them. And you can see this is a parallel variant: I can solve all three subsystems in parallel, because I relaxed both neighbors. One could also use k+1 here and k+1 here; then it would be sequential. And now you recognize that this is like a Jacobi relaxation in linear algebra, and with k+1 it would be a Gauss–Seidel type relaxation — except that we do not solve a linear system, we solve a system of ODEs at each step. So this iteration produces a solution in time, then it exchanges information, produces a solution in time, exchanges information.
How does it converge? Here is a convergence study they did — they had no clue how this would converge, and we know it is an oscillating circuit. You see that after 1, 2, 3, 4 iterations they actually have the solution: four iterations on this length of time interval. But if the time interval were shorter — if it were only 1 — you would basically have it after one iteration; if it were about one and a half, after two. So the shorter the time interval, the faster this method converges. That is waveform relaxation.
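Here is a generic Jacobi waveform relaxation sketch for a linear system v' = Av — my own illustration standing in for the three-voltage circuit, not the simulator from the paper; the backward Euler time discretization and the constant initial waveforms are assumptions of the sketch.

```python
# Jacobi waveform relaxation for v' = A v: each component is integrated over the whole
# time window, while the coupling to the other components is taken from the previous
# iterate ("relaxed waveforms").
import numpy as np

def jacobi_wr(A, v0, T, n_steps=200, iters=5):
    m = len(v0)
    t = np.linspace(0.0, T, n_steps + 1)
    dt = T / n_steps
    V = np.tile(np.asarray(v0, float), (n_steps + 1, 1))   # initial guess: constant waveforms
    for _ in range(iters):
        Vnew = V.copy()
        for i in range(m):                                  # subsystems are independent -> parallel
            for n in range(n_steps):                        # backward Euler on the i-th subsystem
                coupling = sum(A[i, j] * V[n + 1, j] for j in range(m) if j != i)
                Vnew[n + 1, i] = (Vnew[n, i] + dt * coupling) / (1.0 - dt * A[i, i])
            # coupling uses the OLD iterate V -> Jacobi; using Vnew for j < i would give Gauss-Seidel
        V = Vnew
    return t, V

# generic 3-component example (stand-in for the three voltages of the ring oscillator)
A = np.array([[-2.0, 1.0, 0.0], [1.0, -2.0, 1.0], [0.0, 1.0, -2.0]])
t, V = jacobi_wr(A, [1.0, 0.0, 0.0], T=1.0)
```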
Now I want to show you how domain decomposition comes into these time-dependent problems, and to do so I first explain the three classical domain decomposition methods for steady problems. The first one is the Schwarz method, which you saw this morning. I use a very simple example, like Frédéric did, but I do not even have a right-hand side, so these are directly the error equations Frédéric had this morning: I take Laplace's equation on the interval (0,1) with homogeneous boundary conditions, so the solution I want to compute is zero. And I do a Schwarz method, which means I decompose the domain into two subdomains — you see the first and the second — with overlap from α to β. I solve my problem on the first subdomain, starting with some initial guess u2⁰ here. Laplace's equation in 1D has a straight line as solution, so this is my solution. Now I grab this solution at the interface to solve on the second subdomain; I get a straight line there. Then I take the solution here again and solve on the first subdomain, and on the second, and so on. You can see how this method converges; that is the classical alternating Schwarz method. We have also seen a parallel version, due to Lions: if you just put an n−1 there as well, you compute the same sequence, just doubled.
Now there is another basic domain decomposition method, the Dirichlet–Neumann method. It was invented much, much later than the Schwarz method, about a hundred years later, by Bjørstad and Widlund. Bjørstad is organizing the next domain decomposition conference in Spitsbergen; it will be in February. Spitsbergen is 2,000 kilometers north of Norway, and in February there is permanent night there. So if you want to go to a conference you will never forget, and where you have no jet lag, go to that one — you can sleep anytime.
So this is the Dirichlet–Neumann method. What is different between this method and the Schwarz method? The fundamental difference is that one subdomain takes a Dirichlet value at the interface, as in the Schwarz method, but the other subdomain uses a Neumann value — and there is no overlap. The first solve is exactly like in the Schwarz method, you see, except the subdomains do not overlap. Now you have to produce a solution on the second subdomain that has this slope at this point. How does that solution look — can you picture it? It must be a straight line with the same slope as that one; that is what the second subdomain does. Now the value at this interface is used by the first subdomain as a Dirichlet value, so you get a straight line here; then this slope has to be used to produce a solution there; and you can see this will spiral in — it converges to zero. The Schwarz method converges like this; this one converges like that.
But what happens if I move α? I put α to the left, so now suddenly my Neumann domain is bigger than the Dirichlet domain. I start again with a value, I pick this slope, I produce a solution on this interval with this slope; I pick this Dirichlet value, solve; I take that slope and produce a solution on this domain — and oops, here it actually does not work. So if the Neumann domain is smaller than the Dirichlet domain it spirals in, but if the Neumann domain is bigger than the Dirichlet domain it spirals out. That is not like the Schwarz method, which works for any overlap. That is why these methods need a relaxation parameter: you cannot just do a Neumann solve, a Dirichlet solve, a Neumann solve, and so on. What you do is introduce a relaxation parameter θ: instead of starting directly from the value of the other subdomain, I start from a value I call h. Given h⁰ I solve, I take that slope, I solve the Neumann problem — but then I do not take this value here directly to continue solving. I take an average: θ times this value plus (1−θ) times what I had before, something in between, and that gives me the new h. And there is actually something brilliant about this method: if I choose θ such that this average is close to zero, or exactly zero, the method becomes a direct solver, or a very rapid solver. So the relaxation parameter is very important in this method; it can make it work — and without it, it does not work in general as an iterative method.
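For the record, here are two quick one-dimensional calculations of my own, consistent with the two cartoons above (error equation u'' = 0 on (0,1), u(0) = u(1) = 0):

```latex
% Alternating Schwarz with subdomains (0,\beta) and (\alpha,1), overlap \alpha<\beta:
% the subdomain solutions are straight lines, so the interface error contracts by
\[
  u_2^{k+1}(\beta) \;=\; \frac{\alpha}{\beta}\,\frac{1-\beta}{1-\alpha}\, u_2^{k}(\beta),
  \qquad 0 < \frac{\alpha}{\beta}\,\frac{1-\beta}{1-\alpha} < 1
  \ \text{ whenever there is overlap.}
\]
% Dirichlet--Neumann with the interface at \alpha and relaxation \theta:
% the interface value satisfies
\[
  h^{k+1} \;=\; \Bigl(1 - \tfrac{\theta}{\alpha}\Bigr)\, h^{k},
\]
% so it contracts iff |1-\theta/\alpha|<1, and becomes a direct solver for
% \theta=\alpha (\theta=1/2 when the interface is exactly in the middle).
```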
Then there is the third classical domain decomposition method, Neumann–Neumann, invented around the same time in France by Bourgat, Glowinski, Le Tallec, and Vidrascu. This method does not alternate; it does double steps. In the first step you solve a Dirichlet problem on both subdomains, starting from some initial guess — so that costs twice as much as the other methods, two solves. Then you compute the normal derivatives you get at the interface from both sides, and you impose their sum as a Neumann condition, again on both subdomains; so these two corrections have the same slope. We have now computed correction functions on the two subdomains: first two Dirichlet problems, then two Neumann problems. And here you can see this cannot possibly work as it stands — you again need a relaxation parameter. But with a relaxation parameter, taking a suitable average of these values to subtract, you can make this into a fast method; if you take exactly the correct value in 1D, you can even make it a direct solver.
So these are the three methods known in domain decomposition. The first one gave the Schwarz method in higher dimensions, the additive Schwarz variant (which is not an iterative method by itself), RAS, and optimized Schwarz methods. The second one, Dirichlet on one subdomain and Neumann on the other, did not propagate very far, because with many subdomains it is really annoying to decide which subdomain should do Dirichlet and which should do Neumann. Whereas the third one went very, very far: it became FETI if you do first the one solve and then the other, or Neumann–Neumann and balancing Neumann–Neumann; if you switch the order, these two families are basically the same.
My interest here is in time-dependent problems, so I want to understand how these methods converge if I now solve a heat equation or a wave equation. Before, I had only space; now there is also a time dimension. For a Schwarz method I would still have overlap between α and β, as in the steady case, but the problem I want to solve on each subdomain is now a space–time problem. So I start with some initial guess along this interface timeline and solve in space–time; then I extract the solution at the other interface and solve in space–time on the other subdomain, and I alternate like this. Why could this be a good idea? We heard this morning that processing power is almost free these days, whereas communication is getting annoying. This method has a much better ratio of computation volume to communication than time stepping: each time step does not give much computation, and after each time step you would have to communicate between the subdomains before doing the next step. In these methods you solve a whole space–time problem — a lot of computation — and then you send the data along the interface to your neighbor once. That may be one of the interesting features of this algorithm. If it is a non-overlapping method, like Dirichlet–Neumann or Neumann–Neumann, I can also solve space–time subdomain problems and alternate, or do two Dirichlet space–time solves followed by two Neumann space–time solves.
So how do these algorithms behave? I start with a result for the wave equation; it is an interesting result.
So I apply this algorithm in the Schwarz variant — I call it Schwarz waveform relaxation — to the second-order wave equation: I solve the wave equation in space–time on the first subdomain with some initial guess at the interface, give the trace to the neighbor, solve the wave equation on the second subdomain, and go back and forth. This is an algorithm I looked at in my thesis, and it is a direct solver: it converges after a finite number of steps. As soon as the iteration number n is bigger than the length of the time interval times the propagation speed of the wave divided by the overlap, you have the solution. And this is not only true for this one-dimensional example; it holds in full generality. It is a direct solver, and one can also build an optimized variant which can again be made into a direct solver.
I would like to show you why it converges in a finite number of steps — the proof is a drawing. The wave equation is a hyperbolic problem, and hyperbolic problems have a finite speed of propagation. So look at the first solve on Ω1 in space–time: I know the initial condition, because it is given by the problem, and I know this outer boundary condition, also given by the problem. The only thing I do not know is what should come from the other subdomain along this interface line, so along this line, in my first iteration, I have an incorrect value. But this incorrect value can only propagate into the domain at a finite speed. So as long as I am below this characteristic, which is given by the wave speed of the equation, I have the exact solution, because the error cannot propagate faster than the characteristic line. So in my first iteration I have the exact solution here, and above the characteristic I have garbage — this is all wrong. But this means that when I give data to Ω2 along this line, the data are correct below the characteristic and only wrong above it, so the second subdomain has the exact solution under its characteristic. And now you see why it converges in a finite number of steps: at the next iteration I have the exact solution along here, and it becomes exact up to the end of the green line. If my interest is only a time interval of this length, I need two iterations and I have the solution. So for this type of equation it is a direct solver. This also shows you how to use it: choose the setup so that you basically need one solve per subdomain, and then fix the overlap accordingly. That is a good way of scaling this algorithm: it scales to many subdomains, no coarse grid is needed — a perfectly scalable algorithm for the wave equation.
Here is an example with non-matching grids — another advantage of these types of methods. Here is time, here is space, I have six subdomains, and you can see different wave speeds in each subdomain. For a wave equation it is very important to be close to the CFL condition with your discretization in order to have an accurate solution, so I choose different space–time meshes such that I am close to the CFL condition in each subdomain. Then I solve with this waveform relaxation algorithm. There is an initial signal here, which propagates throughout the whole domain and is reflected at the interfaces — these are real interfaces, different materials with different wave speeds, so you get reflections. And I computed this with an optimized Schwarz waveform relaxation algorithm that uses different time steps in different subdomains, so non-matching grids.
How does the Dirichlet–Neumann waveform relaxation algorithm behave for the wave equation? Here I just copied the algorithm I explained in the steady case, but in space–time: you start with some initial guess, do a Dirichlet solve in space–time, then a Neumann solve in space–time on the second subdomain, then the relaxation step, and you go again. This was the topic of the PhD thesis of Bankim Mandal in Geneva, and there are interesting things one can show. This algorithm also converges in a finite number of steps — it is a direct solver — depending on the choice of the relaxation parameter and on where the interface is in this simple one-dimensional setting. If the interface is exactly in the middle and θ is exactly one half, you get the solution in two iterations. If the interface is not in the middle and θ is one half, you still converge in a finite number of steps; it is a direct solver. But if the interface is in the middle and θ is not one half, you do not get the direct solver. So θ = 1/2 is key for this algorithm: if θ is not one half, you lose a lot. We also studied the Neumann–Neumann waveform relaxation algorithm for the wave equation, and the results are very similar: it is a direct solver under certain conditions on the interface position and θ, and if θ is not well chosen it again converges only linearly. So these are domain decomposition methods used in a waveform relaxation fashion — they solve in space–time — and for the wave equation we have a full understanding: they are all direct solvers for good choices of the parameters.
Now this slide contains results from about ten years of work: the study of the convergence of these algorithms for the heat equation. This is pretty hard; it involves a lot of kernel estimates. The first of these results I obtained in my thesis, for Schwarz waveform relaxation with two subdomains and with n subdomains. You see the convergence is superlinear: the bound is a complementary error function. It depends on the overlap — you see β − α — and on the length of the time interval: if the time window is short, it is really fast; if the time interval gets longer, the methods get slower. It works for two subdomains and for n subdomains. All these algorithms are of waveform relaxation type, but you do not see the 1 over (iteration number) factorial in these bounds, whereas, as I showed you before, waveform relaxation usually converges like 1/n!. If you expand any of these results using Stirling's formula, you see they are all actually faster than 1/n!: it is the heat kernel that gives additional convergence speed to these methods. The type of convergence is the same for Schwarz waveform relaxation, Dirichlet–Neumann waveform relaxation, Neumann–Neumann waveform relaxation, and so on — but in the argument of the error function or the exponential, Dirichlet–Neumann and Neumann–Neumann have no overlap length, because there is no overlap; what they have is the subdomain length. So Dirichlet–Neumann and Neumann–Neumann have an advantage there over Schwarz waveform relaxation. These are the classical variants.
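Schematically, the bounds referred to here have the following shape — this is my transcription of the type of result, with constants omitted; the precise arguments depend on the equation and the setting and should be taken from the papers:

```latex
% Typical superlinear bound for overlapping Schwarz waveform relaxation applied to
% the heat equation on a time window (0,T), overlap \beta-\alpha (schematic form;
% constants and exact arguments omitted):
\[
  \max_{0\le t\le T}\,\bigl\|e^{k}(\cdot,t)\bigr\|
  \;\lesssim\;
  \operatorname{erfc}\!\Bigl(\frac{k\,(\beta-\alpha)}{2\sqrt{T}}\Bigr)\,
  \max_{0\le t\le T}\,\bigl\|e^{0}(\cdot,t)\bigr\|,
\]
% which by Stirling's formula decays faster than 1/k!.  For the Dirichlet--Neumann
% and Neumann--Neumann variants, the subdomain length appears in place of the overlap.
```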
This morning you also saw that one can build optimized Schwarz waveform relaxation methods, so you can put an operator into the transmission condition: as Frédéric explained, you can use a Robin condition, a second-order condition, and here also time derivatives. We have studied these methods over the last 15 years; there are many, many results, and many people have worked on this. These algorithms were also reinvented in different contexts: for example, Björn Engquist invented the sweeping preconditioners, which is an algorithm of this type, and Zhiming Chen invented the source transfer method two years ago — again an algorithm of this type, just with a very good operator in the transmission condition. I do not want to explain more details, but I want to show you a movie of the difference between a classical Schwarz waveform relaxation and an optimized Schwarz waveform relaxation method. In this graph you see the spatial domain, with 1, 2, 3, 4, 5, 6, 7, 8 subdomains, the overlaps between them, and the solution I try to compute. Because it is a time-dependent problem, it is a movie: an advection–reaction–diffusion equation. This is the initial condition; the solution will be advected from left to right, smoothed by diffusion, and it will form a boundary layer here. I compute the exact solution, and underneath, as a dashed line, what the first iteration of a Schwarz waveform relaxation algorithm gives if I start with a zero initial guess; then the second iteration, the third, and you can watch how the method converges. The red solution is the one I want; the blue one is the first Schwarz waveform relaxation iterate. You see it is very bad — I basically miss most of the signal — because I imposed a zero condition at the interfaces, and in the classical variant I do not know anything better. But in the next iteration, look at the beginning of the time window: the solution is actually quite good; it is only once the time interval gets long that the solution goes bad. That is the typical waveform relaxation behavior: on short time intervals convergence is good, on long time intervals it is bad. The method will eventually work, but you need many iterations. Now I show you exactly the same computation with an optimized transmission condition. The rest of the code is the same; I just use a Taylor transmission condition. And if you look at the first iteration, it is miraculously good over the whole time interval, because it uses an absorbing condition: absorbing means the signal can leave the subdomain, and the next subdomain just takes it. That is the difference between a classical and an optimized Schwarz waveform relaxation method — very important.
Now, before stopping this lecture, I just want to explain a combination one can make between domain decomposition and the parareal method we saw in the first lecture. I now have space here and time here, and I decompose space–time, so I have many, many subdomains in space–time, and I solve all of them in parallel. When I want to solve on one of these subdomains, I need an initial condition and I need boundary conditions here; then I can solve. So what would such a parareal Schwarz waveform relaxation method do? It takes the initial condition from a parareal-type iteration in time, and the boundary condition from the neighboring subdomain, like a classical or optimized Schwarz method. So this is a method that iterates in space–time.
Something like this was already proposed by Maday and Turinici, but they iterated in space to convergence before doing each parareal iteration, whereas here I simply exchange with all neighbors and iterate directly in space–time. This is the formulation; I will not go through it. Here is just a cartoon of how the algorithm runs. Time goes forward here, space is here; I start with some initial profile, which is diffused over time, and I run the algorithm to show you how it behaves. In the left graph you see the approximate solution, computed with 10 time intervals and 6 spatial subdomains, so I have 60 processors running on this. Each processor computes on its space–time subdomain whatever it can compute, given an initial condition from the parareal-type update and boundary conditions from its neighbors. And here is the error of the first iteration. In the first iteration not much happens: these subdomains have something to do, but out here there is nothing interesting to compute yet. As the iterations progress, the error goes down; the diffusive solution advances, but it is not a very fast algorithm. In particular, you see that the parareal contribution in time is more efficient than the waveform relaxation contribution in space — but here I used a classical Dirichlet condition. Eventually the method converges. If instead I use an optimized condition at the interfaces in space, the first iteration already propagates correctly over the whole time interval, because it is now an absorbing condition in space, and the iteration converges much, much faster. This is a space–time solver with a reasonable performance. So here is the comparison: this is with optimized conditions, and this with classical conditions, for the parareal Schwarz waveform relaxation method. And here is a summary of what I told you in the second lecture. I think we can have a break of about 20 minutes and start again at 4 — and now there should actually already be coffee, I think.
Question from the audience: are your results for the Dirichlet–Neumann method only in 1D, or do you have results in two dimensions? — Yes, in the thesis there are also results in higher dimensions. — And the last results you showed, on Neumann–Neumann and Dirichlet–Neumann from 2014? — They are all in higher dimensions, but only for strip decompositions, because otherwise we cannot evaluate the kernel estimates; they are too hard if you have cross points — we just do not know how to do it. — Another question: in the parareal Schwarz waveform relaxation you have a coarse solver and a fine solver; do you also use optimized Schwarz for the coarse solver? — The coarse solver is not decomposed in this case: the coarse is like the parareal coarse propagator, but it exchanges in space like the waveform relaxation iteration. In the classical variant I use Dirichlet, and in the optimized one I also use the optimized coarse. — Thank you very much; you have the coffee break now.
So welcome back to the third lecture on these time-parallel methods. The third track is space–time multigrid methods; on this same transparency, these are the blue methods.
So you can see these are all iterative methods, starting with Hackbusch, then Lubich and Ostermann, Horton and Vandewalle, Emmett and Minion, and then also contributions from me and collaborators. These are methods which are iterative in nature and which solve the entire space–time problem at once by iteration. The first method in this class was proposed by Hackbusch. Hackbusch has not gotten so much credit so far in this CEMRACS school, but he was a pioneer of multigrid methods, and very different from Achi Brandt: Brandt had many, many good ideas and somehow knew how to make multigrid algorithms work, whereas Hackbusch knew how to prove that multigrid algorithms work. They were very complementary, and there were interesting, endless discussions about who had the right approach. So I think Hackbusch should get a lot of credit for what he did for multigrid methods, and he is the first to propose such a multigrid method for a space–time problem; he called it the parabolic multigrid method.
Let us take the model problem he used, a parabolic PDE: u_t + L_h u = f, where L_h is a discretized Laplacian and u_t a time derivative, discretized with backward Euler in time. Hackbusch uses a slightly strange notation — he keeps continuous time variables — but you can see what it means: this is really a backward Euler step, u(t) minus u(t − Δt) divided by Δt, plus the discretized Laplacian applied to u(t), so it is implicit, equals f. And then he writes: the conventional approach to solve (*) is time step by time step — u(t) is computed from u(t − Δt), then u(t + Δt) from u(t), and so on, exactly as I showed you at the beginning, one step after the other. The following process will be different: assume that u(t) is already computed, or given as an initial state; simultaneously, we shall solve for u(t + Δt), u(t + 2Δt), up to u(t + kΔt) in one step of the algorithm. So you see, it is the whole space–time problem which is being solved by iteration.
So how does this parabolic multigrid method work? I define the matrix A to be the term that comes from the time step plus the discrete Laplacian. At each step one has to solve an equation A u(t) = the term from the previous time step plus the right-hand side. For this matrix problem I can consider a Gauss–Seidel iteration, or Gauss–Seidel smoother: I partition A into a lower triangular part, a diagonal, and an upper triangular part, and a Gauss–Seidel step solves with the lower triangular plus diagonal part on the left, the upper triangular part applied to the old iterate and the given term on the right. This is just the Gauss–Seidel iteration you would run at each time step to solve this linear system. Now Hackbusch says: let us use such a process as a smoother, as follows. There is an outer loop over the time steps, from t up to t + kΔt, but instead of solving each time step, we just do ν Gauss–Seidel iterations. So this process does not solve time step by time step; it solves approximately time step by time step — for example two Gauss–Seidel sweeps on this step, then two on the next, two on the next, and so on.
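Here is a minimal Python sketch of my reading of this smoother — not Hackbusch's formulation; the Gauss–Seidel routine, the example matrices, and the parameter values are assumptions made only for the illustration.

```python
# Parabolic smoother sketch: march over a window of backward Euler steps,
# A u_n = u_{n-1}/dt + f_n with A = I/dt + L_h, and on each step do only
# nu Gauss-Seidel sweeps instead of solving exactly.
import numpy as np

def gauss_seidel_sweep(A, u, b):
    for i in range(len(u)):
        u[i] = (b[i] - A[i, :i] @ u[:i] - A[i, i+1:] @ u[i+1:]) / A[i, i]
    return u

def parabolic_smoother(Lh, u0, U, rhs, dt, nu=2):
    """u0: value before the window; U[n]: current approximation at step n; rhs[n]: f at step n."""
    A = np.eye(len(u0)) / dt + Lh           # backward Euler matrix I/dt + L_h
    prev = u0
    for n in range(len(U)):                 # sequential sweep over the time window
        b = prev / dt + rhs[n]
        for _ in range(nu):                 # only smooth, do not solve
            U[n] = gauss_seidel_sweep(A, U[n], b)
        prev = U[n]
    return U

# tiny usage: 1D Laplacian on 4 interior points (dx = 0.2), a window of 3 time steps
n, dt = 4, 0.1
Lh = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / 0.2**2
U = parabolic_smoother(Lh, np.ones(n), [np.zeros(n) for _ in range(3)], [np.zeros(n)]*3, dt)
```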
So this is not solving the problem; it is approximately solving it, and it does this over many time steps. This is what Hackbusch uses as a smoother for his multigrid method: a sequential Gauss–Seidel sweep over several time steps. And he obtains the following results. If you use this as a smoother in a multigrid method and you coarsen only in space, you get the typical multigrid performance for the Laplacian: a good contraction rate, independent of the mesh size, the accuracy increasing by a factor of 10 at each iteration. So with this type of refinement — no coarsening in time, coarsening only in space — the method is good. If you also coarsen in time, it does not work: sometimes you can get it to converge, but sometimes it even diverges. So this is the first multigrid method in space–time for parabolic problems, but it only works if you coarsen in space; in time there is nothing you can do — it is just not working.
Then there is a second important contribution, by Lubich and Ostermann: multigrid dynamic iteration for parabolic problems. They write: "We study a method which is obtained when a multigrid method in space is first applied directly to a parabolic initial boundary value problem, and discretization in time is done only afterwards." So they do not even discretize in time; they stay continuous in time. How can one explain this algorithm? I thought of explaining it as follows. We have the same parabolic problem Hackbusch had: a discretized Laplacian in space, a continuous time derivative. To explain the algorithm I do a Laplace transform — that is not what you do to run the algorithm, but it explains what it is. So I Laplace transform in time. We saw Fourier transforms this morning: they turn derivatives into multiplications by the Fourier variable, and it is the same with the Laplace transform: the time derivative becomes a multiplication by s, the Laplace variable. Then I can write the problem as a linear system for each value of s — an infinite family of linear problems, parametrized by s. For such a linear problem, with s fixed, the multigrid algorithm is well defined, so I just apply multigrid: I split the matrix into L + D + U to get a smoother, where D, importantly, contains the s. I start with some initial guess and iterate this multigrid method: I smooth with Gauss–Seidel for fixed s — this is just a Gauss–Seidel smoother; I compute the residual, as in every multigrid method; I restrict the residual to a coarse grid; I solve on the coarse grid; I prolongate back to the fine grid and add the correction; and maybe I do a few more smoothing steps with Gauss–Seidel. This is a classical multigrid method, applied to this problem for fixed s — no thinking involved.
Now let us reinterpret what this means if we transform back from Laplace space, because that gives an actual algorithm. The Gauss–Seidel smoother I used has the s, the L, and the D on the left, and the U on the right; under the back-transform, the multiplication by s becomes a time derivative again. So if you look at this iteration, you see it is a waveform relaxation iteration — written out in the time domain below.
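Written back in the time domain, the smoother and coarse correction just described take the following schematic form; the notation (L, D, U for the splitting of the discrete Laplacian L_h) follows the lecture, the formula itself is my transcription.

```latex
% Back-transformed Gauss-Seidel smoother of Lubich and Ostermann (schematic):
% with L_h = L + D + U, one sweep maps the waveform u^{j-1} to u^{j} by solving
\[
  \partial_t u^{j} + (L + D)\,u^{j} \;=\; -\,U\,u^{j-1} + f,
  \qquad u^{j}(0) = u_0,
\]
% i.e. a Gauss-Seidel waveform relaxation sweep; the coarse-grid correction,
% transformed back, is a parabolic problem on the coarse spatial grid,
% still continuous in time.
```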
It takes the previous approximate iterate, the one at index j−1, and computes the solution of an ODE system with a lower triangular matrix on the left. So this smoother, which they defined in Laplace space, is in the time domain a waveform relaxation iteration: the method uses waveform relaxation as its smoother. And the coarse grid correction, if I do the back-transform from s to t, is just a coarse parabolic problem: continuous in time, on a coarse spatial grid. So the method they invented is a multigrid waveform relaxation method — waveform relaxation as the smoother, and otherwise standard multigrid components. In this method you cannot coarsen in time, because time is continuous; there is no concept of coarsening in time. They get very good results: very good smoothing for Gauss–Seidel, even better smoothing for Jacobi waveform relaxation, and you get similar results if you discretize the algorithm in time, provided you do not coarsen in time. Never coarsen in time, otherwise the method does not work. There is a very nice example over several time windows, with a moving front and locally adaptive time stepping, solved with this multigrid waveform relaxation algorithm. But again, the same problem: this is not really a full space–time multigrid method, because you cannot coarsen in time — here it is not even defined, and in Hackbusch's method it does not work. So the basic multigrid algorithm, as it stands for the Laplacian, will not work in space–time.
Then there was a substantial contribution by Horton and Vandewalle in 1995. They write: "In standard time-stepping techniques, multigrid can be used as an iterative solver for the elliptic equations arising at each discrete time step. By contrast, the method presented in this paper treats the whole of the space–time problem simultaneously." So they also want to use multigrid in space–time, and they really want to coarsen both in space and in time — they want a method that works like multigrid for the Laplace equation. So they write down the whole space–time problem: this block is the first time step, this the second, the third, up to the n-th time step; each of these blocks contains a Laplacian to invert. So this is a huge matrix, with all the unknowns in space and time. There is an interesting remark I should have quoted here, in the book by Gil Strang on finite element methods: it says that treating such evolution problems as one problem in space–time would simply not be tractable — it is too big. But in the meantime we have such big computers that you can envisage storing such a space–time matrix; you just want to solve the whole thing.
So what do they do? They first study numerically why time coarsening is not working, and there is a very interesting graph. They brute-force apply a V-cycle to this space–time problem: brute force, with a red–black Gauss–Seidel smoother, standard coarsening by a factor 2 both in space and in time — even though they know it is going to blow up — and they measure how much the algorithm manages to contract. And there is a very important parameter, this λ: λ is the ratio of the time step to the square of the mesh size, Δt/Δx². That is the parameter that governs this algorithm.
They plot, against the logarithm of this parameter — so zero here means λ = 1 — how much this V-cycle with six levels contracts. And you see: if Δt/Δx² on the fine grid is about one, the method converges; the contraction factor is about 0.4. That is not as good as in space — for Laplace's equation you want 0.1 — but it is working. A little above it works as well, a little below it works as well. But now look what happens if you coarsen with the same factor in both directions: if you coarsen Δx by a factor of two, the denominator changes by a factor of four, because of the square, so λ moves in one direction or the other. So as soon as you start coarsening uniformly, even if you were in the good range, at some point in your multigrid hierarchy you are going to arrive out here or out there. And there the algorithm is not working at all: on this side it still contracts a bit, but with a factor of about 0.95 — painfully slow, slower than the smoother alone — and on the other side it actually diverges. That is what Hackbusch had said as well. The ρ plotted here is just a two-norm measurement of the contraction of the method. So if the parameter λ on the fine grid is close to one, that level works, but if somewhere in the hierarchy λ drifts away, the method breaks down — and even in the good range it is not as good as spatial multigrid.
So what can one do? There are two key new ideas in this paper of Horton and Vandewalle. The first is adaptive semicoarsening in space and time: you try to coarsen by a factor of two, but as soon as your λ would leave the range where the method works, you stop coarsening in time — or, if you are at the other end, you stop coarsening in space. You navigate so that λ always stays in the middle, where you know you have contraction: an adaptive coarsening, depending on which coarsening you may do and which you should not do. And then there is a second very important ingredient, the time prolongation and restriction operators: they define them in a causal way, always taking data from earlier in time, and always propagating data forward in time. And here you see, from their two-grid analysis, the predicted performance as a function of λ if you do full coarsening in space–time — that is the dotted line, and the analysis gives a curve very similar to what they measured with the six-level V-cycle: it works around here, and it does not work out there on either side. Then you see two more lines. The solid line is space coarsening: in this region you can do space coarsening in the two-grid method and still converge extremely fast; but the more you coarsen in space, the more you move along this curve, and here you should stop coarsening in space, because it no longer works. With time coarsening it is the other way around: here time coarsening gives very, very good performance, but if you do more time coarsening over there, it stops working. So with this Fourier-type analysis you can navigate the algorithm exactly so that it stays, on every level, in a range where it works in space–time — and that is what they do.
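As a toy illustration of this adaptive semicoarsening idea — my own reading of the strategy described above; the threshold value is a placeholder, not the one from the paper — one can decide level by level which direction to coarsen so that λ stays in the good range:

```python
# Toy adaptive semicoarsening: keep lambda = dt / dx^2 near the regime where the
# two-grid analysis predicts good contraction.  Space coarsening divides lambda by 4,
# time coarsening multiplies it by 2, so we alternate as needed.
def choose_coarsening(dt, dx, lam_crit=1.0):
    """Return (dt, dx) of the next coarser level (lam_crit is a made-up threshold)."""
    lam = dt / dx**2
    if lam > lam_crit:
        return dt, 2.0 * dx          # semicoarsen in space: lambda drops by a factor 4
    else:
        return 2.0 * dt, dx          # semicoarsen in time: lambda grows by a factor 2

dt, dx = 0.01, 0.1                   # lambda = 1 on the fine grid (made-up values)
for level in range(4):
    dt, dx = choose_coarsening(dt, dx)
    print(level, dt, dx, dt / dx**2)
```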
And here is the performance they get. On the left is the contraction of a V-cycle, with three curves that are hard to distinguish, for different resolutions: the finest, a less fine one, an even coarser one. You can see the V-cycle is not completely robust — the finer you make the grid, the slower it gets — but it is still a very good algorithm, contracting with a factor of about 0.2, as a space–time multigrid method. If you use an F-cycle, a full multigrid cycle that starts on the coarsest level and just works its way up, it seems to be robust. They show numerical results for 1D, 2D, and 3D heat equations, and this is the first fully functioning space–time multigrid algorithm; it is a very good, very competitive method.
Then there was a different school, the school of Minion. We had proved with Stefan Vandewalle that the parareal algorithm is also a space–time multigrid algorithm, but with very aggressive coarsening. And Minion worked on a variant of parareal which is based on spectral deferred correction methods. Now, who knows what a spectral deferred correction method is? It might be better known than some of the methods we have seen, but it is still not in everybody's bag, so I will first explain what it is. It is an interesting method, good to know, and not complicated to understand.
So what is a deferred correction method? It is also called difference correction or defect correction — I really tried to find out whether there is a difference between these, and there is none; they are all used interchangeably for the same type of method. Suppose you want to solve the ODE u' = f(u) with initial condition u0, and you just use Euler as a first approximation: I compute values ũ_m using forward Euler steps. Then I interpolate the solution between these discrete points, so now I have a continuous approximate trajectory. The error, which is also a function of time, is the difference between the exact solution and my interpolated Euler solution. Now let us see what kind of equation this error satisfies. I take a time derivative of the error, which gives me the time derivative of the exact solution minus the time derivative of my interpolated solution. For the exact solution I know this is f(u), because the ODE is satisfied by the exact solution; for the interpolated solution there is not much I can do — it is just the derivative of my interpolant. And then, because I do not know u, I replace it using u = e + ũ. Now you see I have an equation for the error: e' = f(e + ũ) − ũ', where ũ and its derivative are known, and the initial condition is e(0) = 0, because initially I have no error — I know the initial condition. And now the crazy idea is: you just solve this again with forward Euler, and you get an approximation of the error. What can you possibly gain by this? There is an amazing result, going back to Skeel — Leslie Fox had related results earlier, Victor Pereyra had a result, and there is also the Austrian school that had results: the new approximation, if you solve the error equation again with Euler and add it as a correction, is second order. And if you do it again, it is third order. Now remember the parareal algorithm: there, too, every iteration gives you one more order of accuracy — very similar behavior. That is called difference, or defect, or deferred correction.
You only pay Euler steps, but you did it several times and you get order two, order three, order four, order five. Now, the way I presented it here, it's not really working, because you have to do interpolation, and then you have to do derivatives of this interpolation. Interpolation polynomials tend to oscillate towards the boundary, and you take derivatives of these oscillations. So this is not a practical algorithm. But Dutt, Greengard and Rokhlin in 2000 found a way to rewrite this method to make it into a usable algorithm. And that's called spectral integral deferred correction. The idea is to get rid of the differentiations by writing the ODE as an integral, like in the Picard iteration. So instead of looking at the ODE, we look now at this integral formulation. And assume I have again a forward Euler approximation that I interpolated. Then I can put this u tilde into the integral form and calculate what the residual is. Residual means I put this u tilde into this equation and look at what the mismatch is. The mismatch is the right-hand side evaluated at u tilde, minus u tilde from the left. That's the residual. Now the error is again the difference between the exact solution and my interpolated Euler solution. I assume that I start with the correct initial value, so this u0 there is also the u tilde 0. And then I can write on the left u(t) — u(t) is exactly this quantity — and on the right I just put the u tilde, and again the u I replace by u tilde plus e. Now I have an integral equation for e, for the error. This is an integral equation for e; I can solve for e; I have to evaluate this integral equation. And in this form you recognize that the residual almost appears. You see, the residual had a u0 tilde, which is here. The residual had a minus u tilde, which appears here. But in the middle it had an f of u tilde, and here we have an f of u tilde plus e. So not quite. So I have to add and subtract an f of u tilde integrated. Then I get the residual, and I have to subtract it again. So the error satisfies an integral equation with the residual and the error again under the integral. Now here is just a copy of this formula. If I take a derivative of this formula now, I get a differential equation for e. And if I solve this equation with forward Euler, then I need to approximate this r tilde. But this I can calculate from the residual with an accurate quadrature rule. And so here is the advancement of the error with the r from the quadrature rule. And it turns out that this procedure is numerically stable. This also gives deferred correction, but in a numerically sound way. And you have the same result here: if you start with Euler, order one, one iteration gives order two, another iteration gives order three, another one order four. So that's spectral integral deferred correction. So how do you use this in practice? You use this if you don't want to implement a Runge-Kutta method. You just don't want to do a Runge-Kutta method of order four. Then you use your Euler method. You get as an initial guess the u0, which are the Euler steps. And then you do on this vector — so that's a vector over a whole time window with many Euler steps — one iteration of spectral integral deferred correction; I write this as F here. Then you get a new approximation. You do it again, you do it again. And each time you do this, the order increases. And this only stops when you reach the accuracy of the quadrature rule you used for the integral.
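To make these sweeps concrete, here is a minimal Python sketch of the integral deferred correction idea just described, for a scalar ODE u' = f(t, u): a provisional forward Euler solution on a few nodes, a high-order quadrature of f over the previous iterate, and a correction sweep that only ever uses Euler-type updates. The node placement, node count and sweep count are my own illustrative choices, not the setup of Dutt, Greengard and Rokhlin:

```python
import numpy as np

def lagrange_integration_matrix(nodes):
    # S[m, j] = integral of the j-th Lagrange basis polynomial over [nodes[m], nodes[m+1]]
    M = len(nodes)
    S = np.zeros((M - 1, M))
    for j in range(M):
        others = np.delete(nodes, j)
        ell = np.poly1d(np.poly(others)) / np.prod(nodes[j] - others)
        P = ell.integ()
        for m in range(M - 1):
            S[m, j] = P(nodes[m + 1]) - P(nodes[m])
    return S

def sdc(f, t0, t1, u0, M=5, K=4):
    # provisional forward Euler solution on M nodes, then K correction sweeps
    tau = np.linspace(t0, t1, M)
    S = lagrange_integration_matrix(tau)
    u = np.zeros(M)
    u[0] = u0
    for m in range(M - 1):                                   # forward Euler predictor
        u[m + 1] = u[m] + (tau[m + 1] - tau[m]) * f(tau[m], u[m])
    for _ in range(K):                                       # deferred correction sweeps
        F = np.array([f(tau[m], u[m]) for m in range(M)])    # f on the previous iterate
        unew = np.zeros(M)
        unew[0] = u0
        for m in range(M - 1):
            dtm = tau[m + 1] - tau[m]
            # Euler-type update for the correction plus an accurate quadrature of f(u^k)
            unew[m + 1] = unew[m] + dtm * (f(tau[m], unew[m]) - F[m]) + S[m] @ F
        u = unew
    return tau, u

# quick check on u' = -u, u(0) = 1: each extra sweep shrinks the error,
# until the accuracy of the five-node quadrature is reached
for K in range(5):
    _, u = sdc(lambda t, u: -u, 0.0, 0.5, 1.0, M=5, K=K)
    print(K, abs(u[-1] - np.exp(-0.5)))
```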
If the integral had a quadrature rule of order eight, you cannot go beyond eight. After eight iterations, if you do an additional one, it's not going to get better. So how is this used if you want to do time stepping instead of a Runge-Kutta method? You just partition the time interval into time windows. And on each time window you use a certain number of Euler steps, and then you iterate this spectral deferred correction method as many times as you want, until you reach the order you want. And then this time window is finished. Then you go to the next time window, you do Euler again, you iterate the spectral deferred correction until you have the order you want. So it's a process that works like this. Here you have time. You draw time slices, Ti, Ti plus one. You do an Euler integration over several steps over this first time slice. Then you do spectral deferred correction iterations until you have order four. And then you start with the value you have here, you do again Euler steps, then you do the spectral deferred correction until you have the order. Then you do again Euler, then you do spectral deferred correction, you do again Euler. You see, this is a time-marching method like any other, except that instead of having a Runge-Kutta time step, you have Euler plus spectral deferred correction to improve it. It's a time-stepping method. Now, what was the observation of Minion? Now you can try to see what changes in this algorithm. Something is going to change, something very small. Did anybody see what changed? Can you see? It's this index here. So what did Minion do? He said, suppose I have my first Euler step here. Then I have an approximation of the solution here. This is not a good approximation, it's first order. But who cares? Let's just continue here with Euler, at the same time as I do the first spectral deferred correction here. So the initial guess is now wrong for the second interval — it's not the converged one, because I should have first converged to fourth order here. But who cares? I just do one. And then in the next step, I do an Euler here, I take the value I get here from the first spectral deferred correction and do one, and here I do the second. So you see, this is sort of a pipelining thing. He has no proof that this actually still gives the full accuracy; I tried to prove it, and I think it can be done, but I didn't do it very cleanly. So even if you don't wait for this guy to finish, this guy here still converges to the full accuracy, because eventually it has the good initial guess; it's just at the beginning that the initial value is not quite the correct one. And then he said, now let's use this method as the fine propagator in parareal. You remember the parareal algorithm had a fine propagator F, which I said is allowed to be expensive because you can do it in parallel. Now he says, forget about expensive, I don't want an expensive one. So in the first parareal iteration, I only use an Euler, a very bad approximation. But in the next iteration, I'm going to get a second-order approximation from that first one. And even though the jumps that will be corrected by parareal make this even more incorrect — and there, there is no proof — it still sort of works. That was the first idea of Minion in 2010. But then they said, just forget about this. Just use this procedure as a smoother and build a full approximation multigrid method for a nonlinear evolution problem. And that's an algorithm they call PFASST. Now this name has a really funny story. You can see PFASST, P-F-A-S-S-T.
This is the Parallel Full Approximation Scheme in Space and Time. Now, you know, in the US this would be pronounced "fast". And "fast" is a very nice name for a method, because it means it's fast. So when he gave his talk at the workshop in Lugano, I told him, no, no, no, this is called "pfasst" in Germany, because we pronounce the P. And then at first he was annoyed. And now he pronounces it everywhere like "pfasst". So now it's known as "pfasst", not as "fast". It's a good method. It has been used successfully for many applications. It does the coarse correction consistently, in the full approximation scheme — so it does it really like you should do it in multigrid. There is no convergence analysis; nobody knows why this works, because we cannot even prove that it gives the right correction in the parareal setting. But it's been applied to large-scale problems. It's working.

And I would like to explain the last method. That's a contribution coming from me and Martin Neumüller. I went to Graz a few years ago and gave a talk there. And Martin Neumüller was working on space-time methods. He had sort of a space-time formulation, but no good solver. And then we looked at how one could do a multigrid solver from scratch that works well on such a system, without having to have special prolongations and restrictions. And what we did is the following. We take the full space-time problem here. And the main new idea is we don't use a Jacobi or a Gauss-Seidel smoother; we use a block Jacobi smoother, which means we invert these blocks that contain Laplacians in each smoothing step. And then we do space-time coarsening. And we can prove, for a heat equation, that in the Jacobi smoother which uses blocks, the relaxation parameter one half is optimal. This is not like in the spatial multigrid method, where it is two thirds; here you need to use one half. We can prove that you always get good smoothing in time, so you can always coarsen in time: semi-coarsening in the time direction is always possible. We also have a criterion that says if delta t over delta x squared is bigger than a constant C, then we also get smoothing in space, so we can do full space-time coarsening. And what we do is we don't invert these diagonal blocks exactly; we just apply again a V-cycle, but now in space. So this is a full space-time multigrid method. We have a complete analysis; the paper just appeared. And this algorithm scales amazingly well on large-scale architectures. It's by far the best parallel solver I know for the heat equation in space-time at the moment. So here are experiments that Martin Neumüller did. These were run on the Vienna cluster; later he did it in Lugano, and also on the large machines in the US. Here are two scaling experiments. You heard what weak and strong scaling is this morning. Weak scaling here is: I increase the number of cores and I increase the size of the problem. The iteration number of this space-time multigrid method is constant and the solution time is constant. You can scale as much as you want. Here there are a hundred and twenty-two million unknowns, and the latest results are in the billions. And it also has very good strong scaling. Frédéric explained that strong scaling will at some point stop, because you don't have anything left to solve. But you can see, for this size of a problem, if we increase the number of cores like this, iteration numbers are constant and solution times are halved, halved, halved. Even here they're still halved. This is by far the best I've ever seen.
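Since the smoother is the heart of this space-time multigrid method, here is a minimal Python sketch of the damped block Jacobi step on the all-at-once backward Euler system for a 1D heat equation, with the relaxation parameter one half mentioned above. The grid sizes, boundary conditions, and the exact block solves (which in the real method are replaced by an inner spatial V-cycle, combined with space-time coarsening) are my own illustrative choices, not the Gander–Neumüller implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D heat equation u_t = u_xx, backward Euler, written as one all-at-once space-time system
nx, nt, dt = 63, 64, 1e-3
dx = 1.0 / (nx + 1)
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2
M = (sp.identity(nx) - dt * L).tocsc()          # one backward Euler step: M u_{n+1} = u_n
E = sp.diags([np.ones(nt - 1)], [-1])           # couples time step n to n+1
B = (sp.kron(sp.identity(nt), M) - sp.kron(E, sp.identity(nx))).tocsr()

rhs = np.zeros(nx * nt)
rhs[:nx] = np.sin(np.pi * np.linspace(dx, 1.0 - dx, nx))   # initial condition feeds the first block

# damped block Jacobi smoother with omega = 1/2; the diagonal blocks are all equal to M,
# so "inverting the blocks" means one spatial solve per time step, all independent of each other
Msolve = spla.splu(M)
def block_jacobi_smooth(u, sweeps=3, omega=0.5):
    for _ in range(sweeps):
        r = (rhs - B @ u).reshape(nt, nx)
        u = u + omega * np.concatenate([Msolve.solve(r[n]) for n in range(nt)])
    return u

u = np.zeros(nx * nt)
print("initial residual:", np.linalg.norm(rhs - B @ u))
u = block_jacobi_smooth(u)
print("after 3 damped block Jacobi sweeps:", np.linalg.norm(rhs - B @ u))
```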
This is as good as multigrid gets for a Laplace problem on a large-scale architecture. It's really an outstanding algorithm. Very, very good. So this is a summary of the space-time multigrid methods you've seen. When do I stop, at five? Yeah, we can have like a 15-minute break now, I think. It's worthwhile. Thank you.

So I will start the last, fourth lecture, about direct time parallel methods. This is the same graph that we've seen four times now. The direct time parallel methods are more like small-scale methods; there are some that are getting to large scale as well. These are the black methods on this transparency. You see there are many. And these are maybe the most improbable methods. Because in all the methods we've seen so far, you've seen that, as a trick, we used iterations to get the solution over the whole space-time domain. Now, these methods do not iterate. They use many processors and they somehow manage to do this in parallel. The first method is by Miranker and Liniger, almost as old as Nievergelt, from 1967. And here is the idea of this method. They also first describe that there is no parallelism in time integration, and they say the computation front is just too narrow. You can't compute like this. And so there is no way to do parallel computing, because you're just advancing like this. And then they say, let us consider how we might widen the computation front. So they somehow want to change this front into a front where suddenly several things can be done at once. And here is the concrete example from their paper. They take y' = f(x, y), and they consider predictor-corrector formulas. Predictor-corrector formulas are formulas that have two steps. So here is an example. You first predict the new value y p at step n plus 1 from the corrected value at step n and some combination of earlier corrected values. So you see, you need the corrected values at steps n and n minus 1. Here is a cartoon of this step. If you know y at n minus 1 and y at n, in this predictor step you calculate the value up there. You see there are two arrows: if you take these two values, you can predict the step up there. Now, once you know the predicted step, there is a second formula, which is called the corrector. It uses the corrected value at step n, the predicted value at step n plus 1, and again the corrected value at step n. That's the second set of two arrows: you use the predicted one from above and the corrected one to get n plus 1. Now you see this is completely sequential. You first have to do this predictor step, because this value is needed to do the corrector step. Now they say, maybe there are methods where this is not so. So here is another predictor-corrector method. And here, what you use is the corrected value from step n minus 1 and the predicted value from step n, to get the predicted value at step n plus 1. So in the cartoon you can see, here you use the corrected value from n minus 1 and the predicted value to do the new prediction. And for the correction you use the corrected value at step n minus 1, the predicted value from step n, and again the corrected one from step n minus 1. So if you look at the arrows here, you can do the prediction at the same time as you do the correction, because the prediction you do one step ahead. So suddenly, in this method, these two steps can be done in parallel. You can use two cores. So you see, this is a direct method; there is no iteration. You just advance with two cores and you get the same order as with the sequential method.
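To see how such a widened computation front looks in practice, here is a small Python sketch of a predictor-corrector pair in this spirit, where the new prediction only needs the previous prediction (not the correction at the same step), so that the predictor and the corrector could run on two cores. The concrete formulas below — a leapfrog-type predictor and a trapezoidal corrector — are my own stand-in for the schemes in the Miranker-Liniger paper, not a transcription of them:

```python
import numpy as np

def parallel_pc(f, y0, t0, t1, n):
    # widened-front predictor-corrector: at each sweep the two updates are independent
    h = (t1 - t0) / n
    t = t0 + h * np.arange(n + 1)
    yp = np.zeros(n + 1)                         # predicted values
    yc = np.zeros(n + 1)                         # corrected values
    yp[0] = yc[0] = y0
    yp[1] = y0 + h * f(t[0], y0)                 # start-up Euler step for the predictor
    for k in range(1, n):
        # these two lines are independent of each other and could run on two cores
        yp[k + 1] = yc[k - 1] + 2 * h * f(t[k], yp[k])                          # predictor
        yc[k] = yc[k - 1] + h / 2 * (f(t[k], yp[k]) + f(t[k - 1], yc[k - 1]))   # corrector
    yc[n] = yc[n - 1] + h / 2 * (f(t[n], yp[n]) + f(t[n - 1], yc[n - 1]))       # last correction
    return t, yc

# quick check on y' = -y: the corrected solution is second-order accurate
for n in (20, 40, 80):
    _, yc = parallel_pc(lambda t, y: -y, 1.0, 0.0, 1.0, n)
    print(n, abs(yc[-1] - np.exp(-1.0)))
```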
With the sequential predictor-corrector method you can only advance with one core. And they define in their paper this type of methods where you can use two cores, and there is a complete stability and convergence analysis. They're not quite as good as the sequential methods, but they're parallel. So if you have a four-core or an eight-core machine, it's worthwhile to take such an integration method, and you just integrate through like this, and it runs in parallel. So, first direct method, no iteration. Then there were block implicit methods by Shampine and Watts. Maybe I don't explain those; it's a variant of the previous method, just in a more general formulation.

But in the very well-known book by Hairer, Nørsett and Wanner from 1992 — these are Hairer and Wanner, our famous professors from Geneva University. They are the numerical analysts, and they are the reason why I wanted to go to Geneva. I had seen their book on numerical ODEs, and it's the best I have ever read on numerical ODEs. And so I knew their names, I knew their work, and then I was at the airport in Australia, going to a conference — it's a long time ago. And then I heard Austrian German in the line ahead of me. Somebody was speaking Austrian German and I thought, wow, I could speak German to them, because, you know, so far away from Europe. So I said, hi, where are you going, in German. And then he turns around and says, we're going to a conference in... I said, I'm going to a conference as well. And then he said, my name is Ernst Hairer, and I almost fell over. Because I knew what this guy is capable of. And you might have heard, two years ago his son won the Fields Medal. So this is really mathematics from a different planet. This is just outstanding. So this is a very famous book. It's the best you can read on numerical ordinary differential equations. There is a second volume for stiff problems. There is a third volume for geometric integration. If you need to learn about numerical ODEs, there's nothing else that gets even close to this. So when I started working on these time parallel methods, I also talked to Ernst, and he said, you should look in our book, we also have methods that are time parallel. So I looked. And there are two quotes that are really funny — they have quotes in their books. The first one is: it seems that explicit Runge-Kutta methods are not facilitated much by parallelism at the method level. That's by Arieh Iserles and Syvert Nørsett in 1990. And then there is a glitch that happened to Kevin Burrage, the guy who wrote the summary book: in a talk, he said "paralyzing ODEs" instead of "parallelizing ODEs". Paralyzing means you're paralyzed afterwards, you're not moving anymore.

So what are these methods that are in the book? It's parallel Runge-Kutta methods. I didn't know that those exist. So here is a Runge-Kutta Butcher tableau. You must have seen Butcher tableaus; this is the way you can write down Runge-Kutta methods. You have the a coefficients in this block, which tell you how each stage of the Runge-Kutta method is built. You have the c coefficients in this block, which you only need to know where to evaluate the function when the right-hand side depends on time. And you have the b coefficients here, which tell you how you advance to the next step. So that's a way to describe Runge-Kutta methods. An explicit Runge-Kutta method is lower triangular. And what you do then is you evaluate the first stage, the second stage, the third stage, the fourth stage, and once you have the fourth stage, you take a linear combination. Now, how can you make this parallel?
You can make this parallel if the stage evaluations are independent of one another. And in this Butcher tableau, you see the second stage and the third stage are independent of one another, so you can do those in parallel. So here is a graph that shows you: the first stage you have to do, then the second and the third stage you can do independently, and then the fourth stage again needs all three previous stages. So this is a small parallelism within a Runge-Kutta method that has a certain shape of Butcher tableau. And there's a very nice theorem by Jackson and Nørsett from '86 that says: if you have an explicit Runge-Kutta method, which means you have a lower triangular matrix, and if you have sigma sequential stages — here you would have one, two, three sequential stages, even though there are four stages, because two you can do in parallel — so if you have sigma sequential stages, then there is an order barrier: you cannot get a higher order than sigma. So then the search for p-optimal, parallel-optimal methods started. What are the Runge-Kutta methods that are optimal in that sense, that are exactly at this barrier? And there are families which are known. There are parallel iterated Runge-Kutta methods, and also the GBS extrapolation methods, which are p-optimal. So these are Runge-Kutta methods that sort of run like the method that widens the computation front, but it's a classical Runge-Kutta method you can just run; there's nothing to think about. So these are good methods, but with small-scale parallelism. Again, multi-core, not more.

Then there was a very clever method I saw by Christlieb, Macdonald and Ong. It's based on spectral integral deferred correction, and that's why it was important to explain this in detail in the previous lecture. "We discuss the class of integral defect correction methods, which is easily adapted to create parallel time integrators for multi-core architectures." So you see, again, it's small-scale parallelism — four cores, eight cores, not more. So here is a cartoon of how we've seen the spectral deferred correction work, usually. You take, let's say, a few Euler steps, and then you do spectral deferred correction, exactly like I was drawing it here. Then you go to the next window, you do the spectral deferred correction, you go to the next, and so on. That's the classical way to use this. Now, the new idea is called revisionist integral deferred correction, or RIDC. You might hear about this again later in your career. This is something that will go for multi-core. This is a very elegant thing to do. What they do is they say, forget about this organization of the calculation in blocks. Don't do blocks like this — finish the block, this, finish the block, this, finish the block. Just have a first processor going with Euler. So this guy goes. Now, as soon as there are enough points available to do a spectral deferred correction calculation, just do one. So there is a second processor: as soon as we have four nodes, he just advances, one by one. And at the same time, because here we have four nodes, this one can produce a new one. And here we have four nodes, this one can produce a new one. So you see, it's a pipelined form of this algorithm, but it's really a different way of calculating it: you base the spectral deferred correction on different nodes. So with this, they can calculate on an octa-core, in essentially the same wall clock time, a solution of order eight accuracy at the cost of an Euler propagation. That's really amazing.
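The pipelining can be visualized with a tiny schedule: core 0 marches forward with Euler, and core j applies the j-th correction as soon as enough nodes from level j-1 are available, so after a short start-up every core is busy at each wall-clock tick. The sketch below only prints which (correction level, time node) pair each core works on; it is a schematic of the pipelining idea with parameters of my own choosing, not the actual RIDC numerics:

```python
# schematic RIDC pipeline: p cores = 1 Euler predictor + (p - 1) correction levels,
# each needing `stencil` already-computed nodes on the level below before advancing
p, n_nodes, stencil = 4, 12, 4

progress = [0] * p                    # progress[j] = next time node that level j will compute
for tick in range(1, 60):
    prev = progress.copy()            # what was available at the start of this tick
    work = []
    for j in range(p):
        if progress[j] >= n_nodes:
            continue
        ready = (j == 0) or prev[j - 1] >= min(progress[j] + stencil, n_nodes)
        if ready:
            work.append(f"core {j}: level {j}, node {progress[j]}")
            progress[j] += 1
    if not work:
        break
    print(f"tick {tick:2d}: " + " | ".join(work))
```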
So you have an octa-core, there is a small start-up cost, and once the pipeline is full, you just go. It's a very elegant algorithm, very easy to use; it's a very, very good invention.

Then there is another idea, which is also a direct time-parallel method, which goes back to cyclic reduction. It was an invention by Worley, and it was made to improve waveform relaxation. You remember, waveform relaxation was an algorithm from the electrical engineering community; I've shown you Jacobi and Gauss-Seidel waveform relaxation. And in each iteration, you have to solve a scalar ODE if you do Jacobi or Gauss-Seidel waveform relaxation. Now, this scalar ODE you still solve sequentially, and that's what Worley says here: the waveform relaxation multigrid algorithm is normally implemented in a fashion that is still intrinsically sequential in the time direction. Because at each waveform relaxation iteration, you solve over the whole time window, but you just solve by forward marching. So this idea is really clever. I learned about this because Stefan Vandewalle said this idea was missing in my review paper. So here, suppose we do Euler forward marching. It's a lower bidiagonal matrix. I have four unknowns I want to calculate, and a right-hand side. If I do one step of cyclic reduction, what does that mean? It means I eliminate every other variable: I eliminate x1 and I eliminate x3. So x1 is very easy to eliminate; I can calculate what x1 is, I just put it on the right-hand side. And x3 is also easy to eliminate, provided I eliminate the variable from here with a Schur complement. So you see, going from here to here, the number of unknowns is cut in half, and I still have a bidiagonal matrix. And if I had many more unknowns, I could do this many times: if I have eight unknowns, I get down to four unknowns, still bidiagonal, down to two unknowns, still bidiagonal; I can build the whole tree. That's cyclic reduction. Now, in a serial context, that's not of interest. In a serial context, if I just do forward substitution, this costs 3n operations, and cyclic reduction, by doing this reduction, costs 7n. But if I have many processors, then the tree that forms I can solve using many processors. So the parallel complexity of cyclic reduction is logarithmic in n. So here you go down from n to log n. And if I do this for a system of ODEs in waveform relaxation — so this is a system of ODEs; to do waveform relaxation, I again partition the matrix into lower triangular, diagonal and upper triangular parts, and I do here a Jacobi waveform relaxation, so at each step I have to solve scalar ODEs, and each scalar ODE is exactly such a lower bidiagonal matrix that I have to solve — if one does this with cyclic reduction, then you get a parallel complexity — that's a theorem by Worley — which says that it's log squared of the number of spatial nodes times log to the power gamma of the number of time nodes, where gamma is a half times the rounded-up number of levels you get in the cyclic reduction hierarchy. And this is very much comparable to the spatial Laplace multigrid complexity. There have been important papers by Stefan that followed up on this. This gives a theoretical complexity which is optimal as a parallel complexity for this type of algorithm. It's a clever idea.

Again a different idea, by Sheen, Sloan and Thomée, also non-iterative. If you want to solve this system of ODEs, suppose you really do a Laplace transform — so this only works for linear systems.
If you do the Laplace transform numerically — you just do it — then you get this system to solve. Solving this means you have to invert this matrix, which depends on the Laplace parameter. And then you have to do the inverse transform, which means you have to calculate this contour integral for the u hat of s, which you calculate from this inverse. That doesn't seem very promising. But if you do quadrature on this, it means you only need to know this quantity for a few points s, let's say 10. That means you only need to know 10 u hats for 10 values of s, which means you have to do 10 solves here. These solves are all in parallel; it's completely parallel. And then you get an approximation of your solution in time. So you see, this is a method that produces the whole trajectory by solving the system here a few times, for the quadrature nodes in Laplace space. This seems to be the best method, when it's applicable, for linear systems. There's going to be another one that is competing with this, which is different, but this one has had a lot of success as a method.

Now, this is probably the weirdest method. This is really a bizarre method. It's also a direct method. It was invented by Yvon Maday and Rønquist, also in a note aux Comptes Rendus, like the parareal algorithm. "To break the sequentiality of this resolution, we use the algorithm of the rapid tensor product." The tensor products will only appear very late in my explanation. What's the idea of this method? So I discretize u_t = L u, where L is again the Laplacian, but at the beginning let's just think of a number; L is, for now, just a number. And I do backward Euler. Then I get this lower bidiagonal matrix B, and B times all the time steps equals the right-hand side. So I have to solve the system B u = f. It's lower bidiagonal, so you would forward substitute if you were in your right mind, but here we want to do it in parallel. So suppose you can diagonalize B. Now, before I continue, is there any chance I can diagonalize this matrix? Suppose all the delta t are the same. Then the diagonal elements are all the same, it's a lower bidiagonal matrix — this is a Jordan block. You cannot diagonalize a Jordan block; a Jordan block is the matrix that you cannot diagonalize. It's impossible; all the eigenvalues are the same. There's no way to diagonalize this matrix if all the time steps are the same. That's why here the time steps are not the same. Because if you don't choose the same time step, then it's not quite a Jordan block — it almost tries to be a Jordan block, but it's not. So if the time steps are not the same, then it is possible to diagonalize. If you diagonalize this matrix, then instead of solving with B, you have to solve S T S inverse u = f, where T is diagonal. That has three steps to do. First you have to solve S g = f to get g. Then you have to solve the diagonal system with T. Now, a diagonal matrix is solved in parallel — each element you can do in parallel. Then you have to do the inverse transform from this diagonalization to get your solution. Now, B, as I mentioned, is not diagonalizable if all the time steps are equal. So this method is not even defined if all the time steps are equal. Even if only two time steps are the same, you cannot diagonalize it. So how should one choose the delta t j to get this into a usable method? There are two things. The first thing we try to see is: maybe it's interesting to have different time steps. Maybe you get more accuracy.
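Before coming to that accuracy question, here is a minimal numerical sketch of the diagonalization solve itself for a scalar model problem, using geometrically stretched time steps so that B becomes diagonalizable. The stretching factor, the step count and the test problem are my own illustrative choices, not the ones from the Maday-Rønquist note:

```python
import numpy as np

# scalar model problem u' = a*u + g(t), backward Euler with geometrically stretched steps
a, T, n, eps = -1.0, 1.0, 8, 0.1
g = lambda t: np.cos(4.0 * t)
dts = (1.0 + eps) ** np.arange(n)
dts *= T / dts.sum()                       # n pairwise distinct steps that add up to T
t = np.concatenate(([0.0], np.cumsum(dts)))

# all-at-once backward Euler system B u = f (lower bidiagonal, "almost" a Jordan block)
B = np.diag(1.0 - a * dts) - np.diag(np.ones(n - 1), -1)
u0 = 1.0
f = dts * g(t[1:])
f[0] += u0

# direct solve by diagonalization B = S diag(D) S^{-1}; step 2 is embarrassingly parallel
D, S = np.linalg.eig(B)
gvec = np.linalg.solve(S, f)               # step 1: transform the right-hand side
w = gvec / D                               # step 2: n independent scalar (or spatial) solves
u = np.real(S @ w)                         # step 3: transform back

# compare with plain forward substitution; the discrepancy grows as eps shrinks (round-off)
print(np.max(np.abs(u - np.linalg.solve(B, f))))
```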
And then the first result that came out of this was a calculation at Onera, where we just calculated. So you set up an optimality system, you differentiate, you look, and you find out that, for backward Euler, the truncation error is minimized if all time steps are the same. So the best error you only get if the time steps are the same. So if you take different time steps, you're losing something; it's going to be not as good as having the same time steps. But to use this method, you have to take different time steps. So Yvon proposed to use a geometrically stretched grid: you start with a time step, and the next one is a little bigger, and the next one is a little bigger, so you stretch it a little bit, because then all the time steps will be different, so you can diagonalize. But the result is a bit less good than if you had taken the same time steps. So then we thought to analyze how good a solution you can get with such a method. Because now we have two competing requirements. We want to have the time steps almost the same, because this gives the lowest truncation error; but we want to have them different to make the diagonalization work. And so there are two results. The first result says: if I use a geometric mesh with a stretching epsilon, then the difference between what I get for the stretched mesh and for the mesh that uses equal time steps is a constant times epsilon squared — where epsilon is the stretching — plus smaller terms. And the constant we can calculate for this model problem. So we know the more we stretch, the worse the truncation error gets; so we don't want to stretch too much. Then there is the second requirement: I want to diagonalize the system. Now, if you diagonalize a matrix which is almost a Jordan block, you're going to have numerical problems; there's going to be round-off creeping in. And so we studied how much round-off is coming in in this solution process. And we can show that if we solve this bidiagonal matrix on a computer by diagonalization, for a stretching epsilon, then the error that we get from the solution process is again a constant — you see the machine precision comes in, it depends on the number of steps, it depends on the problem parameter — times 1 over epsilon to the n minus 1. So here you see very clearly the two competing requirements. This one here says epsilon should be big to have a small error; the other one says epsilon should be small to have a small error. So you need to balance the two to get a method, and then you can see if this method is useful. So that's what we do here. The error between the exact solution and the numerically computed solution by diagonalization on a grid stretched by epsilon is bounded by the normal truncation error without stretching, plus the error which is due to the fact that I did the diagonalization, and the round-off error, because I had to solve numerically. So we balance these two errors, and we get a formula for the best epsilon for this method. And we can calculate it; here is the formula. So if you want to solve from 0 to capital T, if you have an ODE that has an A in it, if you use n processors, which means n steps, then the epsilon you should choose is this one. This is going to give you the best possible result. Because this formula is hard to interpret, I plotted the result. So here you have AT — it's homogeneous in AT. Here you have AT, that's the problem parameter. Here is the number of processors you will use.
And so you see, for example, if you have AT equals 50, then if you want to use 15 processors, your epsilon should be 0.25. Do you still get a useful result with this? That's the best you can do with this algorithm, but maybe the solution is complete crap here. So that's why we calculated how big the error is if you make this choice. So you can look — this is a plot of how much is added to the error that you would get without parallelization, without the stretched grid, et cetera. So for this, if you do the best choice, 15, the error would be multiplied by about 7. Now, is that tolerable in your application? I don't know; maybe that's too much. But if you only want to use 5 processors, you only add 1% error, so here maybe you can tolerate it. Or if you want to use 10 processors, your error is doubling in this range, so maybe this is tolerable as well. So even though this method sounds completely crazy, this mathematical analysis shows that for a certain range — the range here — it is useful. Now, why is this parameter A important? It was a scalar ODE that I analyzed. But if I have a Laplacian, then the spectrum of the Laplacian comes in here; then I have many different As. And you can see, with five, maybe eight processors, I can get by with this method, because this levels out here. But not with many processors, not 20, 30 or 40. This is just an illustration of the accuracy of these estimates. Now, where did the tensorization come in? Because the title of Maday and Rønquist had tensorization in it. If I really have a Laplacian — so here is now a Laplacian — and I do backward Euler, then this method cannot be applied immediately, because the time-stepping matrix B also contains Laplacians now. So I have to rewrite the system using Kronecker products. So there is the time-stepping matrix, Kronecker with a spatial identity, and then there is the identity in time, Kronecker with the Laplacian. So this is the time-stepping matrix as before. And now I can do this diagonalization just on the time part in this Kronecker structure, by doing an algorithm with three steps. And here in the middle, you now get as many Laplacian problems to solve as you did time steps. That's why it said tensorization in the title. So this can be used for systems, provided they're linear. Here is an example.

Now I'd like to explain the last algorithm. This is ParaExp. This is also a clever invention of Stefan — Stefan Güttel this time. This invention happened when we tried to use a Parareal method for a wave equation. I haven't mentioned this so far, but most of the algorithms you've seen, except for the Schwarz waveform relaxation methods, do not work for wave equations; they need dissipation. And so there was a variant of Parareal that Farhat proposed, which creates a subspace, a Krylov subspace. Stefan Güttel was a specialist in Krylov subspace methods. As one of the postdocs, he was asked to analyze this method, and he basically showed that the cost of using the Krylov subspace is way too high; it's not a useful method. But then he had a different idea, and here is this idea. This is a method that only works for linear problems. It works as well for heat equations as for wave equations. So here is an example, u' = Au + g. And it's a direct method; it has two steps, two steps to calculate. The first step is that you solve, on a non-overlapping decomposition in time, inhomogeneous problems: you use the source g, but you start with a zero initial condition. This you can all do in parallel.
So you solve, with zero initial conditions and with the right-hand side source, all these red trajectories. No communication; this is just a solve. And then in the second step you solve the blue problems. Now, the blue problems use as initial conditions u0 and the end values that the red problems calculated, and you solve the blue, homogeneous problems over the long remaining time interval. These can also all be done in parallel. And because this is a linear problem, the solution is just the sum: you just sum up, and you get the solution. Now, this immediately seems to have a catch. The blue problems integrate over long time intervals, you see. So how can you gain anything in this algorithm? If one blue processor has to integrate over the whole thing, he could just integrate the original problem. Now, the key idea here is that the blue problems can be solved much, much faster than the red problems. The blue ones are homogeneous problems, and they can be approximated very well as a matrix exponential, in a rational Krylov space or with a Chebyshev approximation. So the blue problems, even though they propagate solutions over very long time intervals, are much, much cheaper than the red problems. So this is an algorithm that allows you to solve wave equations in parallel. It has extremely good efficiency — for a wave equation, it's very hard to get efficiency at all. Here you see a table with many numbers. Here is the serial cost, and here is the parallel cost, for the model problem above. You see, it's a wave equation with wave speed alpha. It has a hat function that oscillates as a source. And we use different wave speeds here, different frequencies for the oscillation. And the sequential cost of just solving to a given accuracy — which is given here, it's 10 to the minus 4 — is given here. That's the sequential cost. And here you see the parallel cost to get to the same error level. And there are two columns. This one here is for the red problems, and it's the maximum time that any of these processors uses to solve a red problem. And here is the time for the blue problems. And you see, even though the blue problem propagates over a much, much longer time, because it's a rational Krylov method it's very, very cheap. This cost adds very little to the main cost. And that's why you get very good efficiencies here. The same thing also works for a heat equation — very similar performance. So this is really independent of the nature of the problem. It also works for advection-diffusion; this is a typical benchmark problem. So, to conclude, here are the direct time parallel methods I discussed. And I'm happy to take questions. Thank you. APPLAUSE
The continuing transition toward open science is fundamentally changing both the ways in which scientists communicate research findings and what they communicate. Open science places emphasis on enhanced access not only to published research findings but also to pre-published literature (preprints), underlying research data, study protocols, and other products of the research process. This presentation will review the transformation in scholarly communication associated with open science and illustrate the ways in which libraries and other information service providers can support it, drawing on examples from the U.S. National Library of Medicine. Particular attention will be devoted to efforts to support open science as part of the ongoing response to the COVID-19 pandemic.
10.5446/57383 (DOI)
Over the years I've been giving a few talks, and each time I thank the organizers. But this time I really mean it: I want to thank the organizers. A few years ago — maybe over 40, in '69 — I passed what some of you may still know, the agrégation, with all the people from '68. And in July I had two options — I mean three options, actually. Going to teach in a high school somewhere in the middle of the country: I was not excited. And then I had an offer from Grenoble and an offer from Luminy. Luminy was just being created, so it was just a no-man's land, and there was half of this huge building of nine floors. And people were pushing me, you know: Grenoble, you know, this is the high point of PDEs, Malgrange is there, you should go, blah blah blah. And at that time I decided that, no, I'm not the kind of guy who is going to leave Marseille; I'm going to stay in Marseille for the rest of my life. Well, this afternoon I'll be happy, 40 years later, to go back to the calanques.

All right, so I'm going to talk about mean field games, and I didn't know, you know, what would be covered in some of the tutorial lectures. So I'm going to talk about something a little bit off the main beaten path: I'm going to talk about games with major and minor players. And then at the end I will talk about something that I wanted really badly to talk about, but for which I didn't have the time to actually do the computations and prepare illustrations for today. You know, at my age, if you start learning a new programming language, it takes more time than you think. All right, so I'm going to talk about mean field games with major and minor players. And someone told me that this was working, but maybe not. Oh, it defies any logic — this is a left-handed gadget, I guess, not a right-handed one. All right, so this is the typical starting point if the models are models with stochastic differential equations. In the second part of the talk I'm going to talk about an application where the model will not be given by stochastic differential equations, but at least this will give us a little bit of a sense of what we're talking about. I'm just playing with this gadget, but it doesn't seem to work. Why does it go back? Here we go. All right, so we have a major player, whose state will be denoted with the superscript zero, and we have a generic minor player — there will be a large number of them, and this is why I call it generic — whose state will be denoted by Xt. And both states evolve according to a stochastic differential equation; they're Itô processes. But what is important to notice is that the control of the major player, which will be denoted by alpha superscript zero, of course appears in the dynamics of the state of the major player — this guy controls its own destiny — but it also appears in the dynamics of the minor players. And it is in this sense that the major player influences the minor players. Also, the state X0t of the major player appears in the dynamics of the state of the minor player. So if you think in terms of a financial application, you look at the financial system, for example in the US: you have thousands and thousands of small banks, and then you have a small, finite number of systemically important institutions — think of Goldman Sachs — and whatever Goldman Sachs does is going to affect whatever all the other banks do. On the other hand, whatever the local bank in Irvine, California does is not going to worry Goldman too much.
So the dynamics of the major player's state are not affected by the individual states of the minor players; still, you have this mu t entering in both stochastic differential equations, and this mu t represents the empirical distribution of the states of the minor players. So the major player feels the field of the minor players — it does not feel them individually. Goldman, as I said, doesn't care about what the savings and loan in Irvine does; but on the other hand, Goldman may worry about the proportion of savings and loans doing this or doing that. So that's how the system is mean field: it is mean field among the minor players, and this field of minor players is taken into account by the major player, but on the other hand, the major player's state doesn't depend upon the individual minor players. So this creates this asymmetry, and we're going to assume that we have a large number — an infinite number, a continuum if you want — of minor players and just one major player. And this major player could be multi-dimensional; in other words, you could have Goldman, Morgan Stanley, JP Morgan, but only in finite number. Then each of these players, major or minor, incurs a certain cost, and the cost, as usual, is given by an expectation of a running cost — an integral from zero to the terminal time capital T — and a terminal cost, G0 or G. And again we have the same structure, namely the cost to the major player depends upon the field, upon the measure mu t of whatever the minor players do; but on the other hand, for the minor players, the cost, running or terminal, depends upon what exactly the state of the major player is, X0t, or even what actions the major player is taking. Wunderbar.
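In symbols, and only as a rough summary of the model just described (the precise assumptions and the exact lists of arguments are in the book; the notation below just mirrors the verbal description), the dynamics and costs have the form

\[
dX_t^0 = b_0\big(t, X_t^0, \mu_t, \alpha_t^0\big)\,dt + \sigma_0\,dW_t^0,
\qquad
dX_t = b\big(t, X_t, \mu_t, X_t^0, \alpha_t^0, \alpha_t\big)\,dt + \sigma\,dW_t,
\]
\[
J^0 = \mathbb{E}\Big[\int_0^T f_0\big(t, X_t^0, \mu_t, \alpha_t^0\big)\,dt + G^0\big(X_T^0, \mu_T\big)\Big],
\qquad
J = \mathbb{E}\Big[\int_0^T f\big(t, X_t, \mu_t, X_t^0, \alpha_t^0, \alpha_t\big)\,dt + G\big(X_T, \mu_T, X_T^0\big)\Big],
\]

where \(\mu_t\) is the distribution of the minor players' states (conditional on the major player's noise in the formulations that follow).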
All right, so when you formulate a major and minor model, you first look at the problem of the major player and you're going to see how does the major player going to respond to the field of minor player. So you first assume that all the minus player, which are behaving identically, choose an open loop control, so a function phi, which is going to be function of, as I say, time and the two white noise paths. And when all these minor players use this control, then the major player is facing the following problem, the equations for a state is the same as before, and then we have an equation now for the state of the minor player where the function phi comes in. These two equations are coupled because, of course, W0 appears in both of them. And now mu t is going to be taken to be the conditional law of xt given the path of the white noise of the major player. And so the major player problem is going to be to search for an optimal control, phi 0 star, this will be the optimum, minimizing is cost, minimizing is cost, where now the cost depends upon the function phi, which is chosen by all the minor players. But the points that I want to make here is that no matter what, the problem, the optimal control that the major player has to solve to find its best response to the decision, the behavior of this minor player is an optimal control of the McKean-Vlasov type because the distribution of the state here that is only the marginal of xt enters into the dynamic of the equation. So if you formulate the best response of the major player in this way, the problem you have here is an optimal control of McKean-Vlasov dynamics. Now we want to look at the best response of the minor player to the major player. So we're going to assume that the major player chooses a strategy of control alpha 0 given by feedback function phi 0, which is a function of the entire path. And we're going to assume that the field of minor players choose a feedback function phi, which is now a function of two white noses. And once this is done, the dynamics of the two states, x0 and xt, solve this stochastic differential equation, which is a stochastic differential equation, again, of the McKean-Vlasov type because the law of xt conditioned by the white noses w0 enters in this equation. But this is not a control problem. You solve this SDE, which is an SDE of the McKean-Vlasov type. But once you solve this SDE, you use the law of this solution and you inject that into the optimization that the minor player, the typical minor player should solve. And now you have an optimal control of the regular type. This is not of the McKean-Vlasov type. And once you can solve it as a function of phi0 and phi, we're going to denote by phi bar star the optimal open loop control, which is given by a function of the white noses path. Let's assume that this can be solved. Let's even assume that it is unique. And now finding a Nash equilibrium for a system of major and minor player is finding a fixed point to the best response map, namely finding a set of couple of feedback function phi0 and phi satisfying this equality. So I want to emphasize the fact that I want to find a Nash equilibrium for the whole system, including major player and minor player. This is going to be in contrast with a simple example I will give, I will show a slide in a minute. So this, if I can solve that, will give me an open loop Nash equilibrium for the system. What is the closed loop version of this problem? I didn't solve it. Of course, I just formulated the problem. 
But the formulation of major and minor models of mean field games has been all over the map. These models were introduced by Huang, and then by Nourian and Caines, and many people have written papers on them, but I think there is major confusion about how these models should be set up. Anyway, so now, if you want to have a closed loop version of this problem, you do exactly the same thing, where now the controls for the major player and for the minor players are given by feedback functions, but they are functions of the trajectories, of the sample paths of the states. Now, it may still be unreasonable to imagine that I can keep track of the entire path of the state before I make my decision at time t, but at least it is more reasonable from a practical point of view. And now, if you want to have a Markovian version — a version for which we can do actual computations and write PDEs, et cetera, et cetera — you will take the feedback functions phi 0 and phi to be functions of the states at time t only. Now, the reason why I bother so much to write all these annoying definitions is the fact that, in general, these forms of the definition — open loop problems versus closed loop problems — play a major role in game theory when you work with finitely many players. If you work with finitely many players, looking for open loop equilibria will lead to solutions which can be different from closed loop Nash equilibria, even if the open loop Nash equilibrium which you find is in closed loop form. So it's not enough to have an open loop Nash equilibrium in closed loop form to be able to say you got a closed loop Nash equilibrium; that is the case. However — and this is not really a theorem, but it can be proved in many cases, and it's some sort of folk theorem — when the size of the population grows and the number of players tends to infinity, and this is basically the situation we set ourselves in when we do mean field games, these differences disappear in the wash. And this is why, very often, when you study mean field games, you don't care so much about closed loop versus open loop, because whether you go for one or the other, very often the solutions are the same. Unfortunately, when you deal with major and minor players, that's not going to be the case. In the limit n tends to infinity, a lot of averaging is happening: you remember the idiosyncratic white noises attached to each of the players — they're going to force some sort of law of large numbers, and randomness is going to disappear. And as this randomness disappears, the differences between closed loop and open loop disappear as well. That's not the case here, because the W0 and the X0 are going to remain. We have one major player, or a finite number of major players; we have a large number of minor players with their idiosyncratic noises. When the size of the minor population grows to infinity, you get your averaging, but you never average out the noise of the major player — and you saw that the measures that I was taking were conditional measures. So this averaging is not there, and it is likely — and we're going to see examples — that closed loop and open loop Nash equilibria still differ, even in the mean field limit. So, the best responses being defined in these classes of controls, again, finding a Nash equilibrium is finding a fixed point.
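In the same rough notation as before, the equilibrium notion just described can be summarized as a fixed point of the two best-response maps: if \(\mathrm{BR}^0(\varphi)\) denotes the major player's optimal (McKean-Vlasov) response to the field \(\varphi\), and \(\mathrm{BR}(\varphi^0,\varphi)\) the representative minor player's optimal response when the major player uses \(\varphi^0\) and the field uses \(\varphi\), then a Nash equilibrium for the whole system is a pair \((\varphi^{0,*},\varphi^*)\) with

\[
\varphi^{0,*} = \mathrm{BR}^0(\varphi^*), \qquad \varphi^{*} = \mathrm{BR}\big(\varphi^{0,*}, \varphi^*\big),
\]

and the same fixed-point condition is imposed in whichever class — open loop, closed loop, or Markovian — the feedback functions are taken.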
So this is an example of a paper which was started years ago and which is still not finished, but where it looks like there is a major player, it looks like there are minor players, but it is not a Nash equilibrium of the type we would be looking for. When you do contract theory, you have a principal — for my friend Ronea, let's say the regulator of the European community; not the US, because we didn't sign the Paris Accord, it's not for us, this sort of thing. So we have a regulator, and the regulator basically talks to all the electricity companies — RWE, EDF, etc. — and tells them: okay, this is what we're going to do. This is the contract that I give you. You're going to be rewarded for reducing your emissions, and you're going to be penalized if you screw up and go over your cap — whatever the regulator decides. So this is the major player, and then you have the mass of minor players, all the little electricity operators, the little people like EDF, and they're going to react to this regulation and choose a behavior which depends upon the regulation set in place by the regulator. So you also have an asymmetry between a small number of players — the big player being the regulator — and a large number of players, the electricity operators; but now this is different. The Nash equilibrium will be a Nash equilibrium among all the electricity operators, not involving the regulator; the Nash equilibrium will be among the minor players only. So that's a slightly different kind of problem from the one I had before, where I wanted a Nash equilibrium among everybody — in other words, I wanted the major player not to have any incentive to change his contract once the minor players have chosen what to do.

All right, so when can we solve this mean field game? We do not have many large classes of problems. So I'm going to stick to two simple models for which we can actually solve the problems analytically and do some numerics. I'm first going to look at linear quadratic models, and then I'm going to look at models for which the state space is finite, in which case we will not have a stochastic differential equation, but intuitively we can make sense of how things work. So a linear quadratic model will come in this form. The — no, it's still not working. The X0, the state of the major player, will satisfy a linear equation, and I assume that the volatility is constant. And what enters this linear equation is x bar t, which is the conditional expectation — because everything is linear, the measure will come in only through its mean. So the minor players influence the major player's decisions only through their mean. And their dynamics, the dXt, also involve their mean, but the dynamics of the minor player states also involve X0t, which is the state of the major player. And the same thing for the costs, except that now we take them quadratic: linear dynamics, quadratic costs. Okay, so you have the same type of equation. Again, the mean x bar t of the states of the minor players enters the cost of the major player, but that's the only way the minor players influence the dynamics and the cost of the major player. So the reason we can do this is that, again, you know, we can write down the optimization problems that I mentioned before. You have linear dynamics and quadratic costs, so these optimization problems reduce to big matrix Riccati equations.
And so, whether you do that through an FBSDE or through another approach, you know, you end up with big matrix Riccati equations, and hopefully you can solve them. And in this particular case we can actually solve them: in the open loop case directly, and we find a solution. In the closed loop case, things are a little bit more difficult. However, we search for controls in a specific form, because we know in advance that the controls are going to be linear in the states; so we look directly for controls which are linear in the states and in the mean. Now, when we try to check that these controls give a Nash equilibrium, we perturb them with general controls. So it's really a Nash equilibrium that I will find — I search for it in this form, but that's not a restricted notion of Nash equilibrium; it's going to be a genuine Nash equilibrium. And then again, you know, we can formulate the optimization problems in the form of large FBSDEs — and of course they are affine — and we solve them by solving a large matrix Riccati equation. However, in this particular case the two are different. So this means that the open loop and the closed loop solutions are different. So that happens even for mean field game problems, and it happens even in the linear quadratic case. So, as I said, one of the reasons why I went through this little song and dance was to emphasize the fact that this is an issue — which one is the important one? I mentioned that already three or four times.

So I'm going to give you just a very simple, naive application. Let's imagine that you're following a beehive. You want to see how the bees move from one location to another. So how do they do that? The queen is getting old, there is a new queen in the bunch, so they want to get rid of the old queen. So the queen leaves the hive with a certain number of worker bees following her — they didn't notice that there was a young and more beautiful queen, so they follow the old queen. And they go and sit on a branch of a tree and they wait. And then a few bees, called the streaker bees, go and look around in the neighborhood. They try to find a new location for the hive. Once they think they have found a new location, they come back to the tree where the old queen and the workers are waiting. And they start dancing and dancing, trying to convince the group to follow them. And eventually, one of them dances so well that the group decides: OK, we're going to follow this streaker bee. And they follow the streaker bee to the new location. That's a mean field game problem — just kidding. I'm going to pretend that it's a mean field game problem, and in fact a linear quadratic mean field game problem with major and minor players. And we're going to see how it works. So we're going to model only the velocities, because it's simpler; if you model the velocity and the position, it's a little bit more complicated mathematically, because you have hypoelliptic diffusions instead of diffusions, and things are not as simple. So we're going to denote by V0t the velocity of the streaker bee — the bee which is going to go to the final destination, which knows where it is, and which wants to drag the old queen and the bunch to this new location. So that's the velocity of the streaker bee, and the control is alpha 0, up to noise: the streaker bee controls its own velocity. And we have capital N bees with the queen.
And so they control their velocities, alpha i t, and they have their own idiosyncratic noises. What is the cost to the streaker bee? You have three components to the cost, and we take a linear combination of these three components. The first component is the square of the difference between the velocity of the streaker bee and the optimal velocity it would take to go from point A to point B. In other words, this nu t here is a deterministic function such that if the bee used it as a velocity, it would go from point A to point B. The second term says that I want the velocity of the streaker bee not to be very different from the average velocity of the group, because the streaker bee wants to pull the group to the new location. And finally there is just a kinetic energy term: you do not want to be too tired and exhausted, you want to make it to where you want to go. So that's the cost of the streaker bee. Now what are the costs of the little worker bees? They again have a cost with three components. The first component is that they want their velocity to be similar to the velocity of the streaker bee: they want to follow the streaker bee; they decided to do that, so they want to do it. The second term is that they want to stay in a group, so they want their individual velocity not to be much different from the average velocity of the group. And finally, again, they do not want to get exhausted too quickly. So that's the finite player game. You write the limit as N tends to infinity as a mean field game. And this is one example of the numerics that you would get. In this particular case (this is silly, there is no new location) I take the target velocity to be circular. So this is the velocity nu of t, and you see in black the trajectory of a streaker bee following this velocity. And then, at a distance (remember, the position is not part of the model, so they start from different places), there is a plot of the trajectories of 10 of these worker bees, and they try to move around in circles; again the position doesn't matter. This is for a certain set of coefficients, the relative values of the penalties. But if you start monkeying around with these coefficients, even though the optimal velocity is the same, the streaker bee is not even going to follow it, and the worker bees are not going to follow it either. Okay, so that's a typical example. And there was another slide where the velocity nu was linear, and then you could see also that the worker bees were following. So you're going to have to buy the book to find the other plot. And let me tell you how outrageous it is; I mean, François is not even as upset as I am, but they want to sell this thing for $149 a volume. Who is going to buy it? I know people have ways to do it: just get a copy, you don't have to buy it, because this is ridiculous. $149, so multiply by two, that's basically $300. It's just ridiculous. But you would have more pictures. All right. So that's what it is. So this plot is an illustration of what's really behind mean field games. Stefan, when do I have to close? You have to close it. Okay. Okay. All right.
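For readers who want to experiment, here is a toy simulation in the spirit of this bee example. It is not the optimal mean field game feedback computed from the Riccati equations of the lecture: the leader simply tracks the circular reference velocity and each follower tracks a mix of the leader's velocity and the group mean, with gains and noise levels chosen arbitrarily for illustration.

```python
# Toy velocity model in the spirit of the bee example (illustrative gains, not the
# optimal controls): a leader tracking a circular reference nu(t), and N followers
# tracking a combination of the leader's velocity and the group's mean velocity.
import numpy as np

rng = np.random.default_rng(0)
T, dt = 10.0, 0.01
steps = int(T / dt)
N = 50
k_lead, k_follow, k_group = 4.0, 3.0, 1.0
sigma0, sigma = 0.2, 0.3

v0 = np.zeros(2)                 # leader (streaker bee) velocity
v = rng.normal(size=(N, 2))      # follower velocities

for n in range(steps):
    t = n * dt
    nu = np.array([np.cos(t), np.sin(t)])        # circular reference velocity
    vbar = v.mean(axis=0)                        # empirical mean of the followers
    v0 += k_lead * (nu - v0) * dt + sigma0 * np.sqrt(dt) * rng.normal(size=2)
    v += (k_follow * (v0 - v) + k_group * (vbar - v)) * dt \
         + sigma * np.sqrt(dt) * rng.normal(size=(N, 2))

print("leader velocity:", v0, " mean follower velocity:", v.mean(axis=0))
```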
So this is an illustration of what's behind the theory of mean field games, which is based on a result you're going to hear about over and over during the week, and presumably during the many weeks if you stick around: the propagation of chaos. Here it's a simple form of conditional propagation of chaos, in the following sense. What I did here is I took five worker bees, five minor players, and I generated one path for the white noise of the major player, namely the streaker bee. So what we're seeing here is conditioned on the white noise of the major player. Conditioned on that, I take my five worker bees and I look only at this group of five. I simulate their states and I compute the correlation matrix. So it's a five by five matrix, and on the diagonal you have high terms (they should be one), and outside the diagonal it is still relatively high, but definitely non-zero. Now if I redo the same thing, but my five minor players are looked at inside a group of ten minor players (so I redo the same simulation with ten minor players and only one fixed path of the white noise of the major player) and I compute the correlation again, it is much lighter outside the diagonal. And if I take 20, and if I take 50, and if I take 100, you see that my correlation matrix becomes diagonal. Given the fact that we are linear quadratic, this is a Gaussian model, so this means that they're independent. In other words, when the bath, the field of particles, is larger and larger, a fixed number of particles behaves exactly as if they were independent. Here it's conditioned on the white noise of the major player, but among themselves they behave as if they were independent, and this is really at the root of mean field game theory. So this is a mean field interaction, and for large systems we expect propagation of chaos, namely that each single little guy becomes independent of everybody else. All right. All right, so now I want to finish with discussing another example, and this is an example borrowed from a small paper of Kolokoltsov and Bensoussan. They have a small example of what they call a botnet cybersecurity model, and this could also be a model of infection, of disease; it is a simple model. The idea is that now we do not have stochastic differential equations: instead of being Rd, the state space is a space with four states, DI, DS, UI and US. The situation is the following. We all have a computer here, and we may be worried: they're going to have an election; oh no, you did that already, damn it. So we're going to have an election, and we may be afraid that some foreign power may try to mingle with our election. No, that never happens, I don't know. It's a hoax. It's a hoax. So we may worry about someone attacking our computer system. The interest that I had in this particular model, and the reason why I'm still working on this very simple model, is the following, and I may not have much time to discuss it today; I was trying to make a numerical computation for it. The question that I want to address is the following, and I will not be able to address it in full now. Just imagine that we use a mean field game model like Kolokoltsov and Bensoussan.
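Before turning to the cyber-security example, here is a sketch of the correlation experiment just described, with a generic mean-reverting interaction standing in for the linear quadratic equilibrium dynamics of the lecture. The major player's noise path is frozen, the minor players' idiosyncratic noises are redrawn across repetitions, and the off-diagonal correlations of five tagged minor players shrink as the population size N grows. All coefficients below are illustrative choices by the editor.

```python
# Conditional propagation-of-chaos experiment (illustrative dynamics, not the
# lecture's exact equilibrium): freeze the major player's noise path, redraw the
# minor players' noises, and watch 5 tagged minor players decorrelate as N grows.
import numpy as np

rng = np.random.default_rng(42)
T, n_steps, reps = 1.0, 100, 400
dt = T / n_steps
a, b, sigma, sigma0 = 2.0, 1.0, 0.5, 0.5

dW0 = np.sqrt(dt) * rng.normal(size=n_steps)      # one frozen path for the major player

def terminal_states(N):
    """Terminal states of the first five minor players, over `reps` repetitions."""
    out = np.zeros((reps, 5))
    for r in range(reps):
        x0 = 0.0
        x = np.zeros(N)
        for k in range(n_steps):
            xbar = x.mean()
            x0 += a * (xbar - x0) * dt + sigma0 * dW0[k]                 # major player
            x += (a * (xbar - x) + b * (x0 - x)) * dt \
                 + sigma * np.sqrt(dt) * rng.normal(size=N)              # minor players
        out[r] = x[:5]
    return out

for N in (5, 10, 50, 200):
    C = np.corrcoef(terminal_states(N), rowvar=False)
    off_diag = C[~np.eye(5, dtype=bool)]
    print(f"N = {N:4d}: mean |off-diagonal correlation| = {np.abs(off_diag).mean():.3f}")
```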
What this means is that we're going to let every single one of you selfishly, selfishly, make a decision: am I going to buy a computer protection, an antivirus, and install it? Who cares, you know, I never got infected before, I'm not going to be infected now. So everybody makes his own decision selfishly. You have some worry about what you have on your hard disk: your attachments, your files, your PDFs, pictures of your girlfriend or your mistress or whatever that is. You want to protect that, and depending on the value you put on it, you're going to make a selfish decision. Of course, when you do a mean field game, we're going to anticipate, or try to estimate, what the average of the people do. So we're going to look around and see: if 90% of the people in the room are infected, maybe I do not enter the room, or I wear a mask; I'm going to be very careful, because the probability that I'm going to be infected is higher. On the other hand, if no one in this room is infected, why would I buy an antivirus? There's no reason for me to do it, because no one is going to infect me; no one is infected. So we're going to try to guess what is happening with these computers, and we're going to make a decision. But the decision is very selfish. These controls, in this terminology, are called distributed controls: we make the decision on the basis of our own state and nothing else. So this is what we're going to do. In cybersecurity, many, many papers have been written using game theory. But the way things are done is that we assume we have a network of computers and there is a network manager: your department hires some dude to take care of your computers, and the manager decides. Actually, it doesn't ask you: it installs an antivirus on your machines without you being aware of it. So the game is between an attacker, a hacker, and the network manager, not between the hacker and the individual computers. If you want to have a mean field game problem, what we're going to do instead is have a hacker attacking the computers and a bunch of people making their own decisions selfishly. All right. Selfishly, I insist, selfishly. So that's what you do in a mean field game. Now the issue is the following, and that was my question: is it worth spending the money and hiring a computer manager for this network, or should we leave it to people to do whatever they want? There is a price to pay in a mean field game as opposed to a centralized decision by the computer manager: the minimal cost will not be as low if we let people decide on their own selfishly as it would be for the society if we had a regulator making the decision for everybody. And the question is how different they are. If the cost is only twice what it would have been, maybe I don't care, and I'm not going to hire a network manager. On the other hand, if the difference between the two is a factor of 10 or 100, then maybe I'm worried, and I'm not going to let people optimize selfishly. So in game theory, this is what people have called the price of anarchy. When people optimize selfishly, we can think of it as anarchy; I mean, some people do that. Actually the name was coined by two Greek people. But that's what I want to study for this problem.
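Schematically, and in notation introduced here only for illustration (it is not taken from the slides), the comparison the speaker has in mind is the ratio of the social cost at the selfish mean field game equilibrium to the best social cost a central planner could achieve:

```latex
% Schematic definition (editor's notation): J^social is the per-player social cost,
% \hat\varphi^{MFG} and \hat\mu^{MFG} the equilibrium control and distribution.
\[
\mathrm{PoA}
  \;=\;
  \frac{J^{\mathrm{social}}\bigl(\hat\varphi^{\mathrm{MFG}},\,\hat\mu^{\mathrm{MFG}}\bigr)}
       {\inf_{\varphi}\, J^{\mathrm{social}}\bigl(\varphi,\,\mu^{\varphi}\bigr)}
  \;\ge\; 1 .
\]
```

A ratio close to one says that selfish optimization loses little; a large ratio is the case where hiring the network manager pays off.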
So let's see. Oops. What are we going to do? When the state space is finite, the dynamics are given by a Q matrix. The Q matrix is a function of two states; it's a matrix, and the rows have to sum up to zero. But it is going to have two parameters: one will be the measure, the statistical distribution of the states of the other players, and then a control, to affect the rate at which the state changes from one state to another. To make my life easier (and this is already what is in the model of Kolokoltsov and Bensoussan) we're going to take the control to be zero or one. Zero: I'm not going to change anything, I'm happy with what is going on. One: I start panicking, I see some of my neighbors being infected, so I'm going to change my level of protection; I'm going to go from unprotected to protected, or from protected to unprotected. What I emphasize here is that, again, we're going to take feedback Markovian controls: they're going to be functions of the state Xt, the state being, as I said, one of these four states. And if I look now at the evolution of the distribution, it is given by a Fokker-Planck Kolmogorov equation, which is almost a linear equation, except for the fact that the measure enters the coefficients. So it is a form of McKean-Vlasov equation, even though it is for a continuous time Markov chain. We can write the Hamiltonian, and we can minimize the Hamiltonian when we are lucky; if the set of controls is finite, that's going to be easy. So we have a minimized Hamiltonian, and we also have an HJB equation. That's easy. In the particular case of this model of Kolokoltsov and Bensoussan, if the control is zero (I do nothing), this is the transition matrix from one state to another: the rows and columns are labeled DI, DS, UI and US. And these coefficients have some meaning: beta, the rate at which you get infected if you're defended, et cetera, et cetera. But you see the measure mu appearing in the transitions, in the Q matrix. So it is not going to be a typical linear Fokker-Planck equation; it's going to be a nonlinear Fokker-Planck equation. In any case, you can solve your mean field game problem. And if I plot here the time evolution of the distribution in equilibrium: the distribution is four probabilities, the probability to be defended and infected, defended and non-infected, not defending myself and infected, and not defending myself and not infected. If you start with equal probabilities for these four states, things change and evolve over time. If you start with a delta function, with all the computers in one specific state (I cannot read which one from here), the distribution evolves in this way. What you can notice is that these distributions become constant over time. And so that's basically screaming for an ergodic model, a steady state model. And indeed, this is what Kolokoltsov and Bensoussan studied: an infinite horizon ergodic model, where they took one coefficient to infinity to make their life easier. But you can solve that completely. We can talk about the master equation, solve it, and recover the same result. I'm not going to worry about that. So what I want to emphasize is just two words about this price of anarchy bound. In other words, you want to compare. So now let's imagine we get rid of the hacker; the hacker is simply a parameter in our model. We're going to look only at the mean field of the computers in the room.
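To make the nonlinear forward equation concrete, here is a small sketch that integrates a four-state distribution under a Q matrix of the kind just described, with a naive threshold rule standing in for the optimal zero-one control. The rates, the attack intensity and the switching rule are the editor's illustrative guesses, not the calibration of the paper of Kolokoltsov and Bensoussan.

```python
# Sketch of the nonlinear forward (Fokker-Planck / Kolmogorov) equation for a
# four-state botnet-type model, states ordered [DI, DS, UI, US] =
# [defended infected, defended susceptible, unprotected infected, unprotected susceptible].
# All rates, the attack intensity and the threshold feedback are illustrative only.
import numpy as np

beta_D, beta_U = 0.3, 0.6     # infection rates (defended / unprotected) per infected fraction
q_D, q_U = 0.2, 0.4           # direct infection rates coming from the hacker's attack
r_D, r_U = 0.5, 0.4           # recovery rates
lam = 0.8                     # rate at which a computer can switch its protection level
attack = 0.6                  # hacker's attack intensity

def Q_matrix(m):
    """Controlled Q-matrix as a function of the current distribution m (rows sum to zero)."""
    infected = m[0] + m[2]
    switch_on = infected > 0.3              # naive feedback: protect when many are infected
    inf_D = attack * q_D + beta_D * infected
    inf_U = attack * q_U + beta_U * infected
    Q = np.zeros((4, 4))
    Q[1, 0], Q[0, 1] = inf_D, r_D           # DS -> DI infection, DI -> DS recovery
    Q[3, 2], Q[2, 3] = inf_U, r_U           # US -> UI infection, UI -> US recovery
    if switch_on:                           # unprotected computers turn protection on
        Q[2, 0] = Q[3, 1] = lam             # UI -> DI, US -> DS
    else:                                   # protected computers drop their protection
        Q[0, 2] = Q[1, 3] = lam             # DI -> UI, DS -> US
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

m = np.array([0.25, 0.25, 0.25, 0.25])      # start from the uniform distribution
dt, T = 0.01, 20.0
for _ in range(int(T / dt)):
    m = m + dt * (Q_matrix(m).T @ m)        # forward equation dm/dt = Q(m)^T m
# with a bang-bang rule the distribution may hover near the switching threshold
print("long-run distribution [DI, DS, UI, US]:", np.round(m, 3))
```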
And so there is, I don't know, a 60% intensity of attack by the hacker. And we want to decide what the cost to the group is if we each defend ourselves without following a policy imposed by someone else, versus if there is a central planner making the decision for everybody and telling you: this is what you're going to do, this is how you're going to behave. As I said, the cost to everybody will be lower if the central planner comes in. And the idea of these price of anarchy bounds is to find how much worse off we are by letting everybody make their own decision. So, just two minutes. If we have N players, N computers in the room, we can imagine that the state of each computer evolves according to a continuous time Markov chain, and each player is going to minimize a cost of this type. Here, this is the cost to player i, hence the state Xit, and this is the empirical distribution of all the other computers. And what would be the cost to the group? We're going to take a cost per player: one over N times the sum of the costs of all the players. If you play around a little bit and then take the limit as N tends to infinity, and if there is no common noise, the empirical measures are going to converge to a single measure, and the social cost is going to be of this form: the running cost, including the measure, and the terminal cost, including the measure, but now integrated out, because of this one over N sum from one to N with the same measure mu t. So this is the social cost if everybody uses the control phi, and if mu is the distribution of the states of the players. And so what we're going to have to do is decide how we study this social cost. The mean field game problem is one way to study it, where the control phi and the measure mu t are tied by a very tight constraint, while the social cost optimization leads to a very similar optimization in which the control phi and the measure mu are not tied by such a tight constraint, and the minimum will be lower. And again, as I said, I don't have time to go any further, but when you study the social cost problem you now have a control problem of McKean-Vlasov type; even though it is purely deterministic, it lives in the space of measures, and you have to solve an HJB equation in the space of measures. When the state space is finite, things are easier, but I'm done. Obviously the guy is tall, big, young, so I'm not going to fight. I'm done. Thank you. Thank you very much. One quick question before lunch. I want to ask about the open loop strategy. I don't quite understand, in fact, because the SDE for the representative agent is given in a weak sense, with the Brownian motion Wt. I'm not sure I understand the question yet. Yes, it is used to express the limit behavior in law, no? When you write the strategy alpha, it depends on the path of the Brownian motion W. For me, the Brownian motion W is just a tool to write the SDE for the representative agent in a weak sense, I mean. So, did you understand my question? The remark I was making is that if I think of a game or control problem where I'm going to actually implement the optimal control, in other words, when I see the system evolve at time t, I want to be able to implement my control. I want to decide where the bees are going to go, how to change the velocity. I have to have a function of something that is hopefully available to me.
And the question is that for open loop, you take a function of the path of the Brownian motion, and this is not something that we observe directly. Yes, we don't observe it. So mathematically, this is a progressively measurable process. So it's cool: there are spaces of these processes; we put an integrability condition, the expectation of the integral from 0 to T of the square is finite; we have a nice Hilbert space, and we can do a lot of things mathematically. However, if you want to implement it practically or do it numerically, unless you're going to solve that during the summer school, this is a tough challenge. So this is what I say. Mathematically, in the first papers we wrote with François on the subject, we were using open loop controls and open loop Nash equilibria. But unfortunately, I believe this is not very practical. It's useful mathematically, but not always practical. Thank you. Thank you again. Thank you. Thank you.
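For reference, the distinction discussed in this exchange can be summarized informally as follows (the display is the editor's, using the notions stated in the talk):

```latex
% Informal contrast of the two classes of admissible controls discussed above:
\[
\text{open loop:}\quad \alpha_t=\phi\bigl(t,(W_s)_{0\le s\le t}\bigr)
\quad\text{(progressively measurable in the noise path)},
\qquad
\text{closed loop / feedback:}\quad \alpha_t=\phi(t,X_t),
\]
```

with the two notions leading, in general, to different Nash equilibria, as emphasized earlier in the talk.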
We introduce a new strategy for the solution of Mean Field Games in the presence of major and minor players. This approach is based on a formulation of the fixed point step in spaces of controls. We use it to highlight the differences between open and closed loop problems. We illustrate the implementation of this approach for linear quadratic and finite state space games, and we provide numerical results motivated by applications in biology and cyber-security.
10.5446/57384 (DOI)
Thanks, first of all, for having me in Luminy. I think it's my third or fourth time, and it has always been a pleasure to be here, especially this time. Is there a feedback between different microphones? I just had the impression. Okay. I was supposed to say some very basic things about BSDEs, about backward stochastic differential equations, this important tool of control theory from the stochastic side. Now I see that there are quite a few experts in the room; maybe in my first part I will also tell you something new. The second part will be more, as René Carmona was putting it, along beaten paths: I will present basic theorems about existence, uniqueness, and the most important properties of BSDEs, if there is time enough. In the first part, I would prefer to derive BSDEs as a natural tool which falls out, so to speak, of the needs of dealing with a problem of financial mathematics in incomplete markets, quite naturally, as you will see. There I rely on an old paper by Ying Hu and my student Müller, as you will see. There are no similarities with the first two talks this morning, just one little thing concerning the talk of René Carmona. I am not talking about big players and small players, but there is one thing that I would like to mention: one of the two big players that you mentioned, Goldman Sachs, was born just five kilometers from my hometown. And they were small players at the time when they emigrated to the United States: at the turn of the 19th to the 20th century they were cattle traders in a rural area of northern Bavaria. Apart from that, let me come to the example that I want to present to you, which is from financial mathematics. I'm dealing with a small agent who is facing the problem of having to insure a product that he produces and sells. So he has two sorts of income. He has income from trading on a financial market. This financial market is given by a finite set of assets whose price processes are described here. We have a bond, which is trivially taken to be one, and then we have d stocks, which are of the geometric Brownian motion type. Both sigma and b can be random, but anyway, there are d stocks, and the proportional increase of their value is given by this volatility matrix sigma ij. So we have an n-dimensional Brownian motion, j here goes from 1 to n, and then we have d stocks, indexed by i here, and there is a drift vector which is given by bi. Here is the evolution of the price processes. If you want to write this in vector notation, or in scalar product notation (I forgot the scalar product brackets; they will follow later), you get, for the i-th price process, sigma i multiplied, as a scalar product, with the Wiener process here, and then you have the bi. If you are able to extract the matrix sigma from this equation, you produce a new process which is usually called the price of risk process, theta. So if you have enough ellipticity in your matrix, then you can take the inverse of sigma sigma star, multiply it by sigma star, and call this whole thing theta, so that after multiplying by sigma you get your b again. So after extracting the sigma from the second part, from the drift part, you get exactly the theta. Okay, so theta is the price of risk process, and we assume enough ellipticity. Anyway, this will not be important for what follows, because what I want to emphasize is just how a BSDE arises in this model.
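As a quick illustration of this construction, with made-up numbers for sigma and b, the price of risk can be computed and checked in a few lines:

```python
# Price of risk as just described: theta = sigma^T (sigma sigma^T)^{-1} b,
# so that sigma @ theta = b. The volatility matrix and drift are illustrative only.
import numpy as np

d, n = 2, 3                           # d stocks driven by an n-dimensional Brownian motion
sigma = np.array([[0.2, 0.1, 0.0],
                  [0.0, 0.3, 0.1]])   # d x n volatility matrix, assumed of full row rank
b = np.array([0.05, 0.08])            # drift vector

theta = sigma.T @ np.linalg.solve(sigma @ sigma.T, b)   # price of risk process (one time slice)
print(np.allclose(sigma @ theta, b))                     # True: sigma theta = b
```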
So there is a second source of income for the agent, which will come on the next slide. For the moment, can you read the green parts of my slides? Correctly? Okay. So this model was treated by Nicole El Karoui and Rouge in 2000 for convex constraints. You will see as we go along that we can deal with non-convex constraints quite as well by our approach. So the convex, or no, our closed constraints will be given by a random set, which I call C tilde, a d-dimensional set, of the same dimension as the price processes. And I will call A tilde the investment strategies of my agent. So pi 1 will be the amount that he invests in price process number one, that is, in the proportional increase of price process number one, and pi d will be the one for the d-th component of the price process. We will assume that our investments take their values in this closed set. This closed set can, for example, be the set of integers: if I'm forced to invest only integer amounts of some currency, that is an example of this closed set. Okay, so once again, it is not supposed to be convex. Then there are some conditions that we need in order for our analysis to work. So we assume that e to the minus alpha times this stochastic integral (you will later see that this is related to the utility function; you can see it already here), that is, e to the minus alpha times the gain from investment taken up to time tau, forms a family of uniformly integrable random variables as tau runs through all possible stopping times in the interval from zero to the terminal time T, with which our trading interval ends. Okay, so the investment processes are written here. X pi at time t is the wealth you obtain at time t by investing with the strategies pi, so pi 1 up to pi d, in our financial market. The proportional increase of the prices is here. So this is the gain that you get from investing into all of the assets between 0 and t; you have to take the integral. And we can alternatively describe this by extracting the pi. Then, as you remember, dSi over Si could be written as sigma i times dW plus the price of risk theta t dt. So here we are: I have already extracted the pi, then I extract the sigma i, and then I have dW plus theta ds here. So this is the wealth process of the small agent investing with a vector of strategies pi up to time t. Okay, so the preferences of our agent are measured by an exponential utility function, and the utility will be taken from terminal wealth. The utility function is this exponential function, with alpha fixed, for all x in R. You can still read the green? Okay, I can hardly read it. Okay, so now comes the first formulation. No, okay, I forgot; now it comes. The second source of income for my agent is due to a liability that he has at time capital T. So he has, for example, to pay a certain amount of money to somebody who has bought a financial product from him. And usually we assume that F contains uncertainty which is not generic to the financial market, which could come, for example, from climate or from his usual business with some commodities. So the market is not complete, and the agent has to hedge, so to speak, the uncertainty which resides in having to pay this amount F at the end of the trading interval.
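To keep the notation straight before the optimization problem is formulated, here is a compact summary of the setup just described; the display is the editor's, following the speaker's conventions:

```latex
% Wealth from trading, exponential utility, and the objective with the liability F:
\[
X_t^{\pi} \;=\; x+\sum_{i=1}^{d}\int_0^t \pi_s^i\,\frac{dS_s^i}{S_s^i}
          \;=\; x+\int_0^t \pi_s\,\sigma_s\,\bigl(dW_s+\theta_s\,ds\bigr),
\qquad
U(y)=-e^{-\alpha y},
\]
\[
\text{and the agent maximizes }\;
\mathbb{E}\bigl[\,U\bigl(X_T^{\pi}-F\bigr)\bigr]
\;\text{ over admissible strategies }\pi\text{ with values in }\tilde C .
\]
```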
Okay, so now we come to the formulation of the utility optimization problem for that agent. So once again, he has two sources of income. First, investing into the financial market with his price process x, x pi. And secondly, at the end of the trading time, he has to pay this liability f. And f has more uncertainty usually than is in those deep price processes in the brown emotions that drive those price processes. So the optimal utility problem, the optimization of utility problem of our agent that our agent faces is this. So you take the utility of your income from trading on the financial market, that's the red part, minus what you have to pay at the end of the trading time, minus your liability f. You take the utility from that. You take the expectations to expected utility and you want to find the optimal strategy. I already wrote pi in a tilde here. So a tilde once again was the set of natural admissible strategies. Now I do a first reformulation of my problem. In fact, I will do four reformulations and after the fourth, you will see how everything can be formulated in terms of backwards and so on, so cast a differential equation. So here is the second reformulation. Before I give it, I have to introduce those effective processes. So you saw that here we extract pi and we multiply by sigma. I call this product P. So pi sigma is P. So it's a new effective strategy if you like. I just multiply my natural strategy with a volatility matrix. And then I have an effective set of constraints. I take my old set C tilde of constraints and I multiply from the right-hand side with a volatility matrix. This gives me the set of effective constraints C. And I do the same thing with my admissible strategies, multiply by sigma from the right-hand side and get this A. I call this A now, P, C, and A. So here is the price process once again in terms. Now the wealth process of the agent due to investment into the financial market, if we use strategy number P, strategy P, so it's x plus this integral where P is integrated stochastically against the Brownian motion plus theta ds plus the price of risk ds. So this is the deterministic integral. OK, so here is the second formulation. Basically, it's the same as before, just with the new quantities P, C, and A. So V of x, the wealth function is the supremum overall strategies P in the admissible set A. And then I take my expected utility from terminal wealth given by the investment in the financial market minus the liability. So you can also write it like this. If you express the wealth process xP as it was defined here. OK, further on. So now I want to deduce everything from the so-called martingale optimality principle. What the martingale optimality principle has to do with BSEs, I will explain on the next couple of slides. But let me now first talk about martingale optimality. So assume that we can construct a family of processes, Rp. So for each effective strategy of investment, I get one of those processes. And now I will require four properties. The first property is that it is constant at initial time. And this constant, in fact, does not depend on the strategy. That will be something that will be required later. The second property is that at terminal time, the process takes the value which is given by my utility of investment into the financial market minus the liability. So terminal wealth, not the expected terminal wealth but terminal wealth. The third requirement is that my process is a super martingale for any effective strategy. 
And in the fourth requirement, I say that for a particular strategy, p star, which will be the optimal strategy or one of the optimal strategies, it must not be unique. For this strategy, it's for p star, I have that my Rp star is a martingale. So for at least one p star, I want it to be a martingale. It need not be exactly one. Because convexity would give exactly one. But we are not in a convex setting. So why does this solve if I have such a family of processes, why does this solve our optimal investment problem? Let us just give the argument. OK, so I want to optimize, as you saw on the previous slide, I want to optimize the expected terminal wealth from investment in the capital market minus liability. So this I want to maximize. So now let us take for any strategy p, I have expected utility, which is given here. Now I, so this is terminal utility, expected terminal utility. OK, I say this expected terminal utility is equal to the expectation of RTP, because RTP was supposed, second line here, to represent the terminal utility. OK, so here I have the expected terminal utility in terms of my family of processes Rp. I know from the third line that my Rp is a super martingale, so that the expectation of RTP is less than or equal to the initial expectation, R0p. Now my initial constant does not depend on the strategy, so this is the same as V of x here as, sorry, as V of x does not depend on the strategy. So for p star it would be the same. You can even go from this line to this line immediately, now from this line to this line immediately, the initial value, the constant, does not depend on p. So I have the same for my hopefully existing optimal strategy p star. So the expectation of R0p and R0p star are the same. And this is the expectation of terminal wealth from this strategy p star because fourth line for this particular strategy, my Rp is a martingale, so that I have equality here. In other words, what I have ultimately estimated is this quantity for any strategy p by this quantity for the particular strategy p star, and this states that the p star is linked to my optimal utility. So with constructing a p star, I have solved the problem of getting optimal utility. So p star will be an optimal strategy. Now this gives me my next formulation. Now I come to construct this family of processes, and for this purpose I use backward stochastic differential equations. So assume that I can create. I wrote already I introduce a BSD into the problem. Let me formulate it a little bit more carefully. Let me say I want to introduce a dynamics into the problem, which at time t gives me, so to speak, the temporary value of my ultimate utility f. So this will be the value at time t of my terminal liability f, and it is given by a backward stochastic differential equation. So the backward dynamics is written here. So you have a stochastic integral with a control process z, dw, and you have a so-called generator f of my BSD e. And what I want to do now is I want to construct this f from the necessities of my utility optimization problem, from the non-convex constraints, from the optimization from the properties of f, and so on. So I want to construct the generator of this so-called backward stochastic differential equation. Let us rest a moment at this place to say that I could also write this equation in a forward way. What would I have to do? 
So if I write the equation for time equal to 0, I would get y0 equal f minus the integral from 0 to capital T zs dws minus integral from 0 to capital T f of s zs ds. OK. In this equation, the terminal variable f is given. y0 is not fixed. y0 is given by nature once f has been determined. Here is the equation. Now to write it in the forward way, to write this backward equation in the forward version, you just have to, well, you have yt. yt is yt minus y0 plus y0. So it's y0 plus, and then I have to subtract yt from y0. Now y0 from yt, so I have to subtract this from what I wrote on the slide. What will that be? So here, f drops out, and I get this with a plus sign, and the integral from little t to capital T with a minus sign. So I get an integral from 0 to little t zs dws with a positive sign. And for the generator integral, it's the same thing. I have it with a negative, well, minus here with a positive sign. And here with a negative sign, so plus integral from 0 to t f of s zs ds. This looks like a forward SDE, but of course it is not, because my initial value is not prescribed. It is given by the backward equation. So I should add y capital T is equal to f in order to make this a complete system of equations. So this forward equation plus this condition is, so to speak, equivalent to my backward equation. But I can write it in a forward way. And as you will see, in deriving the dynamics and in calculating the f, we will need this forward representation. So it's a backward equation, which is different from forward equations by the fact that the initial value is not prescribed. It cannot be taken to be, it will be deterministic, but it is not described if the filtration is simple at the beginning. It will be deterministic or almost surely deterministic. But I don't know what the y0 is. It depends, strongly depends on f, and of course on the generate. OK, so with that, let's go back to the business of describing. So what we had on the previous slide. So we wanted to have this family of processes, rp, with this four properties. So constant, independent of the strategy at the beginning. At the end of the time, I want to have the terminal utility from the two sources of income. I want them to be super martingales for all p, and I want them to be martingales for exactly 1 or for at least 1 p star. OK, now this is, as I said, the temporary value of my terminal liability. And what I do in order to construct my processes, rt, with index p, I just plug them instead of the random variable here. So I replace the random variable f by its temporary value given by yt. So this will be my rtp. Maybe I should have taken little t here as well. That is a mistake. I'm sorry for that. So little t here and little t here, because I want my processes to be adapted anyway. This would not be the case here, of course. So OK, now let's check the four properties. The first property is it is constant at initial time with a constant independent of the strategy. This is in case the fact, because at initial time, my investment process will be little x, and my backward sarcastic differential equation will take the value y0. y0. This does not depend on p. So here I have a constant which does not depend on p. At terminal time, I want my terminal utility to be described. This is also true, because at terminal time, this is correct. So I have xt, x capital Tp here, and y capital T is f. OK, so this is my terminal utility. 
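Collecting the formulas of the last two paragraphs in one place, in the sign convention used by the speaker here (again an editor's summary, not a slide):

```latex
% Backward and forward forms of the BSDE for the temporary value Y of the liability F,
% and the candidate family R^p built from it:
\[
Y_t \;=\; F-\int_t^T Z_s\,dW_s-\int_t^T f(s,Z_s)\,ds ,
\qquad\text{equivalently}\qquad
Y_t \;=\; Y_0+\int_0^t Z_s\,dW_s+\int_0^t f(s,Z_s)\,ds,\quad Y_T=F,
\]
\[
R_t^{p} \;=\; U\bigl(X_t^{p}-Y_t\bigr)\;=\;-\exp\bigl(-\alpha\,(X_t^{p}-Y_t)\bigr).
\]
```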
In between, I have to have that for every p my process is a supermartingale, and for a certain p star it is even a martingale. Let us check this. So how do you get this? These are the two conditions that are not yet fulfilled, and they are in fact what will lead to the description of our generator function f. So how do we construct f from those two conditions? This is what will be on the next slides. By the way, I hope that you can follow the slides. As soon as you find that I'm moving too fast (I'm not moving fast; I don't have that many slides for the whole session, around 30, I think, that's all, and I'm not sure whether I will even cover all 30 of my slides), feel free to interrupt me, and I will go to the board and explain things more precisely if things are not precise enough on the slides or if I'm moving too fast. So once again, we want to design an f for this BSDE so that these two conditions are fulfilled. For that purpose, we have to go into the definition of our processes R t p, and we have to describe the dynamics of the backward process and the dynamics of the forward process, and see what we get. So here we are. R little t p; so here I have it again, the mistake: X little t p should be here, not X capital T p; minus Y t. And the Y t is here. So what do I get? Minus exp is the definition of the utility function: minus exp of minus alpha times the wealth. Here is the wealth. So here is the initial wealth: the initial value little x of my investment process X p, and here is the initial value Y 0 of my backward stochastic differential equation. So I extract this quantity. The exponential of a sum of objects is the product of the exponentials, so I can extract, with the minus in the exponential, exp of minus alpha times x minus Y 0. And what is the rest? The rest, of course, is again exp, with a minus once again, of minus alpha, and then comes the forward dynamics. What was the forward dynamics? X t p was little x plus the integral from 0 to t of p s dW s plus the integral from 0 to t of p s theta s ds, with the process of market risk theta here. So this was the forward process, and here is the backward process. Now I have to subtract the two. Here it is. From subtracting the martingale part of the backward process from the martingale part of the forward process, I get the integral of p s minus z s dW s, as you see. And for the other part, with a minus comes this expression and with a plus comes this expression: the integral of p s theta s ds with a plus; and here I have a minus, a minus, another minus, so I have f of z s minus p s theta s here. And I want this to be a supermartingale for any p and a martingale for a particular p star. How do I get that? Well, let me first of all extract the contribution of the initial values, and then make the martingale part in the exponent an exponential martingale. So how do I make this part here an exponential martingale? You know your Girsanov theory, or, well, the theory of linear stochastic differential equations: what I have to subtract is alpha squared over 2 times the quadratic variation of this process here, which is the integral from 0 to t of p s minus z s squared ds. If I subtract this from this martingale part in the exponent, then the exponential of the process that I'm encircling now is a martingale. So this is a martingale. What is the rest? OK. So I subtracted alpha squared over 2 times this integral here. Then, of course, I have to compensate.
I'm compensating it by writing, well, here I don't have a minus, so I have the minus here, the minus of the exponential. So I have to compensate this red expression here, alpha square over 2 times this integral of the square ds. I compensate by writing plus alpha square over 2 ps minus zs square ds in the integral. The remainder was just the blue generator to be calculated minus ps theta s multiplied by alpha. So alpha ps theta s and alpha f of s zs. So here, the red part, this initial object doesn't do any harm, multiplied by this. This is the red part, m. I call it mp. This will be a martingale. And the rest will be something. In fact, it is a process of bounded variation because there is no stochastic integral in the exponential involved anymore. So it's a process of bounded variation. And it should be such that the product of the two terms is a super martingale, and for certain p star, the product is a martingale. So how do I have to choose it? Now let us take this kernel here. So let me extract alpha. The alpha will not do any harm. So if I extract alpha from here, I have f minus p theta plus alpha over 2 times ps minus zs square. I call this function q. It depends on p possible values of my strategy, and it depends on z the possible values of my control process. So I call it q of randomness p and z. It will be f of randomness and time, of course. f of dot z here minus p theta. Theta is also taken to be a value in r plus alpha over 2 times p minus z square. This function is the integrand of a Lebesgue integral, which I have to take with a minus of the exponential to make up my process at. And this process, how do I get that the product of those two processes is a super martingale? Well, I will have to require that at is non-increasing. So under which conditions at will be non-increasing? Here it comes. So here is the third formulation of my problem. You remember, initial state and final state were treated. And we were only left with the question, for any p we have to get a super martingale, and for particular p star we have to get a martingale. So these are the two conditions that follow from this after defining this kernel function that arises in this integral here. So what do I have to require? I want, once again, I want this process to be non-increasing. So here I have a minus of an exponential. So I want this process to be non-decreasing. It has to increase. So what I want is that the kernel, alpha is positive. I extracted alpha, alpha is positive. So f minus p theta plus alpha over 2 times p minus z square, q, in other words, should be non-negative to make this process non-decreasing. Here we are. So I want it to be non-negative for any p and z, for all p, and for the martingale condition. So for the martingale, of course, this is already a martingale, so we want this process to be constant. In other words, I want it to have an integral that is constant. In other words, the integrand should be 0. So I want the integrand, q, at p star, to be 0. And this for at least 1 p star. How do I get that? Here is the q. So it was f minus p theta plus alpha over 2 p minus z square. What I do in order to show that these two conditions can be fulfilled, or to see how they can be fulfilled, is I do a quadratic expansion of that expression. So what do I have? I have p minus z theta with a minus, but then I have to compensate with a minus z theta. So now I have an expression which compares to this one, which was already there in the beginning. So alpha over 2 p minus z square. 
And here I have p minus z theta. To complete this, to complete the square in this expression, what do I have to add? I have to add 1 over 2 alpha times theta square. Then this object here will be a square. It is here. I can extract alpha over 2. And then I have p minus z plus 1 over alpha theta. So this is that expression here. The z theta that I have to subtract here in order to compensate for the addition of z theta here, and the subtraction of 1 over 2 alpha theta square, which compensates for adding this here. That goes by itself. So what I end up with is f of dot z plus this quantity minus z theta minus 1 over 2 alpha theta square. Now this quantity will always be positive. And I want it to be, so to speak, commensurated with f. Remember that we have our non-convex constraints. So the p was supposed to take its values in our possibly non-convex set C. So my p minus z plus 1 over alpha theta square, this z plus 1 over alpha theta. You imagine as a point lying somewhere in the space. And then I subtract the p from it. And the p is supposed to be in my non-convex constraint set C. How small can this quantity get? It gets at most as small as the distance squared from the constraint set to this vector z plus 1 over alpha theta. It's always bigger than that, square. So this is the distance d from the constraint set C to my vector z plus 1 over alpha theta. It's always bigger than that. And, well, how do I get the strategy p star? I just have to find a p star which realizes the distance. And for that, I just need closeness of C, nothing else. No convexity, just closeness. I just have to find a point on the boundary of C that realizes the distance of the set C from my vector. So that, well, ultimately, we come up with a conclusion. Now, how do I have to choose my f? So once again, to the last slide. So I want my f to compensate for this square distance. And then I have to take into account that I have some quantities here that need to be compensated. So I want q to be non-negative. So in other words, f should be bigger than or equal to the negative of this red and green part. And the red part is at least as big as the distance squared. So I want my f to be minus the square distance plus z theta plus 1 over alpha theta squared. Here it is. And I have to take this alpha over 2 in the beginning. So this must be my generator. f of z minus alpha over 2 times the square distance from my constrained set to the vector z plus 1 over alpha theta. Theta comes from the price of risk. And then I have to compensate z theta in 1 over 2 alpha theta squared. So this guarantees the super martingale condition for all p. And if I'm able to find a p star which lies on the boundary of z, of C, so for which this distance is realized, I'm having a martingale. I'm realizing that, well, yes, as we saw, that my q p star z is equal to 0. So p star is described by the condition that d of C z plus 1 over alpha theta is equal to d of p star. So I want a p star on the boundary which takes the minimal distance from any vector in C to this particular vector. This will guarantee my martingale condition. So what I'm left with in order to solve my optimization problem is just, well, to find a p star minimizing the distance. So here I have depicted a situation in which you see that multiple solutions are possible here. Because I'm only requiring closeness and not convexity. This blue set here is my constrained set. 
And if my vector lies here, then I have depicted an arc on which the distance to my vector is always the same along the whole arc. So I can have multiple solutions of the problem, but I have solutions. OK? Well, and this leads me to an existence theorem. Of course, we need to verify, and this was tacitly assumed in our derivation, whether the BSDE has a solution. We just calculated the generator, and what you get here is, you see, a square: the Z appears squared. That means we have what is called in BSDE theory a quadratic BSDE, quadratic in the control variable. This quadratic BSDE has to be solved, and then we are done with our optimal investment problem. So here is the theorem. I take the unique solution of the BSDE, and here is my BSDE; the f was constructed, the f is once again here. If such a solution exists, then the value function of my utility optimization problem, V of x, is given by the utility of the initial wealth minus the initial state of the BSDE, that is, of x minus Y 0. And the optimal trading strategy exists and takes its values in the projection onto my constraint set C of the vector z plus 1 over alpha theta, the set of all vectors in C that realize the distance; on this arc, it's the red arc here. So the optimal trading strategy p star is given by something which lies in the projection onto my, of course random, set C t. So you just have to solve the problem of finding a projection which is non-anticipating, or something like that. This is what the problem boils down to once you have solved your BSDE. Now, of course, this BSDE is a quadratic BSDE. And now that I want to present the basic ideas of how to solve BSDEs, I will not talk about quadratic BSDEs; I'm aiming at deriving an existence result and some other properties of solutions of BSDEs with Lipschitz conditions on the generator, with which one sees that also this problem with quadratic generators can be solved. OK? OK. Just a few remarks concerning the solution. Existence and uniqueness for BSDEs which are quadratic in Z, that was dealt with by Kobylanski in 2000. I need a measurable selection theorem to identify my optimal strategy, or one of the optimal strategies, in the projection of my constraint set. And I need BMO properties of those martingales, for which the integrability, the uniform integrability condition that I wrote on my first slide, is good enough. Uniform integrability of exponentials: this will give me all those properties. OK, and for the remainder of this session, I now propose to talk about, well, more beaten paths: existence and uniqueness for BSDEs with Lipschitz drivers. In the end, I hope to be able to derive some properties of solutions which already point in the direction of being able to solve such more complicated BSDEs, whose generators are only locally Lipschitz. OK? OK. So, the spaces on which we work. The time horizon will be T as before. m is the number of dimensions I have for the values: so I'm not working in one dimension as in the simple example that I just treated, I work now in m dimensions, so R m, yeah? The number of Wiener processes in my BSDE will be n: I have an n-dimensional Wiener process, W1 up to Wn. Its natural filtration is here, and it's the completed natural filtration, so as not to have any problems with adaptedness and so on. So L2 of R m is just the usual space of F T measurable random variables, normed by the usual L2 norm.
H2 of rm, these are now processes that qualify for the stochastic integrals in the stochastic integral part. So this is the space of all adapted processes. And the norm is given by the L2 norm on the space 0t cross omega. So this is the square, the L2 norm on the space 0t cross omega. Basically the same as this. H1, the same, oh, yeah. Yeah, the same object just with the square root inside the expectation. So it will not be very important. What will be important, however, is this space of processes which is, well, point by point it's the same as H2 of rm, this space here. But it's normed in a different way. And this norming will then lead us more easily to solutions of the BSEs that we consider. So I'm not taking the norm that I described here. I'm blowing up the integrand, xt square, by the factor e to the beta t. OK? So this is the two-beta norm on the space of adapted processes that qualify for stochastic integrals. And H2 beta rm, I call this space with this norm. So it's the same space point by point, but just with a different norm. The e to the beta t's, of course, bounded above and below on the interval from 0 to t. So I get the same x, just with a different norm. That's the only difference. OK, so these are the spaces. Now come the parameters of the BSE. So what was the F before? Terminal liability in the example will now be called xi. xi is the terminal condition, just abstractly terminal condition. It's an H2 vector with values in rm. The generator will, as opposed to my example, where it only depended on time and on the control, it will also depend on the variable y. So on the variable y here. Yeah? So I have a function which depends on randomness, on time, on y, and on z. z will, of course, now not be a one-dimensional object. So the Wiener process, which multiplies to it, is n-dimensional. So it will be an n cross m matrix, because it goes into rm, an n cross m matrix. So this is a matrix. So z takes its values in matrices. And, well, the target space of f is rm again. I will work under conditions that at 0, at vector 0, so at y equals 0, and at z equals 0, I have something which is square integral. And then I will suppose that I have uniform Lipschitz conditions. As opposed to my example in which z appeared quadratically, this is now not allowed. I will assume that there is a constant such that for all pairs of vectors. So y1 in rm, z1 in the space of matrices here, and y2, z2 analogously, I can estimate the difference or the increment of the generator by constant times y1 minus y2, where this y1 minus y2 is measured in the Euclidean norm of rm. And z1 minus z2, this Euclidean norm, is the Euclidean norm in this space of matrices, the Euclidean norm, or the Hilbert-Schmidt norm if you like, which is the same. OK. Generator f and terminal condition psi, fulfilling those measureability. OK, so I assume product measurable and adapted, of course, and then I assume h1, h2. So h1 is initial integrability, and this is Lipschitzianity. If that all is fulfilled, I speak about standard parameters. And for these standard parameter pairs, I want to find a pair of processes, yz, which satisfies the backward stochastic differential equation, which I wrote here in green. So terminal condition psi, and then I wrote it instead of in integral form, I wrote it in differential form. So dy t is equal to zt star dw. The star had to be added because I have vectors now, not scalars. So z star is a joint of the vector z, or the transpose of the vector z. 
It's multiplied by the vector dw, which is an n dimensional vector as well. So this is just a name of the scalar product of z and dw. And you might remark here that I do not take a minus sign, but I write a plus sign. This is for convenience. It does not matter because the vener process can be reflected at zero, and it's still a vener process. So it doesn't matter. And minus f t, y t, z t dt. So this is what replaces my generator here. OK, and this is the differential form. In integral form, I would write it in the forward way like this, or in the backward way like this with the f replaced by the psi. So I want to find a pair of processes, y t, z t, satisfying this backward stochastic differential equation. I hope you are not perturbed by the fact that I have to find a pair of processes as opposed to just one process for forward stEs. You always have to find a pair of processes, even if the generator is zero. So if the generator is zero, what does your equation express? The equation is nothing but the martingale representation of a random variable f or psi, if you like, by a stochastic integral with respect to a vener process. And in this martingale representation picture, you write your process z as the integrand, and your process y as the integrated process. So this pair is what we are looking for, even in this, well, nonlinear conditional expectation case. So the linear conditional expectation case is the case without a generator. And if the generator is on, then you have, in terms of Peng's theory, you have a nonlinear conditional expectation. The f makes it. OK, so we want to find a pair of processes fulfilling this equation. Now how do you go? The method is to use contraction and the contraction principle on a suitable Banach space. We just have to find this Banach space, and we have to show that we get a contraction by mapping approximative solutions on the next approximation of the solution, as you will see. OK, so for this purpose, for constructing a contraction on a suitable Banach space, we have to derive, first of all, so-called a priori inequalities. This is the general and most frequently used method of constructing solutions of BSEs. So how do we get a priori inequalities? We assume that we have two pairs of standard parameters, f1, psi1, and f2, psi2. And we assume, it's an a priori inequality, so we assume that we know already solution pairs for these two pairs of standard parameters. So for f1, psi1, I have a solution pair y1, z1, and for the other one, I have y2, z2. So these are solutions of our backward stochastic differential equation for the two pairs of standard parameters here. OK, now I want to know if my standard parameters deviate by a very small distance, how far will my solutions deviate as a consequence of this? So I define the difference of my two solutions, y1 and y2, the first component of the solutions. I call delta yt, y1t minus y2t. And here is the first mistake in this whole thing. Delta 2ft will be the comparison of the two generators, but on the solution pair of the second equation. So I should write y2t here and z2t here. Not y1, this is a mistake. I'm sorry for this. So you have your increment of the solution process, and you have an increment of the generators on the second solution. And then I claim that for lambda mu and beta, three parameters, lambda and mu are new. The beta is the old one, which was showing up in my norm. Remember, we have this norm here. So the beta norm, this is the beta from that norm. 
So I claim that for all triplets of pairs of this type, lambda and mu are new. We will see what they are, such that those conditions here are fulfilled. We will see these conditions appear in our calculations, so don't worry at this place. Maybe just one impression. So given the parameters lambda and mu, they are fixed for certain other purposes, we have to be able to choose our parameter but better in the norm of the solution big enough to have such an inequality. So these inequalities, you have to read them in the following way. So delta yt, y capital T, index 1 will be psi 1, and y2t will be psi 2. So the terminal conditions, so this is the difference of the terminal conditions of the two standard parameters here again. And here is the difference of the generators on the second solution. So I take the beta norm of the difference of the generators on the second solution multiplied by 1 over mu square and the expected difference squared of the terminal conditions. And I want to know if I know these two quantities, so difference of the terminal conditions and the difference of the generators, how far do the solutions deviate? And this inequality just tells me that d delta in the two beta norm squared is less than or equal to this quantity. The better appears and the mu appears. And for the second component, delta z, which will be defined in the same way. So delta z is y is z1t minus z2t. So for delta z, I have an inequality again in terms of the expected deviation of the terminal conditions and the expected deviation of the increment of the generators. I think I should have written, I should have extended this expectation to the delta 2t here. So this bracket here should be here. So the expectation extends on the second expression and down here as well. Now how do you get this? First thing to do is we have to take, well, we have to express delta y squared in some way by the standard parameters. So this is done by taking, well, first of all, first of all, I have to be sure that everything can be integrated so that everything is integrable. So in the first part of the proof, I show that the soup of yt is l2 integrable so that I can make my theory work. Why is that the case? So my solution process is here. So I can write it in integral form. yt equal y0 plus integral z ddwt minus integral f dt. So if you compare the supremum of yt with its description in the BSE, you get, so by the triangular inequality on the right hand side, you had psi plus the stochastic integral from little t to capital T. I would, yeah, OK, little t. And then the integral from 0 to capital T, absolute value, ds. So you can estimate the supremum of yt by the very solution by these three quantities. And now to handle this, you just have to use dupes inequality. You know how to do that. Yeah, you take this integral here. You decompose it into integral from 0 to little t plus integral from 0 to capital T. This gives you the 4 here. And then you have the soup taken care of by the soup here by this stochastic integral. So the dupes inequality gives you an inequality by this quantity here. And this quantity is integrable because we know that our yz is in this space. So this is why this quantity is integrable. Now we only have to see the integrability of this quantity here. But psi was in L2. And for f, I can argue in the following way. So for f, I have these two conditions. So I compare f at y and z with f at 0, 0, which is already integrable. And then I have f of yz minus f0, 0. 
And this can be dealt with by this Lipschitz condition. So basically, I get an estimate: f of y, z minus f of 0, 0 is less than or equal to C times y plus z in absolute value. But the y plus z in absolute value is, of course, integrable, because I was supposing that y and z are in these spaces. OK, so finished. So we know that this is L2 integrable, and therefore we get this integrability. Now it becomes more interesting, because now we have to see how to derive once again these inequalities here. So on the left-hand side, I have the increment of delta, the increment of y or the increment of z. And on the right-hand side, I just have the differences of the standard parameters: the difference of the terminal conditions and f1 minus f2 on the second solution. OK? How do you deal with that? OK, first of all, you use Itô's formula for this process, e to the beta s times delta y s squared. This is a process on my interval from 0 to capital T, and I use Itô's formula. So Itô's formula tells me I have to differentiate first the e to the beta s with respect to s, and then I have to differentiate the delta y s according to its stochastic dynamics, with the square here. And I go from little t to capital T. So the value at capital T of my process is this, minus the value at little t is this. So now comes, first, the integral from little t to capital T of e to the beta s differentiated with respect to s; this is beta e to the beta s, and the rest is preserved, delta y s squared ds. So this was the derivative of e to the beta s with respect to s. Now comes the dynamics of delta y s squared. What is d of delta y s squared? I can write it as twice delta y s times d delta y s, plus the quadratic variation; and d delta y s contains the difference of the generators at the two solutions, with a minus sign and a ds, and then comes delta z s dW s — this was the stochastic integral. The BSDE, you see it here, only contains the stochastic integral and the generator integral. So y1 refers to z1 here and f1 at y1, z1 here, and y2 goes with z2 and f2 correspondingly. So what you get here is just, well, this expression: twice delta y s times the Lebesgue integral part, which is f1 at y1, z1 minus f2 at y2, z2, scalar product, ds, with a minus in front, and then, with a plus, correspondingly the stochastic integral part, which is twice delta y, delta z, dW. So these are the red parts; I will treat them differently. And then comes the quadratic variation part of my stochastic integral. So e to the beta s is preserved. And what is the quadratic variation? The stochastic integral was delta z dW, so it's delta z squared ds. So this is the quadratic variation part. Now I collect my terms, the blue terms on one side. I take this to the other side, where this and this are already standing. So I get these three blue expressions, and they are equal to — and this has to be taken to the other side — so on this side I already have e to the beta t times delta y t squared, and on the other side, on the red side, I want to have my differences in the terminal values and my difference in the generators on the second solution. So here I have this quantity, and then comes this one, twice e to the beta s delta y s times f1 minus f2. And here is the stochastic integral. So here is the stochastic integral with a plus; I have to bring it to the other side, and then it comes with a minus, because I have to bring it to the other side. So this last one is that one, and the stochastic integral is here. Everything clear? If you want, I go to the board, but the expressions don't get shorter that way.
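Written out, the Itô computation just described gives the following identity; this is my reconstruction from the spoken description, with the signs coming from the dynamics d(delta y_s) = delta z_s^* dW_s - (f_1(s,y^1_s,z^1_s) - f_2(s,y^2_s,z^2_s)) ds:

```latex
e^{\beta t}|\delta y_t|^2
+\int_t^T e^{\beta s}\big(\beta|\delta y_s|^2+|\delta z_s|^2\big)\,ds
\;=\;
e^{\beta T}|\delta y_T|^2
+2\int_t^T e^{\beta s}\,\delta y_s\cdot\big(f_1(s,y^1_s,z^1_s)-f_2(s,y^2_s,z^2_s)\big)\,ds
-2\int_t^T e^{\beta s}\,\delta y_s\cdot\delta z_s^{*}\,dW_s .
```

Taking expectations kills the stochastic integral term, which is the next step in the argument.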
So what I will do now is, here I already have a quantity that I have to compare with on the right-hand side to get my a priori inequalities. Now I want to treat this remainder here to get my a priori inequalities. The first thing I do, or the first thing I can do is I take expectations of the whole thing, because in my a priori inequalities I have to take expectations. So I take expectations in this equation. And what that does is it lets this term drop off. So the only thing I face is the estimation of this quantity which basically contains the generators. OK. So I now show that this quantity here can be estimated by what I had here. So that's this one. Plus something that only depends on the difference of the generators on the second solution. In order to do this, you can already imagine that there are parts which depend on the delta y and the delta z. If they are produced during my estimation, I have to bring them to the left-hand side of my equation. And by choosing better large enough, I will be able to show that what I transport to the left side will just enlarge the quantity there so that I have an inequality for this. So that's the plan. OK. So I integrate in my inequality here, as I said, in order to get this martingale term drop out. And then I have just this. Why did I write less than or equal to here? I don't realize. I don't see this. I don't see the. I don't see. If I were writing absolute value of f1 minus f2 here, that would have been justified. As I see it now, it is even an equality. I just let the martingale part drop. So it's an equality. So now the main thing to do is to deal with this quantity here. I start here. So I have to compare f1 on the first solution pair with f2 on the second solution pair. The first thing is I use triangle gradient quality, and then I compare f1 on the first solution with f1 on the second solution. And then I have delta f2 here, because this will be f1 on the second solution minus f2 on the second solution. So this is exactly my delta f, delta 2f. You remember it was here. So delta 2f was here. f1 on the second solution, I said I told you this. I had a mistake here, typo. f1 on the second solution minus f2 on the second solution. So here we are. f1 on the first minus f2 on the second. Once again, it's f1 on the first minus f1 on the second plus f1 on the second minus f2 on the second. So this is delta 2f. And now I use Lipschitz continuity of my solution, of my generator f1. How does that work? So I get this quantity, you remember Lipschitz continuity was here. So I compare f1 on one pair with f1 on the other pair. It's less than or equal to constant times the comparison of the first components plus the comparison of the second components. So if I have f1 of y1z1 here and f1 of y2z2 here, I'm getting y1 minus y2 here, which was delta y. And here I get delta z. So this is exactly what I wrote here. This quantity, this red quantity is compared by c times. And then I have delta y plus a delta z. And the green part is just preserved. So now I have to continue estimating this. So this produces delta y and delta z. And I had the delta y on the other side or in these expressions, delta y and delta z. So I have to just separate the appearance of delta y and delta z in those expressions and the green part. I ultimately want only the green part to be on the right hand side. So how is that done? So here is this expression again. I write this expression again with this estimate here, with this estimate. So here it is. Now I'm using my estimate. 
So the delta y, I estimate the f1 minus f2 here that I'm encircling now by these quantities that I just saw. So c delta y, delta delta sy, delta z plus delta 2f. And then if I multiply out, I get a c times expectation of delta y square here. This is already something I want. But then I get a delta sy with a delta z. So a mixed term plus something in delta 2f, which is what I want. This mixed term still disturbs me. So now I use a further inequality to deal with this contribution here. This is already OK. I want to deal with this contribution. So I name this quantity here y, c, z, and this I name t. So this takes real values z. There is a c. And then there is a y. And I have a 2 here. So 2 times y times zz plus t, very elementary inequality. I multiply out here 2c yz plus 2yt. And then I treat these quantities here with an inequality. I can write that this is less or the c is preserved. And then the 2yz can be estimated by taking any constants mu at lambda positive. I can compare by y lambda square. So 2yz I can estimate by y lambda square plus z over lambda square. This is a very elementary inequality. So this is an inequality that deals with this term. The c is preserved. And the 2yt is estimated. So I have another independent quantity mu with y mu square plus t over mu square. Same elementary inequality. Now, down here I just collect the multiples of the different quantities. So I want y square and z square. So y square comes with c lambda square here and with the mu square here. So this is my y contribution. And then I have a t over mu square that will be the green part and z over lambda square with a c here. OK. Now, this is what I wanted to estimate. Here is the estimate. And now I'm plugging this estimate in. So here I'm plugging it in. So originally I had this quantity, which is copied here. Just with another color for this part, it's in blue and green. And now I'm using my inequality. So the first thing is just preserved. And then expectation of delta y delta z plus delta 2f. So I get by replacing y and z correspondingly and also delta 2f here with the right quantity, with the right multiples, one over lambda square, one over mu square, or these quantities here. I get so the blue parts are here and the green is here. And this, well, if I collect terms, so now I only have delta y square delta z square. Here I have another delta y square. And I have a delta 2f square. So I collect terms for the delta y square, it is 2c plus lambda square from here and mu square, which comes from here. So this is the quantity I have to multiply to delta y square, expectation of delta y square e to the beta s, of course. The delta z square comes with this quantity and the green part was already there. So now I go back to, well, essentially, this equation, which was modified by my estimates that I just derived, and I plug in everything. So what I get is, so I take the e, so what do I do? I take these two quantities and bring them to the other side of my inequality where I compare them with the result of the estimate of that, which I just got. So here you see. So I bring them to the other side that accounts for the minus beta here. The quantities, the c times 2 plus lambda square plus mu square, they were already there, and they result from my previous inequalities. And for the other term, for the delta z square, the minus 1 comes from taking it to the right-hand side, and the c over lambda square comes from my estimate. And the green part is just as it was. 
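The elementary estimate behind this step is, in formulas, just the inequality 2ab ≤ λ²a² + b²/λ² applied twice, plus the diagonal term coming from the Lipschitz bound in the y-variable (λ, μ > 0 arbitrary; my reconstruction of the board computation):

```latex
2|\delta y_s|\big(C|\delta z_s|+|\delta_2 f_s|\big)
\;\le\;
C\Big(\lambda^2|\delta y_s|^2+\frac{|\delta z_s|^2}{\lambda^2}\Big)
+\mu^2|\delta y_s|^2+\frac{|\delta_2 f_s|^2}{\mu^2},
% and hence, adding the term 2C|\delta y_s|^2 from the Lipschitz estimate in y,
2\,\delta y_s\cdot\big(f_1(s,y^1_s,z^1_s)-f_2(s,y^2_s,z^2_s)\big)
\;\le\;
\big(C(2+\lambda^2)+\mu^2\big)|\delta y_s|^2
+\frac{C}{\lambda^2}|\delta z_s|^2
+\frac{1}{\mu^2}|\delta_2 f_s|^2 .
```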
Now you see that if I choose my beta large enough here, I will have that this quantity in square brackets is negative. If I choose beta large enough, then this quantity will be negative. And if I choose my lambda large enough, so let me start with lambda. The c is given. I choose my lambda large enough so that this quantity is negative, so that I can omit it on the right-hand side, because everything else is positive. Then chosen lambda, I can choose also the mu and then the beta large enough so that this quantity is negative so that the whole thing results in, well, the only thing that is preserved is this quantity, this quantity plus the green part. So here we are. This quantity plus the green part. So I've estimated my derivation of delta by the derivation of the terminal condition and the derivation of the generator on the second solution. So this is what I wanted to come up with. So this is my claimed inequality. I just recall. So here I claimed it. And just by estimating this quantity in the right form and completing squares or so, I came up with just this estimate. So if I now want the first inequality of my a priori inequality, what I have to do is I have to integrate in time. So far, I only integrated with respect to the expectation. And then I have to integrate in time and I will get the right quantities. The integration in time on the right-hand side, so on this quantity, it will just produce a t because it does not depend on time. So here I get a t. And if I integrate in time here, I change variables and I get this quantity here, this quantity here. So now I have an estimate of delta y with the difference of the terminal states and the difference of the generator on the second solution to derive this one. I do basically the same thing. I use my inequality that was just derived. So the second inequality, I take this term to the left-hand side. I do not take this term into account at all. And since I was choosing my lambda large enough, on the left-hand side, this quantity will be positive. I'm just taking lambda squared over c minus lambda squared to the other side and I get this. That's it. So I have these two inequalities. Now this is the a priori inequality that leads us directly into the solution or the existence of a solution of the BSE. So here we are. If I have standard parameters, then there exists a uniquely determined pair, yz, of solution processes with these properties. So that is just the BSE. OK. So I said in order to prove the existence of a solution for this BSE, we need a contraction on respective Banach spaces. So what do we do? We take an approximative solution, yz, little y, little z. And we map it to something we get by using the martian representation as I will just show. So I take one approximation of a solution, little y, little z, and I find processes, big y, big z, fulfilling this equation. So this is an approximation of my BSE. If I had the fixed point, I would have my solution of the BSE. So here I just take one solution pair, one approximative solution pair, and I produce the next one. And this gives rise to the following mapping. So I take little y, little z, previous approximation, to big y, big z, consecutive approximation. And I just have to show that this mapping is a contraction on a Banach space. A fixed point will then give me a solution. Because then I will be able to have the same y and z here as here. Then I have a solution. OK? And the contraction property is just given by the a prior inequality that I was just proving. 
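For the record, the a priori inequalities obtained this way read roughly as follows. This is my reconstruction of the slide, with the conditions λ² > C and β ≥ C(2 + λ²) + μ² read off from the coefficients above; the factor e^{βT} in front of the terminal term may be placed slightly differently in the original.

```latex
\|\delta y\|_{2,\beta}^2
\;\le\;
T\Big[e^{\beta T}\,\mathbb{E}\big[|\xi^1-\xi^2|^2\big]
+\frac{1}{\mu^2}\,\|\delta_2 f\|_{2,\beta}^2\Big],
\qquad
\|\delta z\|_{2,\beta}^2
\;\le\;
\frac{\lambda^2}{\lambda^2-C}
\Big[e^{\beta T}\,\mathbb{E}\big[|\xi^1-\xi^2|^2\big]
+\frac{1}{\mu^2}\,\|\delta_2 f\|_{2,\beta}^2\Big],
% where \delta_2 f_s=f_1(s,y^2_s,z^2_s)-f_2(s,y^2_s,z^2_s).
```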
That's basically everything in that whole story. OK. First of all, I will have to show that this equation star has a solution. So given y and z in those spaces here, I will have to produce big Y and big Z, again in these spaces, so that this equation is fulfilled. How is that done? So this process is basically given, and to obtain this equation, one has to find a clever way of applying the martingale representation theorem. So this is how you go. First of all, by our assumptions, xi plus an integral of f on my previous solution, little y, little z, will be in L2. I just recall what we did in order to show this. Xi is in L2, and f on y, z can be estimated by f of 0, 0, which is in L2, plus y in absolute value plus z in absolute value. But the y and the z in absolute value are in those spaces, so they are square integrable as well. So this is how I get that this whole object is square integrable. Now what I take is a martingale representation for the quantity xi plus the integral over the entire interval of my generator on the approximative solution little y, little z. So I take this quantity. I know it is integrable, so the martingale representation gives that this process here, the conditional expectation, so projected down on F t, is a martingale. It's even a square integrable martingale, because we know square integrability for this. So it's a continuous one; we are in the Wiener filtration. And so the martingale representation theorem provides even a process Z which fulfills this equation. So here we have a martingale in the Wiener filtration, and for the Wiener filtration we have martingale representation, so that we can represent our M by this equation. Now comes the definition of Y. So I take my martingale minus the integral of the generator on the solution with the small letters, ds, up to time t. I claim that Y is square integrable and that we have this equation. Now, why is that so? OK. You can write your Y as M. But M can be written in this way, so conditional expectation of xi plus the entire integral of f ds. And then you have to subtract the integral from 0 to little t from this integral, so that the integral from little t to capital T remains besides xi. So this is another representation of Y. OK. And then Y at capital T will just be xi, because this integral is then a trivial one. So it's xi; Y at capital T is xi. And Y at capital T, on the other hand, can also be described by M at capital T minus this integral. So M at capital T is M 0 plus this, minus this integral from 0 to capital T, so that Y at little t can finally be expressed by subtracting Y at capital T, which is xi — this is that quantity here — and adding it again, and doing the same trick here. So what you get is: the M 0 drops off, the xi is preserved, the stochastic integral goes from little t to capital T, and also the Lebesgue integral here goes from little t to capital T. So it's this expression. But this is exactly what we wanted. So this is the equation that we were claiming here. So we have constructed the pair capital Y, capital Z, given little y, little z, so that this equation is fulfilled. Now I have to show that it is a contraction. For that purpose, what I do is the following. I compare little y's, little z's with capital Y's, capital Z's. So I take a pair little y, little z, and another pair.
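Before the comparison, here is the construction that was just described, written out; a sketch, with all conditional expectations taken in the Wiener filtration and (y, z) the given approximating pair:

```latex
M_t=\mathbb{E}\Big[\xi+\int_0^T f(s,y_s,z_s)\,ds\;\Big|\;\mathcal{F}_t\Big]
=M_0+\int_0^t Z_s^{*}\,dW_s
\quad\text{(martingale representation)},
\qquad
Y_t:=M_t-\int_0^t f(s,y_s,z_s)\,ds
=\mathbb{E}\Big[\xi+\int_t^T f(s,y_s,z_s)\,ds\;\Big|\;\mathcal{F}_t\Big],
% so that, comparing with the value at T,
Y_t=\xi+\int_t^T f(s,y_s,z_s)\,ds-\int_t^T Z_s^{*}\,dW_s .
```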
And then I take capital Y, capital Z correspondingly, with version one, version two, solutions to my star equation that I was just showing. OK? And then I can apply Lemma 1. And now you will be surprised: C equal to 0. The C was the Lipschitz constant for my generators. Why can I take it to be 0? Because my generators now do not depend on the processes I produced, the capital Y, capital Z, just on little y, little z. Yeah? So I can apply my first lemma with C equal to 0, beta equal to mu squared, and with these generators — which depend only on omega and t — in the role of the f1 and f2. So there will be a difference of the generators on the second component, but there will otherwise be no difference, because they do not depend on capital Y and capital Z. So this is what my lemma gives for the increment of Y, and this is what it gives for the increment of Z. So I just have a beta in the denominator and the difference of the generators, so the difference of the generators on the second solution. The second solution is trivial here, yeah? This is just the difference of the generators. Now I add the two — yeah, Lipschitz continuity of f first of all. Lipschitz continuity of f now allows me to replace this difference of the generators by differences of little y, so delta little y, y1 minus y2, and z1 minus z2, delta little z, also in the two-beta norm. So this is the first inequality, and here is the second inequality, and finally I can add these two inequalities together in order to get a contraction, again for beta large. So if I add the two inequalities together, I get this, with this constant: you have the beta in the denominator in both expressions, and 2 T C squared plus 2 C squared in the numerator. This is here, and beta in the denominator. And here you have the delta capital Y and the delta capital Z compared with the delta little y, delta little z, and this is my quantity that I can get less than one by choosing beta large enough. So again, by choosing beta large enough, I can obtain a contraction property for my pairs of processes with these norms. So if I choose beta large enough — I think I said beta bigger than 2 times 1 plus T times C squared — if you choose it just like that, so that beta is bigger than this quantity, this quotient here will be less than 1, and I have a contraction. And if I have a contraction, the rest is trivial; the fixed point will solve my equation. You do not have to read this. And uniqueness is again a consequence of the contractivity of Gamma and the uniqueness of the fixed point. So we have existence and uniqueness due to the a priori inequalities. And this is the standard sequence of arguments for solving BSDEs: you derive an a priori inequality, and then you show contraction, something like that. Yeah? Now I'm looking at the watch. I had in mind to tell you something about — well, OK, you might now ask yourself, as in the SDE case, are there cases in which I can construct the solutions explicitly? In the BSDE setting, this is usually not possible anymore. There is one exception, which is linear BSDEs. I think René Carmona has considered a linear version of BSDEs in his setting this morning. I could do that now, but I think we are now over 1 hour and 45 minutes, so I could also stop. Yeah? I have shown you the main things that I wanted to.
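To record the contraction step in formulas: writing ΔY = Y¹ − Y², Δy = y¹ − y², and so on, adding the two estimates gives, as far as I can reconstruct the constants,

```latex
\|\Delta Y\|_{2,\beta}^2+\|\Delta Z\|_{2,\beta}^2
\;\le\;
\frac{1+T}{\beta}\,\big\|f(\cdot,y^1,z^1)-f(\cdot,y^2,z^2)\big\|_{2,\beta}^2
\;\le\;
\frac{2C^2(1+T)}{\beta}\,\Big(\|\Delta y\|_{2,\beta}^2+\|\Delta z\|_{2,\beta}^2\Big),
```

so the map Gamma is a strict contraction once β > 2C²(1+T); the exact threshold quoted in the lecture may be normalised differently, but all that matters is that the quotient can be made smaller than one for β large.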
OK, so just what, so if you solve linear BSEs, you can solve them explicitly, and from that, you can derive the comparison principle, which says that if you can compare terminal variables, xi1 and xi2, if xi1 is less than xi2, and if the generators are the same or ordered in the same way, then the solutions will have the same order. This is the comparison principle. Now, with the comparison principle, you have an entrance to solving the problem that I was explaining in the first part of my presentation with the quadratic BSEs. So if you have quadratic BSEs, BSEs that just fulfill local Lipschitz conditions, you would like to approximate their solutions by solving BSEs with global Lipschitz conditions. So in this setting, you would do the following. You would truncate your generators by generators with global Lipschitz conditions in a way that makes the comparison principle applicable, so that after truncating, you will have solutions, you will have a whole sequence of solutions. By comparison, you can show that they increase or decrease whatever is needed, and then by local limits, you get a solution of the quadratic BSE by the comparison principle. The comparison principle can be derived via the explicit solution for linear BSEs. So this is missing, but I think time has it last. Thank you very much. Thank you for this very nice introduction. So we have time for some questions. So in slide 21, can you please explain once again how the first line belongs to L2? Okay, so psi is part of a standard parameter, so this belongs to L2. And then, okay, you can estimate this integral by integral from 0 to T, absolute value of s at y as zs. Okay, now your generator possesses Lipschitz continuity, is Lipschitz continuous, and its state, so f of y, if I replace y with 0 and z with 0, then f of 0, 0 will be square integral. Okay, and then I can compare f of y, z with f of 0, 0 in the following way. So I can write absolute value of f of y, z less than or equal to triangle inequality f of 0, 0, plus f of y, z minus f of 0, 0, and f of y, z minus f of 0, 0. For this, I use the Lipschitz continuity. I can estimate it by constant times y plus z absolute values. So I can estimate y and z, they are already known to be square integral because I was taking them in this product space. So that's the point, okay? Thank you. Yes? So my question is, is there any regularity result about the solution? Yes, y is continuous and z need not be continuous. Okay, but what would you expect? Like holder continuous, something like that. Yeah, I think you can derive it from the Heldner continuity properties of the Brownian motion. So it would, I guess, have the same Heldner regularity as a stochastic integral. Okay, thank you. Any other questions? Okay, if not, let's thank the professor. Excuse me. Oh, there's another one? Okay. Yes, for the optimal martingale problem at the beginning, in fact, when you arrive to the result, pt is the projection of zt plus this term. In fact, in this term, there is zt, the term zt at the beginning. Okay, yeah. Yes, it's the projection of, so pt star, it's the projection of zt plus zt. So here I didn't, well, I understand zt, we look for zt or it's not given. But I think it's given, zt plus and zt are given and I want the optimal strategy. But so here zt is given when you rewrote it, in fact. Okay. So for your BSE, you need the generator. Yes. And the generator was this here. And the distance from the constraint set to zt plus one over alpha theta. Okay, the solution we found. 
So the generator will depend on z t and on time, but not on the optimal strategy. The optimal strategy will then be chosen. If that was your question. Yes — no, but I had asked whether we are looking for a link between the martingale problem and the usual resolution of BSDEs. No, the link was that the martingale optimality principle is expressed by the choice of the generator of a particular BSDE. That was my point. Okay. Any other questions? Okay. Thank you again. Thank you.
Backward stochastic differential equations (BSDEs) have been a very successful and active tool in stochastic finance and insurance for some decades. More generally, they serve as a central method in applications of control theory in many areas. We introduce BSDEs by looking at a simple utility optimization problem in financial stochastics. We derive an important class of BSDEs by applying the martingale optimality principle to solve an optimal investment problem for a financial agent whose income is partly affected by market-external risk. We then present the basics of the existence and uniqueness theory for solutions of BSDEs whose coefficients satisfy global Lipschitz conditions.
10.5446/57387 (DOI)
Thank you very much. I should start with thanks and apologies. So thanks to the organizers, the school organizers, for inviting me to this great place. When I got the invitation from Jean Francois and Francois, they forgot to mention how nice this place was. If they had told me, perhaps I would have come for a longer period. Unfortunately, I'm organizing as we speak another meeting in Durham, so I should be back in Durham, in the North of England, and I'm hoping that they will not notice my absence. Actually, I will be quite upset if they don't notice my absence, but that's another story. Okay, so it's a great place and it's great to be here. And, looking at the audience and the people that came to the talk, I feel overdressed, and I apologize for that. You know, Durham has a different climate from the climate here. It's true that Peter offered his shorts this morning, but I politely refused that, so there you are. So I'm going to tell you a little bit about cubature methods. Of course, in one hour it would be quite difficult to go into any depth to explain cubature methods. What I'll try to do is just give you the flavor of the subject, and the message is that cubature methods are a very versatile methodology that can be used to solve a variety of PDEs and stochastic PDEs. And the common denominator of all these PDEs and stochastic PDEs is Feynman-Kac representations. Indeed, many, many of these PDEs admit a Feynman-Kac representation. And once you have a Feynman-Kac representation, you are in business. There's lots of things that you can do once you have a Feynman-Kac representation. You can go ahead and... It's two hours. Two hours. You have two hours. I have two hours. Excellent. Right. So I was surprised to be introduced as having a course of one hour, but you know, you never know. So will I be able to have a break in between? Sure. Sure. Excellent. Right. So okay, coming back to the two-hour lecture. So the common denominator of everything that I'm going to tell you, as I said, is a Feynman-Kac representation. And it's an amazing object for many reasons. Once you have a Feynman-Kac representation, you can go and analyze the equation from a theoretical perspective. You can say, well, okay, what can we say about the equation once you have this representation? You can analyze its smoothness, but you can do a lot more. As I will point out, you can also ask: suppose you start with a Feynman-Kac representation; can you show that the equation has a solution in some sense? And I'll say a little bit about this. But because this is a school on numerics, I will talk mostly about how you use the Feynman-Kac representation in order to do numerics. And this is where cubature methods become very powerful. And the reason why they become very powerful is that, as you will see, all of these Feynman-Kac representations have a common denominator, and this is Brownian motion. And the cubature method simply is a method that provides an approximation of the Wiener measure. And in all of the examples that I'm going to present next, what you will see is that the methodology has three parts, three steps, but somehow you do all three steps in one go. The first step is to replace the underlying Wiener measure with a cubature measure. And this is the cubature representation.
And then there's another step which takes the cubature measure and reduces it to a sub-measure that has a fixed number of paths. And I'm going to explain that. And this is how you control the computational effort. So these two steps are common, these two steps are common to all the examples that I'm going to give you. The only difference is the way in which you discretize, you approximate, the Feynman-Kac representation. Each of the individual PDEs or SPDEs that I'm going to look at will of course have its own Feynman-Kac representation, and you have to take that representation and approximate it. So the only thing that is going to be different for each of the examples that I'm going to show you next will be the actual Feynman-Kac representation that you have and how you discretize it. And then, on top of this, you use the other two steps, i.e. the approximation of the Wiener measure and the control of the computational effort. The examples that I will present — so I have three examples, so it's great that I have two hours to go through these three examples. The first one is semi-linear PDEs, and this is joint work with Konstantinos Manolarakis; it's work that we've done some time ago. So the template for each of these three examples will be the same. I'm going to show you what the Feynman-Kac representation is going to be — so this is going to be in terms of a forward-backward stochastic differential equation. I'll tell you how you discretize, and then give you the main result, which will be the error that you get if you do this approximation, which involves these three steps; it depends on, it comes from, each of the three steps, and I explain what each of the three steps involves. And then I'm going to give you an example to see what sort of results we get — essentially a numerical implementation. So this is the first example, the semi-linear PDE. The second example, this is joint work with Salvador Ortiz-Latorre. This is for linear parabolic stochastic PDEs with multiplicative noise. And the reason why we look at this example is because this type of stochastic PDEs appears in the area of filtering. So in stochastic filtering, you deal with things like conditional distributions of signals given observations, and the result there is that you end up with a stochastic PDE, and you want to solve this PDE, and very naturally you will have a Feynman-Kac representation for this PDE, and therefore you can use the whole methodology developed here in order to solve the PDE and therefore solve the filtering problem. And the third example is the most recent work, which was finished last year with my student Simon Montgomery. So here we're looking at McKean-Vlasov PDEs, and here the connection will be that the Feynman-Kac representation will be in terms of a non-linear diffusion, and again I'm going to explain here how you discretize this non-linear diffusion, what theorems are there, and then a numerical implementation, and then I'll complete the course with some final remarks. So that's essentially the plan for the course. Okay, so let me start with this sort of general, generic introduction to Feynman-Kac representations, and I'm sure that a lot of people in the audience have heard of at least the general Feynman-Kac formula for the heat equation.
Essentially, the heat equation is not the only equation that admits a Feynman-Kac representation; there's a whole lot of equations out there that admit a Feynman-Kac representation. So what is a Feynman-Kac representation? It is essentially a formula that connects the solution of a PDE, or the solution of an SPDE, with the expected value of a functional of a stochastic process. And most of the time you can reduce this to the expected value of a functional of a stochastic process where the process itself is Brownian motion. So the way in which I try to put this to a non-specialist audience is that these Feynman-Kac representations provide the dictionary, if you will, between phenomena that happen at the macroscopic level and phenomena that happen at the microscopic level. And because we are probabilists, because we like stochastic processes, we know a lot of things related to Brownian motion, related to stochastic processes. And the way in which we try to get to the macroscopic phenomena is by coming from the microscopic level upstairs. And this is really fascinating, and at first sight — I've given here seven or eight examples of PDEs or SPDEs that admit a Feynman-Kac representation — at first sight it's really surprising to see that all of these things model phenomena that are essentially quite different in nature, and yet all of these things at the macroscopic level have a common denominator at the microscopic level, and that is this object called Brownian motion. Of course, once you know a little bit about this, you realize that it's not really surprising: somehow in your PDE you have a Laplacian, in disguise or by itself, and then it's not surprising that there is a Brownian motion at the microscopic level that is connected to all of them. So the three examples that I will look at are written in red here. For the SPDE that I'm going to look at, this is the Zakai equation, and the actual representation was introduced by Duncan, Mortensen and Zakai in 1970. For the non-linear diffusion, I'm putting here Gärtner in 1988, but I'm happy to change that if people think that there was an earlier representation of this McKean-Vlasov PDE in terms of a non-linear diffusion. For the semi-linear PDE, it was Pardoux and Peng who first came up with the representation of the solution of the semi-linear PDE in terms of forward-backward equations. So basically, once you've seen this, if there are students in the audience who might want to try other PDEs using this methodology, just take your pick. Choose one PDE which has a Feynman-Kac representation and see if you can do it. Cubature methods are really versatile methods — that is the message — that can be used to solve any of these PDEs as long as you have a Feynman-Kac representation. So let me come back to this. In abstract form, the Feynman-Kac representation is simply a representation of the solution of the PDE or the SPDE in terms of an expected value. So this is an integral over the path space: you take the Wiener measure, and then you have a functional which you integrate with respect to the Wiener measure on the path space, where the Wiener measure resides on the set of continuous paths. So in all of these PDEs, the only thing that's going to be different is the functional. The Wiener measure is going to be the same.
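In the abstract form used throughout the talk, the representation reads as follows, where Φ^{t,x} stands for the path functional specific to the particular PDE or SPDE; this is only the generic template, not any one of the concrete formulas:

```latex
u(t,x)\;=\;\int_{C_0([0,T];\mathbb{R}^d)}\Phi^{t,x}(\omega)\,\mathbb{W}(d\omega)
\;=\;\mathbb{E}\big[\Phi^{t,x}(W)\big],
\qquad \mathbb{W}\ \text{the Wiener measure},\quad W\ \text{a Brownian motion}.
```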
And therefore, one step in the approximation will be to replace the Wiener measure. And a cubature approximation, a cubature method, replaces the Wiener measure. So that is where a cubature method will come in and be used here. OK. So before I go on and tell you about the numerical approximation of these PDEs using cubature measures, I'd like to tell you a little bit about how you can use the Feynman-Kac representation for analyzing these PDEs or SPDEs theoretically. And the connection is really Malliavin calculus. In effect, what you do, you say, OK, if you want to analyze the smoothness of the PDE or the SPDE, then you can go on and look at the smoothness of the functional that appears in the Feynman-Kac representation, in the Malliavin sense. And so even though it's a much more complicated methodology to use Malliavin differentiation, it can lead to very, very powerful results. And I have a very simple example here to try to explain how exactly you use differentiation in the Malliavin sense to analyze the differentiability of the PDE. So the example is the simplest possible example, the heat equation. And so what you do here — so I have the heat equation, which we all know about. And the heat equation, coming from Feynman-Kac, has this Feynman-Kac representation. So phi is the initial condition, and the solution at time t, evaluated at point x, is the expected value of phi at x plus W t, where W t is the Brownian motion. So that's a Feynman-Kac representation, and I have this representation in this way; it's actually quite simple to deduce this representation. OK. Now, once you have this representation, you can rewrite this — because here, luckily, you have an explicit formula for the density of the Brownian motion — so you can write this as an integral of phi with respect to the corresponding Gaussian density. I'm not going to look at this; at this point in time, this is the representation. And then what you can do, you can differentiate. You can differentiate this with respect to x, differentiate under the expected value, and then you can use Malliavin calculus in order to get an expression for the derivative of the PDE in this form. So it's not just that the solution of the equation itself has a Feynman-Kac representation; it's also the case that all derivatives have a Feynman-Kac representation. You show that all the derivatives have a Feynman-Kac representation in this form. And then you can say, right, once I have this representation for the derivatives, I can try and control the derivatives. I can control the various norms of the derivatives in terms of the properties of the functional in the Feynman-Kac representation. So, a very simple example: once I have this representation here, I can say, well, suppose phi is bounded. If phi is bounded, then I can say that the supremum norm of the derivative of u is bounded by the supremum norm of phi, and then what is left inside the expectation is just the expected value of the absolute value of the Brownian motion divided by t. And I know what the expected value of the absolute value of the Brownian motion is — it is square root of t times some constant. And then, all of a sudden, I find an upper bound very easily. I find an upper bound for the supremum of the derivative.
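For the heat equation example, the formulas being described are, as far as I can reconstruct them (one space dimension and the normalisation ∂_t u = ½ ∂_{xx} u, so that the representation uses a standard Brownian motion):

```latex
u(t,x)=\mathbb{E}\big[\varphi(x+W_t)\big],
\qquad
\partial_x u(t,x)=\mathbb{E}\Big[\varphi(x+W_t)\,\frac{W_t}{t}\Big],
% hence, for bounded \varphi,
\|\partial_x u(t,\cdot)\|_\infty
\;\le\;\|\varphi\|_\infty\,\frac{\mathbb{E}|W_t|}{t}
\;=\;\|\varphi\|_\infty\,\sqrt{\frac{2}{\pi t}} .
```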
Of course, this is a very complicated way to compute, to estimate, to find the bound for the supremum of the derivative. In this particular case, I could have looked at the fundamental solution, I could have looked at this representation directly, and then I could have differentiated it with respect to x; I could obtain the same formula by differentiating the fundamental solution. So there isn't much to be gained in this particular example. But it turns out that you can do the same argument for much more general setups. You can do the same argument for general PDEs and SPDEs, and therefore you can look at their properties. So in general, you will never have an explicit solution, but you will still be able to apply this methodology, the Malliavin integration by parts, in order to get the expression for the derivative. And you can then study the smoothness of the solution in this way. So this methodology is very powerful. And in fact, this was the reason why Malliavin introduced it: he wanted to use a probabilistic methodology in order to show that the solution of a PDE is smooth under the Hörmander condition. And his program was continued in the 80s by Kusuoka and Stroock. What Kusuoka and Stroock did was to take this idea and take the program further, to a case when the PDE no longer satisfies the Hörmander condition. So you have a PDE where the Hörmander condition is generalized to a condition which is weaker, which is called the UFG condition. Essentially, under the UFG condition, you can have many PDEs which do not satisfy the Hörmander condition, but they satisfy the UFG condition, and the same methodology, the same analysis, applies there. And following their program, you can show that PDEs that do not necessarily satisfy the Hörmander condition are still smooth. They are not necessarily smooth in all the directions, but they are smooth in certain directions which are generated by a certain Lie algebra. And these results that Kusuoka and Stroock developed cannot be duplicated using PDE methods. So these results — the probabilistic methodology that Kusuoka and Stroock introduced — cannot be repeated, cannot be duplicated, using standard deterministic methods, because standard deterministic methods have to use essentially something like the Hörmander condition, or something that eventually leads to a system that satisfies the Hörmander condition. It's a very powerful methodology. And you can go further. You can say, okay, now suppose I go the other way around. Suppose I start with representations like this. Is it the case that if I have a representation like this, I have a solution of the corresponding PDE? So rather than going from the PDE to the representation, you start with the representation and ask: under what conditions does this representation give me a solution of the PDE? And the reason why this is powerful is because the representation in itself can be defined under very general conditions. And as a result, you can hope to show that the corresponding PDE has a solution under very general conditions. And this is the case. And you can do this and show it for a variety of PDEs where you have no hope to show that they have a solution by using classical methods. You can use this Feynman-Kac representation to show that they actually have a solution in a certain sense.
So Kusuoka and Stroock developed this program, as I said, in the eighties. Kusuoka came back and refined their methodology in 2003. And also, the whole cubature methodology is based on the results that Kusuoka and Stroock developed. And it was unavoidable that, once I started to look into cubature methods, I realized that their theoretical analysis is so powerful that it can be extended in a variety of situations. And I have a number of results, together with my collaborators, that extend their classical results. And this is for linear PDEs. But you can go and try to repeat their program for both semi-linear PDEs — and this is work with Francois — and for McKean-Vlasov equations, and this is work with my student, Eamon McMurray. And essentially, you have to do the theoretical analysis first, before you do the numerical analysis, because the numerical analysis hinges on errors, on bounds, that you have to first develop theoretically in order to use them in the error analysis of the corresponding methodology. Okay, so that was the little bit that I wanted to say about the theoretical analysis of the PDEs. Now we move on to the numerical analysis of these PDEs based on the Feynman-Kac representation. So let me go back and say: right, suppose I know that I have my PDE or SPDE, and this PDE or SPDE has a Feynman-Kac representation. In other words, I can represent it — the solution of the PDE or SPDE at time t, evaluated at point x, is written as an integral over the Wiener space of some functional, integrated with respect to the Wiener measure. How do I use this representation in order to produce an approximation? And it's actually, if you think about it, very intuitive. What you need to do is approximate everything you see inside. So you have to, first of all, approximate the Wiener measure. The Wiener measure is a very complicated object that is defined on the space of continuous paths, and it's actually very, very difficult to work with the Wiener measure in itself. Okay? So one way to do that is to just sample from the Wiener measure, and I'll come back to this. We don't want to sample from the Wiener measure — that's the message: it is better to use the cubature method rather than sample from the Wiener measure. Now, the next step would be to take the functional that you end up with in the Feynman-Kac representation and try to discretize it, try to replace it by something which is a lot more amenable to numerical approximation. Okay? And this step is the step that is different for every case that you look at. So what's going to be common will be the approximation of the Wiener measure and the way you control the computational effort, and what is going to be different in each of the cases is how you actually approximate the functional. Okay? So these are the first two steps — so this is what I'm saying here: it's a three-step scheme, but you'll not see three steps when you implement this; you do everything in one go. So step one, part one: replace the Wiener measure with another measure which is going to be discrete, because this is a measure on the space of paths. So we're going to deal with a collection of paths, right?
And so this is going to be a measure which will correspond to another process which I'll call it W tilde, and this process W tilde will approximate in some way the winner process, the Brownian motion. And then the next step, as I said, is just sort of, is going to be to replace the functional with a simpler version that you will be able to integrate with respect to this simplified winner measure, and then the final step will be to say, okay, so I have all of these, I have all of these places, I have all these steps in place, but then I know that my laptop or my desktop or my parallel computing machine cannot afford more than this amount of computational effort. That means that I cannot have more than 100,000 paths. Now how do I sample from this simplified measure so that my computational effort is kept fixed? And this is, so this you have to, you can control the computational effort using a methodology which we developed for a different reason. Come back to this, and we call it the TBBA, the T-Base Branching Algorithm, and it was a very unfortunate name because people are used to it, much more appealing names, divide and conquer, that sort of thing, so it didn't take off. We think because of that. Okay, so before I go on and explain the Cubature Method, I have a sort of nice representation here to try to explain what Cubature Measure actually does. So what I have in here, what I have in here, I have a numerical, so I have 10 independent Brownian paths. So you can think of these independent Brownian paths. These are the typical paths, typical paths for the winner measures. If you sample from the winner measure, you get paths that look like this. So what you could do, you could say, okay, I'm going to replace my winner measure here with a measure which sits on these 10 independent Brownian paths. This is a normal Monte Carlo approximation for such an expected value of a function with respect to the winner measure. That's fine. The way, so this is what you see in here, these are typical paths. A Cubature Measure doesn't use these typical paths. A Cubature Measure uses something which I call representative paths. So the representative path is nowhere near a typical path. Representative paths, the exemplary representative paths are paths that are piecewise linear, paths that are piecewise linear. So in what way do these paths that are piecewise linear, in what way do they approximate these typical paths? In what way a measure that resides on paths that look like this approximates the winner measure? Well, the common denominator for this is the signature of the winner measure, the signature of the Brownian motion, the signature of a path. So what I'll point out is that these paths and these paths, they share a part of their signature. And I will explain what signature means, but for the moment, the way in which I sort of go and try explaining what this means to a non-specialist audience is by means of a comparison with DNA. Because you go and, you know, if you, I'm sure the same thing applies here in France, in Britain, if you want to go and explain to people that want to give you some money to fund your research, they come to and say, Mr. Bloch comes to you and say, well, okay, can you explain to me in 500 words or less with no mathematics what it is that you want to do with our money? And so you try to find out an explanation which doesn't use any mathematics. 
And the one that I was using was to say, well, okay, so think that you want to find out details about sort of human beings, how they evolve, how they behave. So obviously what they have in common is the DNA, right? But of course, you don't want to do any experiments on the full DNA. So you try to use things that sort of take part of the DNA. You look at, you look at approximations. So for example, you could look at mice and you can study mice. And the connection is going to be that mice and men, they shared part of the DNA. There's a truncated part that is common to both mice and men. And then you do your experience, you do your study on mice. And sometimes this is a success, successful, they like it, sometimes they don't. I tried it on my kids as well. And my kids liked it very much. And when I explained them, you know, this comparison with mice and men, the next day my kids came to me and said, okay, daddy, we want to come, we want to study cubiture measures. And I was very pleased with that. And then they said, we want to come to see your office. And I didn't realize why they wanted to come to see my office, but I said, well, you know, we are not to try to educate my kids. So I took them to my office and they were very disappointed because they were expecting to see lots of mice. So yeah. And then they said they decided that actually they don't like cubiture measuring. But anyway. So coming back to the story here is that a cubiture measure produces paths that resemble a brownian path through its signature. So what is the signature of a path? So the way in which this is presented, it's a very abstract way, is in terms of a tensile algebra or no combative polynomials. But I'm not going to go into all of these things. So it comes out of the work of Chen. So to try to explain it directly, what you do, you take a path. So suppose you have a path which has bounded variation and you compute all the iterated integrals of this path. So assume that you are in RD. You have a path in RD. And then what you do, you compute all of the iterated integrals, first order, second order, third order, and so on, of this path, all the iterated integrals that involve all the components of the paths. And there are lots of them. And then you take all these iterated integrals and you put them in separate boxes. And you obtain a massive, massive object that makes a record of all these iterated integrals. So you can write all these. You can sum them up in direct sum because there is no proper summation in there. And that's the signature of the path. So in terms that get away from algebra, the signature of a path is an object that makes a record of all the iterated integrals of the path. And of course, you have to make sure that this iterated integrals make sense. And of course, if the path has bounded variation, the iterated integrals will make sense. So the signature of a path, once again, is an object, is an algebraic object that makes a record of all the iterated integrals of the path, all the components, iterated integrals of all the components of the path. So that's the signature of a path with bounded variation. And then the next thing that you can do is to truncate this object. So remember, eventually, I'm only going to care about the truncated signature. So to truncate this object, you just say, OK, I'm not going to be interested to compute all the iterated integrals of all the orders. I'm only going to be interested to compute the iterated integrals up to some order m. 
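In symbols, the object being described is, schematically (d components, with the integrals understood in the Riemann-Stieltjes sense for bounded variation paths):

```latex
S(\omega)_{0,T}
=\Big(1,\ \int_{0<t_1<T} d\omega_{t_1},\ \int_{0<t_1<t_2<T} d\omega_{t_1}\otimes d\omega_{t_2},\ \dots\Big)
\ \in\ \bigoplus_{k\ge 0}\big(\mathbb{R}^d\big)^{\otimes k},
% the entry of degree k collects all iterated integrals
% \int_{0<t_1<\cdots<t_k<T} d\omega^{i_1}_{t_1}\cdots d\omega^{i_k}_{t_k},
% \quad i_1,\dots,i_k\in\{1,\dots,d\};
% the truncated signature of order m keeps only the entries with k\le m.
```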
So the signature contains all the iterated integrals. The truncated signature up to order m contains all the iterated integrals up to order m. So that's what you have when you take a bounded variation path. And then what you can do, you can say, OK, how about Brownian motion? Well, so your path doesn't have to be deterministic. It can be random. We can look at the signature of Brownian motion. And in order to define the signature of Brownian motion, you do the same thing. You take all the iterated integrals of a Brownian path. But here, the integration is in Stratonovich sense. So there is a reason for that. So you work with the Stratonovich integral. But there is the, you can either order Stratonovich or iteration. There's a one-to-one correspondence. So you have a signature of a Brownian path as well as a signature of a bounded variation path as well as a signature of the Brownian path. And you can take the Brownian path, the Brownian signature, and truncate it. Yeah? I don't see exactly what is your objective with that d1 and also get d0 in the log cap. So this is, so what you're trying to avoid the fact that you have to take all the components, right? So basically, when you, so suppose you have, you have omega has two components, right? So this thing with omega has two components is a matrix that has, you know, here, integral of omega 1 with respect to omega 1, integral of omega 1 with respect to omega 2, integral of omega 2 with respect to omega 1, and so on. See, you get an array, you get an array which contains all the integral of each component. This is for a two-dimensional, this is for iterated integral of order 2. When you go to order 3, then you have a cube. Order 4, you have a hypercube and so on. And the easiest way to use, the easiest way to express this is to use this tensor product between them. Right. So as I said, I'm trying to avoid algebra here, right? We are analysts, and so the best way to think of this is to just take all the iterated integrals between all the possible components up to all orders and that's the signature, and then you want to truncate it, you just take it into the integrals between components up to some order m. And you do the same thing for Brownian motion as you do it for a path with boundary variation. Okay. So it turns out that signature, the signature of a path is a very powerful object, effectively the signature of the path identifies the path, and there are theorems developed in the rough path theory. This is the basis of rough path theory. It comes from the development of rough path theory that started with the work with Terry Lyons and continued in recent years with many, many other people that do rough path theory. And the culmination of this is the work by Martin Herrer that got his Fields Medal. Using rough path theory he showed that you have solutions of certain SPDs and you can interpret them as rough paths. It's a very powerful methodology, and the basis of all this is this concept of a signature. We don't need rough paths in order to do cubitural methods. The only common part between rough path theory and cubitural method is this notion of signature. If you start with a signature of the path, the signature of the path identifies the path up to a certain class. If you know the signature of the path, you know the path. But there's more. 
If you have a process, if you start with a measure on the set of continuous path and you look at the corresponding process and you are able to compute the expected value of the signature of the process, this quantity identifies the law of the process. If you have two different processes that have the same signature, they will have the same law. So in particular, if you have a process, it's quite easy to compute the signature of the winner measure because we know how to compute the expected value of the integrals of Brownian motion. That's easy. Everybody knows this. If you compute these things, any other process that has the same signature, any other process for which the corresponding expected value of the iterated integrals will be the same as the expected value of the iterated integrals of Brownian motion will have the same law. That's fine. But now let's suppose that you take another process that matches the expected value of the signature, not for all iterated integrals, not the entire signature, but the truncated signature. So you say, OK, I'm taking a process that if I compute the corresponding expected value of the truncated signature, it matches the truncated expected value of the signature of Brownian motion. What can you say about the corresponding laws? And it turns out that the corresponding laws are very close. It turns out that if you take a functional of that process, the expected value of the functional of that process is very close to the expected value of the functional of Brownian motion. And that's what I wanted to get to because I want to replace the Winder measure. I want to replace the Brownian motion with another process that will match the truncated signature. So to do that, what I'll end up, I'll end up with an approximation of the corresponding Feynman-Kazer presentation. I'll end up with an approximation of the PDE. And this approximation will be a higher-order approximation in the sense that the higher the truncated signature is, I'm matching, the better the approximation. And I'll come back to this. And if you want to find out conditions under which this is the case, I've done some work. There's been a lot of work in this direction. And if you want to find out processes that have these properties, processes that match the Brownian motion in some way, there's been a lot of work in this direction. And you will not be surprised to see Kusouka in here. So Kusouka, obviously, they developed the theoretical methodology, the theoretical analysis, and then they realized he and his collaborators realized that you can then develop numerical methods that exploit these theoretical analysis. And Lyons and Victor came in 2004 and said, OK, you can do this to any order. So they proved the following result. They say, OK, so you start with the signature of Brownian motion. You know what the expectation of the signature of Brownian motion is. And you can construct a process that will have a finite set of outcomes. It's going to be a very simple process. It's a simple process. And the outcomes, the paths that this process can take, this random process can take, will be finite. And each of these paths will have bounded variations. So what you see in here is bv. It means that the paths have bounded variations. So this is going to be a process. You can think of it as a random process. But this random process is only allowed to take a finite, the paths of this random process will be one of a finite set of paths with bounded variation. 
And so there exists a finite set of paths, say capital N paths. And there exists a process that the probability that the process takes any one of these paths is given by some lambda i. So there exists some lambda i. The sum of the lambda i is 1. So we're dealing with, this is the distribution of the process. And this random process takes one of these paths, which has finite variation, with probability lambda i. And this process has the property that if you compute the expected value of the truncated signature of this process, it matches the expected value of the truncated signature of the Brownian motion. And the corresponding distribution of this process, the law of this process on the path space, so the law of this process on the path space is a linear combination of Dirac measure at those paths times the corresponding probability for each of those paths. So this is a simple, is the empirical distribution of this process on the path space. This is called the cubiture measure. Yeah? The sequence of the eye. The sequence of the eyes. This one. Here, on the previous page. Yeah? The answer is here. Yeah? Yeah. Oh, so that's, ah, I see. So this is just, this is just a notation to integrate on the simplex. So that simply means, so the first order, right, the first order. The second order is just omega t1. The second order is, and the third order, so you just integrate, you just integrate on the simplex. That's what that, that integral simply means this type of iterated integrals. So that's simply a notation for this iterated integrals. So t1, t2 are sort of the dummy variable with respect to which I integrate. I'll get there. I'll get there. Okay. So, so what they have, what Lianos and Victor have is this sort of, sort of existence result. They say, if you start with the Brownian motion, you truncate the Brownian motion, there is, there exists, there exists this finite variation paths so that the corresponding measure, the corresponding measure is a cubitual measure in the sense of the expected value of the truncated signature of this new process coincides with the expected value of the truncated signature of the Brownian motion. And you know, for, from now on, I'm going to call, I'm going to call by qtm, the signature, the cubitual measure for where m is the truncation parameter. So that means that you match all the iterated integrals up to the level m. So m can be one. That means you just match the first, just match the iterated integrals of order one, two, order two, three, order three, and, and, and so on. Okay. So, right, so suppose you have such a truncated measure, you know, how do you actually use, if you have, you should suppose you have such a truncated cubitual, how do you actually use it? Okay, so, so the, the simplest example is to look at, see what happens, how do I use it in order to compute the expected value of a function of a diffusion, right? At the end of the day, a diffusion, you can think, you think of, you can think of a division as being, as being a functional of the Brownian path, right? And how do I use this, the, the cubitual measure in order to compute the expected value of a function of another function of the Brownian path? So you know, you start with a standard SD, you have a standard SD, well, which is driven by, so, you know, it's in any dimension you want, and it's driven by a d-dimension in a Brownian motion, okay? And I want to compute the expected value, I want to compute the expected value of a function of this SD. 
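In formulas, the existence result just described can be summarised as follows; this is only a schematic statement, and the precise degree-counting conventions (where a dt contributes twice as much as a dW) are in the Lyons–Victoir paper:

```latex
% Cubature formula of degree m on Wiener space: there exist bounded-variation
% paths \omega_1,\dots,\omega_N and weights \lambda_1,\dots,\lambda_N > 0 with
% \sum_i \lambda_i = 1 such that
\mathbb{E}\bigl[S^{(m)}(W)_{0,T}\bigr]
   = \sum_{i=1}^{N} \lambda_i\, S^{(m)}(\omega_i)_{0,T},
% where the iterated integrals of W are Stratonovich integrals.  The cubature
% measure is then  Q_T^{m} = \sum_{i=1}^{N} \lambda_i\, \delta_{\omega_i}  on path space.
% The left-hand side is explicitly computable; stated here from memory (usually
% attributed to Fawcett, with e_0 standing for the time component), it reads
\mathbb{E}\bigl[S(W)_{0,T}\bigr]
   = \exp_{\otimes}\!\Bigl(T\bigl(e_0 + \tfrac12\textstyle\sum_{i=1}^{d} e_i\otimes e_i\bigr)\Bigr),
% but for the talk only the fact that it is known matters.
```

Computing the expected value of a function of the diffusion, as set up just above, is then the first test case for how such a measure is actually used.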
So this is a standard problem which appears in many, many applications, right? But from the perspective of a Feynman-Kass representation, it's a, this, you know, this, this particular expression is the Feynman-Kass representation that can be used to represent the solution of a linear PD, right? So exactly what I'm doing here, I'm solving a linear PD by using a probabilistic, this problem, the Feynman-Kass representation, okay? So how do I use the, how do I use the cubitual measure in order to produce an approximation of this linear PD? Okay, so, again, the reason why I'm going to be able to use it is because, is because, you know, you know, at some level, the solution of X is a function of the Brownian path, okay? And it's, it's driven by the Brownian path, okay? So suppose you have one of these cubitual measures that sits on, that you have one of these cubitual measures that sits on a number of finite variation paths, omega i. So what you do, you take each one of these paths and you solve the SDE, but instead of solving the SDE with respect to the corresponding, with respect to the Brownian motion, you replace the Brownian motion with the corresponding path. So you replace the Brownian motion with the corresponding path. This is going to be an integral now, no longer a stochastic integral, but an integral with respect to a finite variation process, okay? And you solve for each of those paths in the supports of the cubitual, you solve the corresponding ODE now, right? Which is driven by this path, right? So you know, you have one solution of the ODE for each of the paths in your cubitual measure. You solve the ODE, right? And then you're saying, okay, now that I solve my ODE, how am I going to approximate the functional? Okay? I need to compute the expected value, I need to compute the expected value of my functional of the solution, but now the solution is only, is only integrated on those paths. So what you do, you take, I'm sorry about this, so what this means, what this means, this should be alpha. So you take, you estimate alpha, your function, at the solution of the ODE, and you take the linear combinations of these guys, multiplied by the corresponding probability for each of those paths. To put it differently, you integrate alpha of x t, we respect not to the cubitual, not to respect to the Winder measure, but we respect to the cubitual measure. It is as simple as that, all you have to do is solve this ODE, you estimate the value of the, you estimate alpha of the final value of the ODE and you take the linear combinations of those. Okay, so what do you get if you do that? So what you will get, when you take the difference between this, the quantity that you get, the difference between the expected value of alpha of x t, that means integrate alpha with respect to the law of x, and compare it with the integrating alpha with respect to the law given by this weighted Dirac measures at these paths, the error that you get is a border delta to the power m minus 1 over 2. So I need to explain what this is. So what you do, you can work on the whole interval, 0 t, or you can divide this interval into subintervals, and then for each of these subintervals, you can apply the cubitual, you can solve this ODEs. So if you do that, and the measure of the partition is delta, then the order that you end up is this. And m is the number of the iterated integrals that you match. So if you just match 1, is bad news. If you just match 2, you get a half. 
If you match 3, you get (3 minus 1) divided by 2, which is order 1. So the equivalent of the Euler approximation, an order 1 method: if you use the cubature methodology, you need to match all the iterated integrals up to order 3. To obtain an order 2 method, you have to match all the iterated integrals up to level 5, and so on. So when you look at the literature on cubature methods, you will only see cubature methods of odd order: 3, 5, 7, 9, and so on. And the reason for that is that the bound you obtain is of this form: m, the number of iterated integrals that you match, minus 1, divided by 2. OK, so that is the error. And of course, now I am coming back to Peter's question: how many of these paths do you have to take? Well, in order to get an order one method you need a cubature of order 3 — and this is not unique, there is no uniqueness here. For d equal to 1, a cubature of order 3 is simply two straight paths going either up or down, each with probability one half. So you go straight up with probability one half and straight down with probability one half, and this is a cubature of order 3 in dimension d equal to 1: you need two paths. In d dimensions, you need this many paths: the integer part of d times (d plus 2) divided by 6, plus 1. So you need this many linear paths in d dimensions to obtain a cubature of order 3, that is, a first order approximation. Now if you want to go up to a cubature of order 5, for d equal to 1 you need three paths. You need a path which just stays at zero, constant; then you need a path that goes according to this law, and a path which is minus the first one. This path that goes according to this law is piecewise linear: from 0 to a third it is a straight segment, from a third to two thirds it is another segment, and then there is another segment from two thirds to 1. So it is a piecewise linear path that goes like this, and this is exactly the picture that I have in here. So the three paths — you stop here — are: a straight one, one which goes up, down, and up, and another one, the reverse of it, which goes down, up, and down. You have these three paths, and you have to solve the ODE for each of them. Now, when you get to the first partition time, for each of these points you have to restart the procedure: for each of these points, you have to solve the ODE along the same three paths, but starting from these points. So if you have three paths here, then you have three times more paths here, and so on. Every time you add an element of the partition, you multiply the computational effort by three. So you have a computational increase which depends on the number of elements in your partition. And that is bad news, right? Because it means that if I have a partition with 20 points and I use the cubature of order 3, I end up with 3 to the power 20 paths that I have to solve.
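To keep the formulas of the last two paragraphs in one place, here is the recipe written out schematically (constants and the precise smoothness assumptions on the vector fields are omitted; this is a sketch following the notation of the talk):

```latex
% Stratonovich SDE:  dX_t = V_0(X_t)\,dt + \sum_{j=1}^{d} V_j(X_t)\circ dW_t^j,  X_0 = x.
% For each cubature path \omega_i, solve the ODE obtained by substituting \omega_i for W:
dX_t^{\omega_i} = V_0(X_t^{\omega_i})\,dt + \sum_{j=1}^{d} V_j(X_t^{\omega_i})\,d\omega_i^j(t),
\qquad X_0^{\omega_i} = x,
% and approximate
\mathbb{E}\bigl[\alpha(X_T)\bigr] \;\approx\; \sum_i \lambda_i\,\alpha\bigl(X_T^{\omega_i}\bigr),
% with an error of order  \delta^{(m-1)/2}  once the construction is iterated over a
% partition of [0,T] with mesh \delta.
% Example (degree m = 3, dimension d = 1, subinterval of length \delta): two linear
% paths with weights 1/2,
\omega_{\pm}(t) = \pm\, t/\sqrt{\delta}, \qquad t \in [0,\delta],
% so the Brownian increment is replaced by \pm\sqrt{\delta} with probability 1/2 each.
```

Iterating this over the elements of a partition is exactly what produces the 3-to-the-power-20 figure just mentioned.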
And that's not something which is feasible. So the next step, once I have this cubitural measure, so we think of the cubitural measure as being a measure on this set of paths that branch out, keep branching out, then I have to say, OK, I won't be able to. I won't be able to compute the ODE for each of these corresponding paths. So what I need to do, I need to select a subset of these paths and only compute the ODE on this subset of these paths. And I need to find a way in which I can do this selection so that I don't mess up the overall computation. So the cubitural measure does is really, it identifies a set of representative paths. But the next step is to say, OK, now this is the representative path. If I do this and I just do this integration with respect to the measure on this representative path, I get this error, I can't compute the ODE over all these representative paths. Therefore, I have to select some of these paths, the most, the one, a subset of these paths, depending on the amount of computational effort that I have on my disposal. And I will only do the computation on this selection of computational paths. Now, if you're trying to solve a problem on a finite time horizon, you have to use a partition. And this is where I stop before the break. So what's going to happen is that for each time of the partition, you have to solve all these ODE's and the paths branch out more and more. Now, the good news is that because you match the signature, you need a lot less elements in the partition compared to what you would have to use if you were to do a standard Euler approximation of the SDE and then sample from that. So this isn't something that I can prove. This is something that we observe in practice. That the number of points in your partition doesn't have to be equal with the number of points in a partition that you have to use when you use an Euler approximation to solve the SDE. And then you replace the Brownian motion by you sample from the Euler approximation. Nevertheless, especially if you are in high dimensions, even if you have a few points of the partition, the number of ODE's that you have to solve increases exponentially with the number of points in the partition. So you have, as I said, you have to find a way in which you take just a sample from this pass. So in order to do that, what we do, we replace the cubiture measure by another measure. So remember, the m here tells you how many iterated integrals do I have to solve? So do I have to match? I want to match. And then for each of these s, you have to solve a number of ODE's. The number of ODE's depends on the dimension. If you just solve for a cubiture of order 3, you need, for example, order d squared paths if you solve the if your ODE's in the dimension. And then what you do next, you replace the cubiture measure by another measure, which depends on a second parameter, which is capital N. And the second parameter will tell you how many of these paths are you actually going to take. So your analysis, and you find out, actually, all I can afford is to solve 1,000 ODE's, a million ODE's. It depends on your computational effort. And then you say, that's, and then this is the number of paths that you will choose. So out of all the paths that you end up, that sort of increased exponentially with the number of points in the partition, all you'll do, you'll just pick up capital N paths, where n is a parameter which you choose, depending on how much computational effort you afford to use. 
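Before describing how the selection is done, here is a small toy script (my own sketch, not code from the talk or the papers) that makes the blow-up concrete: it runs the degree-3 cubature in dimension one on a geometric-Brownian-motion-type Stratonovich SDE, for which the ODE along each linear path integrates in closed form and the exact expectation is known; all parameter values are arbitrary illustrative choices. It enumerates the full binary tree of 2^n paths, which is precisely the exponential growth that forces one to keep only N of them.

```python
import itertools
import math

# Toy illustration (my own sketch, not code from the talk): degree-3 cubature
# on Wiener space in dimension d = 1, applied to the Stratonovich SDE
#     dX_t = mu * X_t dt + sigma * X_t o dW_t,   X_0 = x0,
# whose exact mean is  E[X_T] = x0 * exp(mu*T + 0.5*sigma**2*T).
# Degree-3 cubature on a subinterval of length delta: two linear paths with
# endpoint increments +sqrt(delta) and -sqrt(delta), each of weight 1/2.

def cubature3_expectation(alpha, x0, mu, sigma, T, n_steps):
    delta = T / n_steps
    total = 0.0
    # Enumerate the full cubature tree: 2**n_steps piecewise-linear paths,
    # each of weight 2**(-n_steps).  This is the exponential blow-up that the
    # path-selection step is designed to avoid.
    for signs in itertools.product((+1.0, -1.0), repeat=n_steps):
        x = x0
        for s in signs:
            # Along a linear segment with increment s*sqrt(delta) the SDE becomes
            # the ODE  dx = (mu + sigma*s/sqrt(delta)) * x dt  on [0, delta],
            # which integrates in closed form:
            x *= math.exp(mu * delta + sigma * s * math.sqrt(delta))
        total += alpha(x) * 0.5 ** n_steps
    return total

if __name__ == "__main__":
    x0, mu, sigma, T = 1.0, 0.05, 0.4, 1.0
    exact = x0 * math.exp(mu * T + 0.5 * sigma ** 2 * T)
    for n in (2, 4, 8, 16):
        approx = cubature3_expectation(lambda x: x, x0, mu, sigma, T, n)
        print(f"n = {n:2d}   estimate = {approx:.6f}   error = {abs(approx - exact):.2e}")
```

Selecting which N of the exponentially many paths to keep is the job of the algorithm described next.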
And in order to do this, what we use is something which is called a tree-based branching algorithm, or TBBA. And we introduced this tree-based branching algorithm before cubiture measures were introduced, because this wasn't introduced in relation with cubiture measure. It was introduced in the context of filtering. So a tree-based branching algorithm is simply a methodology that provides optimal stratified sampling. So in statistical language, that's what it does. You get a measure which sits on some finite support or countable support, and you sample, in some way, out of that measure, another measure. You get another measure which has a fixed support, or it says a support which is less than your capital N, your choice of the number of points, and has certain properties. And I'm going to come back to its properties. And so this was introduced in the concept of filtering, and then we realized that actually this is an ideal methodology in the context of cubiture to keep the computational effort fixed. And in fact, anywhere you see measures that are constructed in this inductive way which involves trees, you can use this methodology to sample out of a tree. And so the idea here is that what you use, you don't use the cubiture measure itself. You use a subset of this cubiture path. You combine the cubiture measure with a TBBA, and that is going to produce another measure that will have the number of paths fixed to whatever you choose the number of paths to be. So when I have this cubiture path, I told you that, say, this is the first point in the partition, and then I have the cubiture path. For each of these cubiture paths, for each of these three points, I have to reapply the same, I have to attach the same three paths, and then I get 9 and then 27 and so on. What you do, what the TBBA does, the TBBA, it's fixed. It chooses only some of these paths, and it solves the ODEs only along some of these paths. So the paths that are in the cubiture measure but are not selected, you do not need to solve the ODEs. You don't care about those paths. All you care is about the paths that are selected. So first, what you do is you first select the paths, and then you solve the ODEs, not the other way around. That means that truly, you keep the computational effort fixed. So how do you do that? Well, essentially, you use something which is a Monte Carlo method, more or less, but it's stratified. And the idea behind it is that you think of having a number of particles that start a number of n particles that start here. You think of the n particles start here at the root of your tree, and these particles trickle down across the branches in a certain way that at each level, the empirical distribution of the occupation measure of these particles will approximate the original cubiture measure. And the law of the mass that the cubiture, the mass that this truncated cubiture puts on each path is given by this formula. I'm not going to go through it, but I only want to mention is that its chosen, the mass that it puts on each of these paths is chosen so that it approximates the corresponding lambda i, the corresponding cubiture weights to within 1 over n. And that's the important thing. So it tries, so the TBBA truncates the cubiture measure so that it gets as close as possible to it, to within 1 over n of the actual weight of that path. So remember, each path has a weight. So I have paths and weights. And as I go along down the cubiture tree, these weights multiply. 
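To make the weight rule just described concrete, here is a deliberately simplified sketch (again my own illustration, not the TBBA itself: the genuine tree-based algorithm correlates the random choices along the branches of the cubature tree so that exactly N paths survive and the total mass stays exactly 1, whereas the independent version below only has those properties in expectation). The rule it shows is the one described above: a path of weight λ receives either ⌊Nλ⌋ or ⌊Nλ⌋+1 offspring, chosen so that the expected number is exactly Nλ, and paths with no offspring are discarded before any ODE is solved for them.

```python
import random

# Simplified illustration of the offspring rule behind the TBBA (my own sketch,
# not the algorithm from the paper).

def offspring_numbers(weights, N, rng=random):
    """Path of weight lam gets floor(N*lam) or floor(N*lam)+1 offspring,
    with probabilities chosen so that the expected number is exactly N*lam."""
    counts = []
    for lam in weights:
        base = int(N * lam)          # floor, since N*lam >= 0
        frac = N * lam - base        # fractional part
        counts.append(base + (1 if rng.random() < frac else 0))
    return counts

def select_paths(paths, weights, N, rng=random):
    """Discard paths with no offspring (no ODE is solved for them) and give
    every surviving path the weight (number of offspring) / N."""
    counts = offspring_numbers(weights, N, rng)
    kept, new_weights = [], []
    for path, c in zip(paths, counts):
        if c > 0:
            kept.append(path)
            new_weights.append(c / N)
    return kept, new_weights

if __name__ == "__main__":
    # Eight cubature paths (represented here just by labels) with unequal weights.
    paths = [f"omega_{i}" for i in range(8)]
    w = [0.30, 0.22, 0.15, 0.12, 0.09, 0.06, 0.04, 0.02]
    kept, nw = select_paths(paths, w, N=10)
    print(kept, nw, sum(nw))  # total mass is 1 only on average in this toy version
```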
So for each of these paths, I have a corresponding weight. And when I select my paths, I select them by taking these weights into account. I want to select at most N of these paths in such a way that I come as close as I can, to within 1 over N, of the corresponding weight. So this is the formula that is written here. Each of the remaining paths keeps a corresponding weight, and this is still going to be a probability measure on the path space: the sum of all these weights will still be 1. So after I have applied the TBBA, I am left with a subset of paths, each with a corresponding weight, and the weights still sum to 1, so I am still dealing with a probability measure. Then all I need to do is solve the ODE along each of these remaining paths and apply the same methodology: take the sum of alpha, evaluated at the endpoint of the solution of the ODE driven by that path, multiplied by the corresponding cubature weight, in the way it is presented here. So the algorithm is really there; it is not very difficult. Yeah? So is there an analogy to particle filters, where you kill some of your paths, so you set their weights to 0, but then you have to reweight all the other weights so that you still represent the measure? Yes, there is, and I hope to come back and show you the application to filtering. OK, so if you go and look in the paper, you will see the algorithm written down in pseudocode; you can apply it, there is no problem with it. And this algorithm has some amazing properties, in the sense that it has minimal variance: among all the measures of this kind that approximate such a measure with a given number of paths, it minimizes the variance. So it has lots of nice properties. OK, so that is all I wanted to say which is generic and applies to all the examples that follow next. I have told you how you approximate the Wiener measure, and I have also told you how you control the computational effort. Now I am going to look at three different examples, three different classes of PDEs or SPDEs, where this methodology is applied. For each of these, what is going to be different is, of course, the corresponding Feynman-Kac representation — or, as René is telling me, I should perhaps call it a Kakutani representation, but there you are. I have to tell you what the representation is and how I discretize it, and then what is going to be common to all three is that I use the cubature method combined with the TBBA. So, example number one: semilinear PDEs. We will look at a semilinear PDE that looks like this. It has a final condition, which is phi. I have here a second order differential operator, which is written like this: these V_i's here are first order differential operators, so I am writing it in this format. I have a first order differential operator here, and then I add to it a sum in which a first order differential operator is applied twice. OK, so this is essentially a second order differential operator.
And the reason why I write it like this is that it is going to be easier to write the corresponding Feynman-Kac representation in terms of the solution of an SDE written with Stratonovich integrals. OK, so this is a second order differential operator which is applied to u, and then here is where the semilinearity comes in. The semilinearity is in terms of a function which, in the context of forward-backward SDEs, is called the driver: a function which can depend on time, on the underlying variable, on the solution, and on these operators, the V_i's, applied to u. OK, so the nonlinearity can be in this format. Now, if this seems too complicated, I am just going to give you a particular example, and you can keep this example in mind. A particular case is to say: let's not worry about this very complicated operator, I just replace it with the Laplacian. So I have the partial derivative in time plus the Laplacian applied to u, and then the nonlinearity is in terms of u and the gradient of u. OK, so this is a particular case which belongs to this wider class, and I have a final condition. So what we are doing here: we have a corresponding Feynman-Kac representation, and then we are going to use the cubature measure to approximate the solution of this equation. Right, so what is the Feynman-Kac representation? Well, in this particular case, for a semilinear PDE, it is in terms of a forward-backward SDE. So let me explain briefly what this is. A forward-backward SDE has two parts. One is a forward component: you start from some value x naught at time 0 and you simply solve a classical stochastic differential equation forward in time, where the V's that you saw before appear as coefficients. So I have a drift term, and then I have a stochastic term, which is written in terms of Stratonovich integration with respect to a d-dimensional Brownian motion, and the V's that you saw there appear here as coefficients in the stochastic integral. OK, so this is the forward component of the forward-backward SDE. And then you have a backward component. For the backward component, you start at the final time capital T with some function of the forward component evaluated at capital T, and the way you interpret this is that you solve it backward in time. You have a drift term, which is in terms of the driver that you saw in the semilinearity of the semilinear PDE, and then you have a stochastic term here, which is an Itô integral. But you have an additional term in here, Z, which is the mystery guest. So this Z here is essentially part of your solution: the solution is going to have three components, X, the forward component, Y, the backward component, and Z, which is this term that appears in the stochastic integral. And the reason why I have to impose this is that, even though you think of running the backward component backwards in time, you impose the constraint that the backward component is measurable forwards in time — it is measurable with respect to the forward filtration generated by the Brownian motion — and the only way you can do that is to have the freedom of choosing this additional process Z.
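Since the equations are only gestured at on the slides, here they are written out schematically, following the notation of the talk (the signs, the factor 1/2 in front of the sum, and the exact regularity assumptions are the usual ones from the Pardoux–Peng literature and are not meant to be definitive here):

```latex
% Semilinear terminal-value problem:
\partial_t u + V_0 u + \tfrac12\sum_{i=1}^{d} V_i^2 u
      + f\bigl(t, x, u, (V_i u)_{i=1,\dots,d}\bigr) = 0,
\qquad u(T,\cdot) = \varphi,
% with the particular case   \partial_t u + \tfrac12\Delta u + f(u,\nabla u) = 0.

% Associated forward-backward SDE:
X_t = x_0 + \int_0^t V_0(X_s)\,ds + \sum_{i=1}^{d}\int_0^t V_i(X_s)\circ dW_s^i,
\\[2pt]
Y_t = \varphi(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\, dW_s,
% with (Y, Z) required to be adapted to the forward Brownian filtration;
% Z plays the role of (V_i u)(s, X_s).
```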
So the famous theorem, introduced by Pardoux and Peng in 1990, is that if you have a system introduced in this way, there exists a triplet of processes X, Y and Z that is a solution of this system, and the solution is unique. So what is the connection now between this system and my semilinear PDE? It is, again, Pardoux and Peng who came along and said: suppose that you take your forward-backward SDE, the one that I have just shown you, you start at time t from x, and you solve the system forward from time t up to the final time capital T. So at time s, this is the expression for the forward component — this is the flow, the classical flow associated with my forward component. Then at time capital T you take the function in the final condition, evaluated at the solution of the forward component at time capital T that started from x at time little t, and then you run backward in time according to the backward component, up to the time little t where you started. Then this quantity in here — and, again, the fact that I index the process with t and x does not mean that Y at time t equals x; it simply means that I compute this quantity by starting the forward component at x at time t, going to the final time, and then going backwards in time, back to time little t, with the backward component — this quantity gives you exactly the representation of the solution of the semilinear PDE. So u, the solution of the semilinear PDE at time t evaluated at x, is exactly equal to this quantity, and you can prove that it is deterministic, even though you might think that it is random. By taking the expected value of this quantity, the stochastic integral disappears, and you get this expression here in terms of all the elements that appear there: the forward component, the backward component, and Z, this mystery process. But there is more: both the backward component and Z can be shown to be functions of the forward component. So essentially, even though this looks very complicated, this is a nonlinear Feynman-Kac representation for the solution of the semilinear PDE. And that's it: I have my Feynman-Kac representation, and I can go on and apply my methodology to solve it. So the next step is to discretize. This is what I am saying in here: I have a way to represent the solution of the semilinear PDE in terms of a Feynman-Kac representation, and then I can start to discretize it, and I am explaining in here how you do that. What we use here is the methodology introduced by Bouchard and Touzi. It is simply a discretization of the functional. I cannot go through the details because I am running out of time, but there is a standard discretization that you need to do. And then what you prove is that if you apply this discretization, this algorithm — before you do any cubature approximation or anything like that — the error that you get is either of order delta or of order square root of delta, depending on the conditions that you impose on the coefficients. Now, the discretization that I have just briefly shown you is not unique: you can use any other discretization, and there are a lot of other discretizations out there. This particular discretization is of order 1.
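For concreteness, the representation and the kind of backward time-discretisation referred to above look schematically as follows; the scheme is written here in its standard Bouchard–Touzi-type form from memory, as a sketch, and the precise variant used in the paper may differ in the details:

```latex
% Nonlinear Feynman-Kac representation (Pardoux-Peng):  u(t,x) = Y_t^{t,x},  and
u(t,x) = \mathbb{E}\Bigl[\varphi\bigl(X_T^{t,x}\bigr)
          + \int_t^T f\bigl(s, X_s^{t,x}, Y_s^{t,x}, Z_s^{t,x}\bigr)\,ds\Bigr].

% Schematic backward scheme on a partition t_0 < \dots < t_n = T,  \Delta_k = t_{k+1}-t_k:
Y_{t_n} = \varphi(X_{t_n}),
\qquad
Z_{t_k} \approx \tfrac{1}{\Delta_k}\,
      \mathbb{E}\bigl[\,Y_{t_{k+1}}\,(W_{t_{k+1}}-W_{t_k})^{\top} \,\big|\, \mathcal{F}_{t_k}\bigr],
\qquad
Y_{t_k} \approx \mathbb{E}\bigl[\,Y_{t_{k+1}} \,\big|\, \mathcal{F}_{t_k}\bigr]
      + \Delta_k\, f\bigl(t_k, X_{t_k}, Y_{t_k}, Z_{t_k}\bigr).
% (Implicit in Y_{t_k}; an explicit variant uses Y_{t_{k+1}} inside f.)
% In the cubature algorithm the conditional expectations are evaluated as weighted
% sums over the surviving cubature paths.
```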
So the best that you will be hoping to do is to end up with an approximation of order 1. So it's order 1 because it's delta of the power 1. So in order to match the corresponding measure that will match this, we'll be to use a cubature measure of order 3. 3 minus 1 divided by 2 is 1. I can use higher order cubatures, but it's not going to be helpful for me because my discretization, my error is going to be dominated by the way in which I discretize the functional. But there are high order discretization. And you might want to look at the paper Jean-Francois and I produced when you can get basically discretization of any order. So that's how you discretize. And once you discretize that, all you have to do, you have to apply the methodology. You know, instead of solving the forward SD, you solve all these ODs. You compute the course. You sub-sample using the TBBA. And then you compute the corresponding, you compute the discretized functional integrate with respect to this truncated cubature measure. And the result that you end up is the following. So there should be a 1 over 2 in here. Let me assume so. The L2 norm of this is controlled by three terms. And this is where you see exactly which error, what of these three it stays. How each of these three stays infringe the error. So the first point here, the first one here, this is delta. This comes as a result of using a first order discretization of the functional. Now you can do better than this. You can use a high order discretization. And then you get a better approximation for this first error. And then the next one depends on which cubature measure you're going to take. So if you take a cubature measure of order 3, then you get another delta. We can go higher. You take a cubature measure of order 5 and so on. And you can have a better approximation but with more computational effort. And then the third one is an error as a result of trying to control the computational effort. And this is 1 over n, or square root of n if I put the square root of n in there. So you can see clearly each one of these three steps will involve an error. So a quick example, we just got this same linear PDE where we got sort of you will recognize this as coming from a geometric Brownian motion. And then we took a driver which looks like this and a final condition which is of this form. And I'm sure you can see this very well. But the point is that as you increase the number of particles, so particles by particles I mean capital N. As you increase the number of terms and then you get better and better approximation is the error or the case. And this is done for cubature of order 3 and cubature of order 5. And the computational effort, so if you do an averaging over independent runs of applying the TBBA, you get a better result because you reduce the variance and you get a much better error. The error is sort of a border 0.01, a relative error. And it is in here that I'm trying to show that because of the fact that you keep now the computational effort fixed for each element of the partition, as you increase the number of elements in the partition, the computational effort increases linearly. So there's no explosion in the computational effort, even though it did look at the first time that I have an explosion. OK, another example. So I'm going to now go and look at what happens in filtering. And this is in filtering, I've been working in filtering for a number of years. 
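Before moving on to the filtering example, it may help to record the shape of the error bound just discussed. Schematically — the constants and exact exponents depend on the assumptions, and the talk itself notes a missing square root on the slide, so this is only indicative:

```latex
% Heuristic form of the overall L^2 error of the scheme:
\bigl\|\,\text{true value} \;-\; \text{cubature/TBBA estimate}\,\bigr\|_{L^2}
 \;\lesssim\; C\Bigl(
   \underbrace{\delta^{\,\alpha}}_{\text{discretisation of the functional}}
   \;+\; \underbrace{\delta^{\,(m-1)/2}}_{\text{cubature of degree } m}
   \;+\; \underbrace{N^{-1/2}}_{\text{TBBA with } N \text{ paths}}
 \Bigr),
% with \alpha = 1 for the first-order discretisation used here.
```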
So there is this methodology that produces an approximation of the solution of the filtering problem in terms of the occupation measure of a set of particles: you have particles that move through space, and they produce an approximation of the solution of the filtering problem. The equation that you solve in this case is written here in weak form. If you want the strong form of this equation, a particular case — again, similar to what I told you before — you have a drift term, which you can take to be just the Laplacian, and then you have a stochastic term, which is a sum of integrals with respect to a Brownian motion. This is not the Brownian motion that appears in my cubature measure; it is another Brownian motion, independent of the first one. And the noise is multiplicative, in the sense that I multiply the noise, the Brownian motion, by the solution itself, and I multiply this by some gamma_k, where the gamma_k's are functions themselves. And I can see I am rapidly running out of time, so I am just going to show you quickly that the Feynman-Kac representation looks like this. The connection with the filtering problem is that this linear SPDE is used in the context of filtering, where the X in here is a process that I would like to estimate. X is a process which we call the signal, and it is the solution of an SDE. And I observe X. How do I observe X? I observe a function of X, to which I add noise, modelled by yet another Brownian motion. And then the solution of this equation gives me the solution of the Zakai equation, which, properly normalized, gives me the solution of the filtering problem. And I need now to go to the simulation. How do I do this? OK. What you will see next is a simulation in which you see the evolution of some particles. Each of the particles evolves according to one of those ODEs. I am choosing 5,000 particles: at each time, I am solving 5,000 ODEs. And I select these ODEs — I select which particles I am going to use — by using the TBBA. So the particles evolve according to the ODEs that are dictated to me by the cubature measure, and exactly which ODEs I am going to solve is decided by the TBBA. So let me just start running this. OK. So, what you have seen here — first of all, there are several things that you need to look at. The solution of the Zakai equation, the solution that I need to estimate — let me start it from the beginning — is the green curve in here. If you look carefully, I hope you can see there is a green curve in here, and it is bimodal. The filtering problem — the solution of the Zakai equation — has two modes. And it is quite amazing: there are equations, SPDEs out there, that have explicit solutions which are bimodal. You can end up with multimodal ones, but this one is bimodal. So your particles have to approximate this bimodal distribution, but they do much more than that: they approximate not just the distribution at any time t, they approximate the distribution on the path space, which is a much better approximation. But what you see here is just the approximation at any time t. So the intention is to approximate this bimodal distribution, which eventually becomes unimodal, in the sense that the second mode is still there but becomes very, very flat.
Now in the context of filtering, the reason for that is that as I observe, as I make more and more observations, I find out where my signal is. So my signal is given by this triangle in here. The actual position of the signal is given by this triangle. But the posterior distribution, by using the observation, I am not able to find out exactly where the signal is. All I am able to do is to compute the conditional distribution of the signal given the observation. And at the beginning, I'm not doing a very good job, not because I can, because I can, in the sense of the posterior distribution is truly bimodal. The blue thing here is the actual distribution of the signal without taking into account any observations. So if you were to just use the blue thing, you'll go completely wrong, because the blue thing simply says that the signal can be either in one place or the other place, but I don't know which one it is. But when I start using the observations, then I know, I begin to know where the signal is going to be. The particles themselves are represented in here by these straight lines in here. So you can see there, there's some very narrow straight lines. Each of these straight lines is the position of the particle. So you look, so the whole cloud of particles start by being very concentrating. They split into two bits. I solve ODEs, I solve ODEs, but eventually, I go down one route, and then I don't continue that route. The whole mass moves onto another ODE that becomes more relevant, and this ODE continues, and that ODE doesn't continue. So the particles themselves, they just move from one place to another, depending on how the TBBA tells them to move. And so all the time, I have a number of surviving particles that solve this ODE, and only the one that survived what I do with them. I just compute the corresponding empirical distribution of this, and that's what I get. The red thing is going to give me that. So essentially, I solve this methodology. It shows how I can use the cubiture measure to solve the SPD in the context of the filtering problem. OK, so I'm going to go back now, and I only have a few minutes left. So I'm just going to show you this result. So again, this result here, again, tells you what the error is going to be, depending on the various parameters. So the error depends on how good of a job I do to discretize the functional. So it's 1 over n to the power, if 1 over n is the order of the mesh of my discretization, 1 over to the power alpha is the error that I get by discretizing the functional. Now, this alpha, it's a most 1. With existing methodology, we cannot get better than 1. We're trying very hard now to go to 2. Then the next one depends on m. m is the order of the cubiture, and again, is 1 over n to the power m minus 1 over 2. So you can just work with cubiture of order 3. It'll be OK. And 1 over n essentially depends on how many cubiture paths you're allowed to use. So each of the three steps induces a corresponding error, which can be computed in terms of the corresponding parameters. OK, so let me not go into this. I want to go because this mean field, and this is about mean field. So I'm just going to say a couple of things about the third example. So for these third examples, you have a nonlinear diffusion that looks like this. So the corresponding PDE, so I'm writing one of the corresponding PDEs, is nonlinear in the sense of these vi's in here, these differential operators that you saw before, apply to the solution. They are applied to the solutions. 
These V_i's in here depend on the solution itself; that is why it is nonlinear. So the V_i's depend, obviously, on x, on the independent variable, but they also depend on the integral of some function phi, integrated against the solution of the PDE itself. So this is truly a nonlinear PDE: the coefficients themselves depend on the solution. If you now want to write the corresponding Feynman-Kac representation, it is going to be given in terms of a nonlinear diffusion. So what do you have? You have a stochastic flow where each of the coefficients depends on the solution, the flow itself, but they also depend on the expected value of some functional of the solution: what you have in here is an integral of phi with respect to the law of the process itself. So the direction in which the nonlinear diffusion is going to go is dictated not just by the position of the diffusion at a certain time, but also by the entire law of the diffusion. This is a McKean-Vlasov SDE, or a nonlinear diffusion. So we have some conditions under which this works, and we have used two sets of methodologies to discretize the functional. And we have a result, which we proved last year — the paper is still waiting to appear — which depends on many, many parameters. I am not going to go into these details; Eamon has gone to a lot of trouble to compute these errors depending on all the parameters that appear in there, so let me not go into them. A simple example that we looked at was to take a nonlinear diffusion where the drift is given by the law — by the expected value of the diffusion itself — plus a Brownian motion. And the reason why we chose this is that we can compute the solution explicitly: there is an explicit expression, and it is given by this. We used that, and we showed that, obviously, it works — because otherwise I would not be talking to you. We also looked at a more complicated situation, a two-dimensional case where the coefficients are not uniformly elliptic, and again, in that case the whole methodology still works. OK, so let me conclude with some final remarks. I have made a case here that you can use Feynman-Kac representations to represent the solutions of a variety of PDEs and SPDEs, and once you have such a representation, you can begin either to analyze them theoretically or to approximate them numerically. If you use the Feynman-Kac representation to numerically approximate the solution of these PDEs or SPDEs, there are three steps that you need to take. Step number one: you need to discretize the functional. Discretizing the functional is a problem-specific question: for each of these PDEs, you have to look at the corresponding functional, find a way to discretize it, and compute the corresponding error. The other two steps do not depend on the problem; they are common to all the methodologies. And the reason they are common is that each of these Feynman-Kac representations is essentially a representation in the form of an expected value of a functional with respect to the Wiener measure, and everything that you do is replace the Wiener measure with a cubature measure. And I have explained how you do this discretization using the cubature measure.
And then the third step is essentially to reduce the support of the cubature measure to a support of the size that you choose to use, and you do this by using the tree-based branching algorithm; and then you compute, correspondingly, the error that this induces. So you have to put all three of these steps together to get an approximation of the solution of the PDE or SPDE based on the Feynman-Kac representation — which means it is a probabilistic method. Now, the cubature measure method itself is essentially deterministic: I solve a set of ODEs. Each of these particles — the kind of particles that you have seen in the filtering application — solves an ODE, and you use the corresponding paths to represent the cubature measure and approximate the Wiener measure. The TBBA is not deterministic: it is a random method that selects, in a judicious way, N paths out of the paths that make up the cubature measure. There is another methodology that is not random but deterministic, introduced by Lyons and collaborators, which is called the recombination method. The recombination method tries to do the same thing, but deterministically: you reassign the paths in a deterministic way. By using the TBBA, there is no exponential increase in the number of paths: the number of paths remains constant, equal to the number that you decide. And if you put all of these things together, you get an approximation for any PDE for which you have a Feynman-Kac representation. I hope that this will eventually lead to a theory of approximation that can be applied in general: you can envisage a situation where one day you give your Feynman-Kac representation to your Maple or your Mathematica, and underneath you have all this machinery, and it just spits out the approximation. Thank you very much. Thank you. Are there any questions? OK. So, how does your method degenerate as the diffusion goes to zero? Suppose you have a low-diffusion limit — for example, take a heat equation, apply some drift terms, and expect some transport equation to be the limiting case when epsilon, the diffusion, goes to zero. So how does the method degenerate? How do you see this kind of lack of diffusion in your methods? I expect this would be related somehow to this Hörmander-type condition. Yes — you have given the answer. OK, so you have this set of methods. They work under the Hörmander condition; in fact they work under a condition which is called the UFG condition, which is weaker than the Hörmander condition. But still, there is some sort of non-degeneracy that the corresponding differential operator has to satisfy to make these methods work. This non-degeneracy means that the corresponding ODEs will stay in the support of the diffusion. But somehow the diffusion itself has to be well defined, at least on its support: you have to fill out the support of the diffusion. Now, if you have a situation in which you hit a boundary — like the CIR process, which hits 0 — this is not going to be satisfied, so the method is going to degenerate exactly because the corresponding condition fails. So you see this exactly. You can still do the approximation, and there are lots of tricks of the trade that you can use.
You can use a more refined, locally refined partition, and you can compute these things locally so that you avoid that — the same things that you do with standard solvers for ODEs or SDEs, you can do here. But generically, you know that in the limit, if you just try to approximate the limiting equation, you will fail. Because ultimately you might want to recover the method of characteristics — I don't know if you see what I mean — but ultimately, when you are doing Feynman-Kac, you are doing some sort of characteristics for a parabolic equation, and when this parabolicity disappears and perhaps some kind of hyperbolicity is left, you might be able to recover the method of characteristics. So I was wondering. Right — it is all to do with the UFG condition. So it is no longer a question of the Hörmander condition, it is a question of the UFG condition. What you need to do is adapt the Feynman-Kac representation — approximate the representation so that you use something which satisfies the UFG condition — and then you apply the method to that. Yeah? When you say you approximate on the path space, does it mean that you approximate this function u everywhere at the same time, or starting point by starting point? You approximate this function starting point by starting point. But you can do these things — there is a way in which you can do them together. OK. But I mean, the Feynman-Kac representation simply gives you a representation of the solution at time t evaluated at a point x, so that's what you do. OK, another question. Can you do it with jumps? Sorry? Can you deal with jumps? OK. If you had asked me this question last week, I would have said no. But yesterday I was in Durham, and Vlad Bally gave a talk in which he showed that you can have equations with an additional jump term, and that there is a way to approximate the small jumps by an integral with respect to Brownian motion. It is an amazing thing: somehow, out of these small jumps, you can approximate the small jumps by an integral with respect to Brownian motion, and you are left only with the big jumps. And as a result of that, I think the answer I can now give you is yes: if you do this additional approximation — you replace all the small jumps, and there are lots of them, by an integral with respect to Brownian motion — then you apply the same methodology. The large jumps are already fine: there is a finite number of large jumps, there will not be a lot of them; you go up to the jump, then you jump, and that is not a problem. If you know what to do in between them, you are going to be OK. But if you really want to do the cubature measure with jumps — without doing this trick — then that has not been done. So maybe students out there can come along and say: OK, now let's try to approximate not the Wiener measure, but a measure that allows jumps, and then you can repeat the whole methodology with jumps. OK, so one last quick question, so that we can keep this discussion short. OK, so I wanted to ask: for what sort of problems is it preferable to apply the cubature method compared to the alternatives? That is a hard question. You have to test it — that is all I am suggesting. In your experience? Well, wait. When we came up with the first methodology for semilinear PDEs, we did try to look at the various alternative methods that were out there.
And of course, we tried to show that it is better on the examples that we looked at. But maybe if you come up with different examples, it is not going to work better. For example, two weeks ago we learned of a new method that can be used to solve some semilinear PDEs in 100 dimensions, and then you have a whole lot of questions, and now you can try to see whether you can do this in 100 dimensions too. But there is no universal answer. This is a generic methodology which you can use, and then test whether it works or not for the problem of your choice. For example, in high dimension, does it work efficiently? Exactly — this is the question we face now. Up until the week before last, everybody was doing semilinear PDEs in dimension up to 7 or 10 or so, and of course we can do semilinear PDEs in dimension up to 7, 10, and so on. But two weeks ago we learned that you can do semilinear PDEs in dimension 100, and now this is the challenge: we have to try and see whether this is going to work in dimension 100. But the methodology is still there: you know what the cubature measure is going to be in dimension 100, you know how to control the computational effort, and then you just have to test to see whether it is going to work. OK. So I think we shouldn't be too optimistic about 100-dimensional PDEs yet. But let's thank Dan again. Yeah. Thanks.
The talk will have two parts: In the first part, I will go over some of the basic features of cubature methods for approximating solutions of classical SDEs and how they can be adapted to solve backward SDEs. In the second part, I will introduce some recent results on the use of cubature methods for approximating solutions of McKean-Vlasov SDEs.
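For readers who have only this abstract, a minimal statement of what a cubature formula on Wiener space is (in the sense of Lyons and Victoir) may help; this is a standard definition written out from memory, not quoted from the talk, so conventions may differ slightly from the speaker's.

```latex
% Degree-m cubature formula on Wiener space (standard definition, stated from memory).
Bounded-variation paths $\omega_1,\dots,\omega_n:[0,T]\to\mathbb{R}^{d+1}$ and weights
$\lambda_1,\dots,\lambda_n>0$ with $\sum_j\lambda_j=1$ define a cubature formula of degree $m$ if,
for every word $(i_1,\dots,i_k)\in\{0,1,\dots,d\}^k$ of degree at most $m$
(each index $0$ counting twice),
\[
  \mathbb{E}\Big[\int_{0<t_1<\dots<t_k<T}\circ\,\mathrm{d}B^{i_1}_{t_1}\cdots\circ\,\mathrm{d}B^{i_k}_{t_k}\Big]
  \;=\;\sum_{j=1}^{n}\lambda_j\int_{0<t_1<\dots<t_k<T}\mathrm{d}\omega^{i_1}_{j}(t_1)\cdots\mathrm{d}\omega^{i_k}_{j}(t_k),
\]
with the convention $B^{0}_t = t$ and $\omega^{0}_j(t) = t$.
```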
10.5446/57388 (DOI)
Thanks to the organizers for the invitation. It's my pleasure to be here and to have the opportunity to give this presentation. I'm actually more an expert in kinetic theory and recently came into game theory because of Mean Field Games, and there seems to be a fantastic playground there. This is joint work with Jian-Guo Liu from Duke University and Christian Ringhofer from Arizona State. So I've been discussing a lot with some friends who started from the physics side of applied mathematics and then were hired in an economics university, and so, sort of to show there that they were good economics people, they were rejecting completely the mechanistic viewpoint and taking the rational agent viewpoint. They were saying that for social sciences, describing agents as particles submitted to forces and so on is not correct, that people are rational agents and they make decisions based on a utility function that tries to optimize something. And so he was criticizing, I'm not going to say who he is, he was criticizing a lot the new fields of social physics or econophysics, where people apply physics-based ideas to systems of social dynamics and economics. My feeling is that at this point that was a kind of a matter of taste, and it turns out that when we tried to think about it, we could actually propose a way to reconcile these two viewpoints and try to show that it's the same thing, it depends on the way you look at it. So this is what I've said here. There are two viewpoints about social or biological agents: either you can look at them as mechanical particles subjected to forces. A typical example is pedestrians. The first model for pedestrian dynamics is due to a physicist called Dirk Helbing and the model is called the social force model. This is exactly what it says: pedestrians are subjected to forces, repulsion forces when they meet an obstacle, attraction forces when they want to see what's happening there, and so on and so forth. And then the second viewpoint is to view the agents as rational agents trying to optimize a goal. Again, in pedestrian dynamics there is now a trend to actually use concepts of game theory to describe pedestrian dynamics. As I said, our goal was to try to reconcile these viewpoints and show that kinetic theory can deal with rational agents, and vice versa, that looking at rational agents is somehow not so different from looking at, say, forces and particles. One outcome of this is that then you can try to sort of cross-fertilize the two domains. In particular in economics: economics is mostly a theory of equilibrium, so demand and supply and so on. There is of course more and more time dynamics in economics, but still, for most of economics the pillar is equilibrium. And so it's somehow a little bit too narrow in some sense, and the idea of using kinetic theory methods in economics is an idea that allows us to incorporate, in some sense, time dynamics into the theory. So these are very general ideas. In fact we studied this viewpoint, as I said, by looking at studies on pedestrians, so this work by Dirk Helbing and more recently a model by Moussaïd, Helbing and Theraulaz, who proposed another model based more on the rational agent kind of approach, which we have used to elaborate our theory. We have looked at social herding behavior and some economics, and I'll try to go through that briefly at the end. So that's the motivation.
So let's try to, well, the first thing I'd like to show is that in some sense a Nash equilibrium and a kinetic equilibrium, or a thermodynamic equilibrium for those who are familiar with statistical physics, are somehow similar, right? Are actually equivalent, isomorphic. So that's the idea. Let me try to propose a setup. First I'm going to look at the discrete setting and then I'm going to go to the continuum setting, so to a sort of mean field limit of this discrete setting. I'm considering n players, numbered 1 to n. Each player has a strategy yj in a strategy space Y. Suppose that all the strategies are in the same space for simplicity, and each player has a cost function for when it plays strategy yj in the presence of the others, who are playing the strategies yj hat. Actually, in game theory this vector of all the strategies except the one that you consider is usually called y minus j, but I call it yj hat. So the cost function is a function of your own strategy and the strategies of the others. And of course game theory is about minimizing your cost function. I'm actually speaking of a cost function rather than a utility because I prefer minimizing rather than maximizing, but it's the same of course. So I'm trying to minimize my cost function by playing on my own strategy variables, not touching the other players' strategy variables. And I achieve a Nash equilibrium when I find an n-tuple of strategies such that each of these strategies gives a minimum of the corresponding cost function when I do not touch the other strategies. So this is a Nash equilibrium. As you know, subject to conditions there is at least one Nash equilibrium; as Yves said previously, the existence of a Nash equilibrium is a topological fixed point theorem. And we will see how the concept of a fixed point will arise later when we speak of thermodynamic equilibrium. Okay, so here we wanted to introduce time dynamics. So instead of trying to instantaneously minimize the cost function, we imagine that the players follow a strategy in time, which is essentially to move a continuous-time variable corresponding to their strategy along a gradient descent of their cost function phi j, the gradient with respect to yj of phi j with the other variables fixed. This is called the best reply strategy, and we also wanted to add noise to account for uncertainties. So basically each player, I think it's called idiosyncratic noise, each player has its own source of uncertainty. It's not a common noise, which turns out to be more complicated; each of the noises here is independent of the others. So then we get this stochastic differential equation, and when going to an infinite number of players, I'm assuming that each player has a vanishingly small influence on the whole set of players, so that I can describe the agents only by their probability distribution in the strategy space. So I consider that the game is anonymous, in the sense that I do not need to know who is playing what; what I need to know is only how many players play the strategy y.
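To fix notation for what was just described (the equations themselves are on the speaker's slides rather than in the transcript), the finite-player setup and the noisy best-reply dynamics can be summarized as follows; the factor in front of the noise is a common normalization choice, assumed here so that the diffusion constant matches the d appearing later in the Fokker-Planck equation.

```latex
% N-player game and noisy best-reply dynamics (summary of the verbal description;
% the sqrt(2d) normalization of the noise is an assumption).
\[
  \text{Nash equilibrium: }\quad
  \varphi_j\big(\bar y_j, \hat{\bar y}_j\big) \;\le\; \varphi_j\big(z, \hat{\bar y}_j\big)
  \qquad \forall\, z \in \mathcal{Y},\ \ j = 1,\dots,N,
\]
\[
  \text{Best reply with idiosyncratic noise: }\quad
  \mathrm{d}Y^j_t \;=\; -\,\nabla_{y_j}\varphi_j\big(Y^j_t, \hat Y^j_t\big)\,\mathrm{d}t
  \;+\; \sqrt{2d}\;\mathrm{d}B^j_t,
\]
with independent Brownian motions $B^1,\dots,B^N$.
```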
Then I assume that it's a continuum of players, and that means that this probability distribution is actually absolutely continuous with respect to, say, the Lebesgue measure on the strategy space Y. I suppose that Y now is a subset, an interval of R or an open set of Rd. And the game is non-atomic, in the sense that the distribution is absolutely continuous, and also in the sense that now the cost function is a function of your own strategy and of the strategies of the other players, but the strategies of the other players are encoded into the distribution of strategies. And I do not have to remove my own strategy, because my own strategy is of zero measure, say, in the set of strategies, so it doesn't count whether I remove it or not. So essentially my cost function is a function of my own strategy and of the distribution of strategies. That's the continuum of players, and that's a general framework which actually already has some big history behind it, with works by Aumann, Mas-Colell, Schmeidler, Shapley and others; a good collection of these people are Nobel prize winners in economics. And it is in some sense related to the concept of mean field games that has been put forward by Lasry and Lions, and also by Malhamé and co-workers independently of Lasry and Lions. Okay, so in this framework, what is a Nash equilibrium? Basically it's a Nash equilibrium if you find a distribution of strategies such that the cost function is constant on the support of this distribution. That means that on the support of this distribution I have no gain in changing my strategy to something else. So this basically means that the cost function is constant on the support of the strategy distribution, and away from the support the cost function is higher, meaning that I have no gain in going outside the support of it. So that's the definition of a Nash equilibrium for a continuum of players. It turns out it's equivalent to what's called the mean field equation, which is basically saying that this quantity, with f in here, minimizes this quantity where you have f here but you keep f in here. So this is the Nash equilibrium. Now if I go to the best reply strategy in this continuum setting, then basically I'm going to apply the Itô formula to this stochastic differential equation, so I'm going to get a Fokker-Planck equation in the space y with mean field phi. This is how I'm going to get these equations. So now the strategy distribution f of y depends on t and evolves in time according to this: this is the best reply strategy, so I'm going to move along the steepest descent direction of the cost function phi f. This is a motion in y space, so this is just a first-order partial differential equation which translates the transport of f along these trajectories, and then the noise term adds up into this diffusion here. And phi f is a shorthand notation for the cost function as a function of y, where f is the parameter inside the cost function. So you see that here you have a nonlinearity, because phi depends on f and f is the unknown itself. I'm not going to discuss the existence of solutions of this; actually, if you cook up the right conditions on phi then you get existence of solutions and so on, but this is not really the subject here. So here we have a
Fokker-Planck equation and so it's something that's familiar in kinetic theory so in in say statistical physics where you have say particles so suppose that why now is space or velocity and particles are so material particles and so that's the distribution of these particles and they evolve in space according to that's called the this this this this combination of these two operators is called the collision operator so usually you have two antagonist effects so there's a noise which sort of sort of spreads the particles but then there is the so-called interaction term that sort of clusters the particles together and then there is a kind of a equilibrium between these two phenomena that produces some some some structures some some structures so for instance we will see some examples later and so when you collect these two operators then you get what's called the collision operator in kinetic theory that describes how particles interact with each other right so you see that already you know we can you can you start to see an analogy between these you know game theory concepts and and mechanistic concepts so now classical object in kinetic theory is to look at the equilibria of this equation so basically as the zeros of this collision operator q and these are called the kinetic equilibria or also thermodynamic equilibria let's call it in kinetic equilibria ke and they are very easy to to find right because this is the you can write it as a divergence of grad y phi f plus d grad y f right so if you maybe I could I could just do the the simple computation for you here so you have q of f that you can write as the divergence in y of the grad of y phi f times f plus d grad y f so if I put f in factor I can write it like this right and so if I'm if I'm looking at this this is nothing but the gradient of phi f plus d times the log of f right and if you were saying that this quantity is zero that means that phi f plus d log f is a constant right so that means that f so if this if this is zero that implies that f is going to be minus minus sorry sorry exponential minus the phi f divided by the noise and you have a constant right which appears in front and since you have a probability then the constant is adjusted so that you get a probability right so you get that naturally the natural the natural equilibria are what we call Gibbs measures so there's just the exponential of minus the potential divided by the noise and normalize in such a way that you get a probability and they are the only thing is that phi depends on f right phi depends on f so in fact here phi depends on f and this is f so when you will recompute the phi from this guy you should find this one so you have a fixed point problem right because you need that the phi constructed on this guy matches this one in order to get an equilibrium right so in general using this trick you can write the collision operator in fact using you can rewrite this collision operator using this in this form the divergence of this Gibbs equilibrium associated with a potential phi f times the gradient of f over mf right so it's easy to see that if you compute this quantity then you get something which has a sign using Green's formula so that you see that if q is equal to zero that has to be equal to zero since this guy is as a sign that means that the gradient of f over mf has to be zero so f has to be proportional to m phi f right so this is a proof of course here you could argue that okay I have said that this is equal to zero but it's not exactly this has to 
be zero it's the divergence of this is to be zero but it sounds out that provided you put the right regularity of spaces then it's the same and this is basically the regularity that you have to put is the one that allows you to apply Green's formula here and then you get that the only equilibria are those which are the Gibbs equilibria provided that if I take the f I construct the phi I take the Gibbs equilibrium I recover the f so here you see I have a fixed point equation in f and you can express this fixed point equation in phi just by saying that the potential phi has to be the one that's originate from the Gibbs equilibrium on phi so you have a fixed point problem and remember finding a Nash equilibrium is also a fixed point problem so it's not very surprising that at some point we find the same the same thing right so the equilibrium for the kinetic kinetic equilibrium is a Gibbs distribution that satisfies this right so now the question is now what's the relation between the Nash equilibrium and the Gibbs equilibrium that I've just shown so in order to find this relation I'm going to modify a little bit the game so instead of taking the game with the cost function phi f I'm going to add the contribution of to the to the cost function which is log f that's penalization that corresponds to the entropy okay so in some sense I'm going to sort of penalize the I'm going to penalize the just too strongly ordered too strongly ordered states right so somehow and so if I if I increase the d which is means increasing the noise I mean I'm going to look for a game where I put more and more cost into the into the noise right so this time I'm changing a little bit the noise and I introduce the cost and I'm going to introduce this new cost function new which is phi plus d log f so now I'm so I'm just assuming very very weak regularity on phi supposing that phi is just continuous continuous function for any f actually I could even just suppose that it's locally finite for any f that's would be enough then what I'm saying is that if I take a kinetic equilibrium associated with this collision operator q is going to be a Nash equilibrium associated to the game with this cost function mu and vice versa right so it's very easy to prove actually it looks like it's a very very shallow it was very shallow theorem so suppose that I have a kinetic equilibrium so I have a function f which satisfies that it is the Gibbs equilibrium associated with a potential phi associated with itself suppose I have a guy like this right okay so I can write this Gibbs distribution like this this is the definition right and since my potential phi is a continuous function right it's locally finite so these guys strictly positive right so that means that the support of so that means the support of this is is the whole range of y say for instance the real line right now if I take the cost function associated with this guy in fact if I take phi plus d the log of this guy so if I take the log I get a minus phi which is going to cancel this phi and the only thing that remains is the log of z which is a constant so you see I get that the cost function is a constant for all y and all y is also the support of this distribution right so that means that this guy is a Nash equilibrium for the game with cost function mu f because it matches this definition here here it really matches this definition it is equal to this cost function is a constant for all the support of the distribution and the support of the distribution is everything 
because it's it's it's always strictly positive right so it is a Nash equilibrium and now vice versa suppose I have a Nash equilibrium so it's it's so it's going to be so the first thing is that I notice that the this this Nash equilibrium has to be positive everywhere because if it was zero somewhere right then its cost function would be minus infinity because the cost function is phi plus d log f so if f would be zero somewhere this would be minus infinity and mu would be minus infinity right phi is finite everywhere okay so the mu would be minus infinity and since it's a Nash equilibrium the mu would be minus infinity everywhere and since mu is minus infinity everywhere that means that f is equal to zero and if f is the constant is minus infinity f is equal to zero which is a contradiction because we need that the integral is one because we have a probability so that means that half has to be strictly positive everywhere right so mu f now is a constant which is positive so if mu f is a constant then you go back to the definition if mu f is a constant then you get that this is a likely this computation that f is e to the minus phi f over d so is of the form of a Gibbs distribution right okay so that implies that f is a kinetic equilibrium so we really have identity isomorphism in the okay here the important thing is that is the noise if I didn't put the noise I would be in trouble because this is precisely what allows me to really match the two the two things right so in some sense the noise would would sort of get rid of all the singular cases I would say right okay so then there is a special case which is of interest which we have seen I think in eaves talk is the case of a potential gain so you could imagine that the cost function is derivative of a functional u that's called the potential gain so it's a functional derivative you have a functional u you have a you take the functional derivative of u with respect to f that gives you phi f and then you can define what's called a free energy so you take this u and you add the entropy contribution f log f and then it turns out that you can compute the mu f the the new cost function here as the functional derivative of this free energy with respect to f right so that means that you can write the collision operator as which is in general written like this right this is essentially the same computation since now the mu is a functional derivative of f with respect to f you can write it like this and this is typical form of a gradient flow in in the Wasserstein metric so essentially that means that if you're computing the time derivative of the free energy along the trajectories of this by just a simple gain integration by parts you get that's a minus this quantity which has a sign which is the free energy dissipation right so basically you have a dissipation of the free energy and you have that if you have a Nash equilibrium which is a case where the mu is a is a constant so the gradient of mu is equal to zero that corresponds to a case where this guy is zero so that corresponds to a critical point of the of the free energy subject to the constraint and integral of f d y but of course the critical point doesn't mean a minimum okay so you may have Nash equilibrium Nash equilibria that are not minima but met but but but higher higher minima or or even saddle points and things like that but okay this gives interesting perspective of course is a very special case because this is a gradient system so you see that's very stringent condition in 
infinite dimension it has almost no chance to happen right but that's it's still a quite important practice and yeah so in basically we can you know we can relate in this case we can relate gradient flows to to Nash equilibrium okay so so that's basically the first part I don't know if there are questions yeah it's like you make an assumption that the variation of the strategy must be equal to the variation of the cost function right so that's the best strategy assumption yes so that's absolutely that's so I assume I assume that the I mean it's it's rather natural you know you have so you have a set of possibilities you can you evaluate it in terms of you know your utility or your cost and then you are going to go in the direction where you maximize the the game so it's in some sense so it's a very it's a very commonly maybe a coefficient for adjusting yes yes absolutely yeah that you can put it in the 5j because you know it's yeah in fact you have a you have a reactivity of the the reactivity of the of the of the agents to the yeah so that you could but you could you know you could probably also put it into the the file and here we don't worry about any state variable in fact I will come back to that later okay thank you right it's next next any other question yeah since we have questions in this equation does how does this connect to the dynamics of the replicator can you that's a good question I've never looked at that in detail but yes you it's a good it's a it's would be a good so so so in some sense there is a little bit more complicated because the states of the replicator are a little bit more than Nash equilibria these are these as evolution yes yes yeah evolutionary state stable states we have we have in mind actually in the future to look into greater details in this connection but I can't answer right now can I ask you yeah you kind of passed to the limit continue limits can you kind of go back now and perhaps get discrete systems from the continuous case yeah so so we haven't made any kind of a rigorous study about the limit for large number of particles of this but there is a companion work by Blanche and Kali I think they have looked at a specific special case of of a of a potential game the related to I think the allocation of people in cities and and they have they have looked at this question in detail and it works of course in the general case you need to make some assumptions it is very general so you have to make some assumptions on the phi and so what you know the regularity and so in this particular case yes they have looked at this and it works yes it's and of course then you have the symmetric thing is that when you have the the continuum model you could think of discretizing it using kind of particles and so recovering a kind of a discrete discrete system I think that that's going to work provided you you specify correctly the the assumptions okay so now we wanted to we wanted to sort of build on the our expertise on on kinetic theory to actually provide a kind of a sort of a new perspective to gain theory and so this the idea of of hydrodynamics so the idea in kinetic theory so say for instance gas dynamics what you have is you have a very complex systems with many molecules you can describe them by distribution function which tells you how many particles in space and velocity you have but at some point you'd like to reduce the description into just average quantities like densities velocities and so on and so you do a model reduction which is based on the fact 
that you have different scales in the system and so there are fast scales which are related to the molecular interactions between the particles and these fast scales actually equilibrate very fast the distribution to an equilibrium which is in our case will be a Nash equilibrium but this Nash equilibrium or this thermodynamic equilibrium still depends on slow variables in the in the case of gas dynamics it's the density the local mean velocity the local temperature and since these quantities are not say spatial spatially homogeneous they have some gradients and so on then this drives a slow evolution of the system and that's precisely what's called the hydrodynamics and this is how you get the equations for gas dynamics equations for instance like the Euler equations of the Navier-Stokes equations and so we thought that maybe if we imagine that we have agents interacting so there are some fast interactions that are described by these strategy variables and then there are say slow variables that could be spatial variables or that could be other variables indicated indicating for instance a social status or something that would sort of more slowly evolve according to you know some spatially homogenities of the of these Nash equilibrium so we wanted to sort of try to put this in some framework and this is what we did here so we assume that now we have another variable which called sometimes it's called a tie variable so we call it space suppose for instance that y is the wealth of the agents and they exchange wealth during you know economic exchanges and x is just their position so you have different wealth distribution in France you have a certain wealth distribution in in US you have another one and maybe in Chile and in China and you know in whatever Russia you have different wealth distributions of course then because of these differences of wealth and that triggers some dynamics for instance you have people migrating because you know they want to go to a more wealthy country or you have money exchanges because immigrants working in a wealthy country going to send back money for their families but these are more slow exchanges right okay so when you when you go shopping you do it daily when you send money to your family you maybe send it every month or something like that right so so you have different scales than when we wanted to actually try to apply this strategy to try to see how we could look at the evolution the slow evolution of these equilibria over time right so this is the idea so why is this still the strategy variable and x is a kind of a space variable it's can be physical space but it could be also social status you would like to write x as being like you know low medium or high class and you know how you move along the the scale or things other things like that right or education level for instance things like that right so and we assume that each agent is able to move in space according to a certain law which depends on its own position and its own strategy it could actually also depend on the other strategies and all the other positions we do not we didn't take it into account here but it could be easy to do it and again the evolution of the strategy is the same as before except that the cost function now also depends on the whole list of positions of all the other agents right so this basically now is a coupled system of differential equation and SD and OD here we could have added also some diffusion alright as well we didn't do it but we could have done I mean it's not 
really important what you do here you will recover the thing at the end so doing it at the kinetic level and the case of a continuum of players you have a function of position and strategy and time and now you have that the evolution in space gives rise to this additional operator divergence of Vx and yf and this is the operator we had before that described the evolution in strategy and again now phi f is a cost function it depends on x and y and on the distribution of all the other agents right and so the goal is what we'd like to actually provide a model reduction assuming that you have a fast you have a fast dynamics for these variables and you sort of reach a Nash equilibrium for this operator and then these drift time here will drive a slow evolution of the system in space and so this drift time will be this drift in space will be actually characterized by the slow evolution of some moments of f such as the mean density so if I integrate all over all strategy variables for particles at a given point or say for instance here the the mean strategy variable so that's the density of agents and that's the mean so the the Upsilon f here is the mean strategy of the agents at the given location x that's these are examples of the variables we would like to monitor in the hydrodynamic equations yeah so I have put some slides about mean field games to show the different connections and differences I'm not going to go into the details of this because there has been many talks about mean field games already just let me let me point out that in a recent work we were able to actually show that there is a connection between mean field games in our approach so the idea of mean field games is that you are going to do a control of a whole trajectory whereas this best reply strategy means you are making a you're not making a really control you're controlling over a very short interval of time so what you what we prove is that if we take the mean field game of of last year and Lyons and we chop the control into a small interval of time then we recover our our our framework right so so obviously if we compare the optimality of a mean field game to what we do we are suboptimal because you know if you if you make successive optimization of a small interval of time it's you're not going to get the global minimum as if you do the optimization of the whole interval of the whole time horizon however it turns out that for for our view viewpoint for perspective in some cases it makes more sense so if you think of pedestrians moving in the street they are not going to optimize the trajectory over you know 60 minutes it's not possible right so it's more like you know a really local control right so so this is then a matter of which kind of model you you are intending to do right okay so here going back to this question of of reducing the the model we are going to implement this idea that the dynamics in y is much faster than the dynamics in x so that's the main hypothesis there is a scale separation hypothesis the variation of strategy y is much faster than this that of the type x so we're going to reach a fast equilibration of strategy through to a local Nash equilibrium and then leading to a slow evolution in space and to to implement this scale separation we need to introduce a small premier that describes the basically the ratio of the time scale so epsilon is the time scale of the strategy interactions say for instance what the exchange is between the agents when you regard it into unit of time which is the 
global evolution of the system a year for instance right so it's very small and so that's the scale separation in time and we also need a scale separation in space in the sense that we assume that most of the exchanges that occur in strategy occur between people that are located in the same area right so we are assuming that the cost function at leading order so there may be a small remainder that takes into account non-local exchanges but at leading order is going to be a function of the local value of the density of particles at x and the not the total f but the conditional distribution f condition on the fact that you are at location x so it's essentially only the distribution the strategy distribution for particles which are at the location x right so we are only taking agents we love we only exchanging you know wealth with agents which are really near to us right when we go shopping so it's not entirely true because of course when you buy German car or whatever Chinese computer then you're doing non-local exchanges but in some sense you are doing you're buying cars much less often than bread right so you know it's maybe like not not very often right so of course you can dispute these kind of hypothesis but this is the frame we decided to work so essentially the cost function is totally local in x here so at leading order and so we end up with an equation that we can write like so we divide in my epsilon we have here the slow slow exchanges that are in space and the fast exchange is in in strategy variable y and this operator now because the because the the cost function depends only locally on on x it's it's it's actually an operator that operates only on the y variable right x is and t is only a parameter now here and so you can actually look at the zeros of this operator and that's going to give you Nash equilibrium exactly in the same k as in the same framework as we did before what where of course you will have so new is again the condition on distribution of f condition on the position x so you're giving by assuming that you instantaneously when epsilon goes to zero when epsilon goes to zero you would need to to stay on the manifold of equilibrium q of f equals zero so you need actually that f is proportional with a density row to an a Nash equilibrium okay so a fixed point of this equation right so of course to solve this equation you need to specify what the shape of phi is I'm going to give an example in a minute just bear with me for a second so far that we can solve this right so what can we do we need to now to know so you see here for instance we have this quantity row of x t which is not specified by our Nash equilibrium solution so we need to find an equation for row right but the equation for row is readily is readily found because if you integrate with respect to y you average the equation out with respect to the strategy variable q is a divergence in y so when you integrate it goes away and so you're going to get rid when you integrate in y you're going to get rid of this singular term and so this is how you get the continuity equation so if you integrate with respect to y you get this equation here so row is the same density as here so this local density right and now the flux of the row the flux in space is given by row u and u is the average of the particle velocity v of x y this this function which is here averaged over the Nash equilibrium right so if you know locally the Nash equilibrium you can average this guy and you know the local velocity of the agents and 
then you get the way the density evolves. But the problem is that maybe, and in practice this is what happens, the Nash equilibria are not uniquely specified by this equation. So there might actually not be only a unique solution of these equations; there may actually be families of solutions, families parameterized by something, and so you also need to find which object in this family is picked up by the limit of the problem. In order to examine this question I cannot do it in full generality; I need to specify a certain dynamics, a certain cost function. So I'm now going to give you an example of what can be done beyond finding just the continuity equation here, and that example is one that we took from a model of wealth distribution. It goes like this: it was a model which was first proposed by Bouchaud and Mézard and then further studied by Cordier, Pareschi and Toscani, and by Düring and Toscani. So again we start with the same equations, and what we are going to do is specify the cost function. So we specify the cost function: nu is the conditional distribution, conditioned on the fact that the particle is at position x, and we assume that the cost function is this integral here. In practice that means that the strength of the exchanges between the agents, so agents exchange wealth, and the amount of wealth exchanged is proportional to the square of the difference of their wealth. Don't ask me to justify this model; these are very famous people, and it's there. Okay, so the rule is this. It turns out that you can write this as just the difference between the local value of y and the average of y over nu, which I call Upsilon nu; so it's just this difference, y minus Upsilon nu squared, with a certain rate kappa. The other modification we make is that we are using a geometric Brownian motion, in order to take into account the finance context and keep the y variable positive. So this is not standard Brownian motion but geometric Brownian motion. So basically this is the same equation, and the thing that you can notice on this equation is that not only when you integrate with respect to y do you get zero, because of the derivative in y here, but also when you pre-multiply this q of f by y and integrate over y you get zero; this is due to the special shape of the interaction here. What that means is that the mean wealth is preserved during the interactions. So in fact the total wealth, and consequently the mean wealth, because the total number of agents is conserved, so the total wealth and the mean wealth are conserved during the interaction. This is called a conservative economy: each time somebody loses something, someone gets the thing that the other has lost; there is no money that's lost into thin air, like sometimes happens when you have a crisis. So it's not very realistic, but it's very nice because it allows you to simplify things. And so you see that this identity actually allows you to get another conservation, not only the mass but actually the conservation of the mean wealth, and that's going to be very important, because when you are now looking at the Nash equilibria of this game, what you find is
functions that are basically related to the gamma distribution or the inverse gamma distribution so it's basically something that has a Pareto tale so when when y goes to infinity that decays like a parallel right so it's basically a Pareto distribution and that's the one that's provided by the equilibrium of this of this game right and and of course you can the the the cost for the for the game associated with this Nash equilibria is given by this so compared to the previous formula it's a little bit more complicated because the Brownian or the geometric Brownian motion but basically it's the same idea so again you see that the cost essentially is proportional to the mean wealth here so that's the formula for the cost okay so when you do that you see that this distribution has two there are two parameters there's a mean then the density in front of so for any location you have the number of agents that you have to know but you have also to know this parameter y or epsilon which is the mean wealth at the prescribed location right so you have two parameters to determine row and epsilon okay and so you will need two equation in space and time to determine how these two parameters row and epsilon evolve right and so how do you going to get that you're going to get that just by integrating this equation over why that's going to give you the equation for row and pre multiply by y and integrate either with respect to y using this cancellation property and this is going to give you the equation for the mean wealth and so this is what you what you do here and so what you get is that your system the limit when epsilon goes to zero the system is completely characterized by the fact that the limit distribution is this fat Pareto tail distribution with mean density row and mean wealth epsilon that's evolved according to this set of equations and the spatial variation here is monitored by user and you want which are again averages of this function that tells you how agents move in space average over the Nash equilibrium here and for the density is just via average over the Nash equilibrium and for the mean wealth is just V average over that Nash equilibrium multiplied by y right so you get here an equation that's completely specifies is the slow evolution of the say the mean quantities of the of the of the population alright so now to go a little bit further we would like to get rid of this kind of unphysical assumption that trading preserves wealth right so it's not really true we know or know that during trading you may have you know loss net loss or net gain of total wealth right so we cooked up a model which is inspired by the previous model but which doesn't need to satisfy the total wealth conservation so essentially we we took the model by Bush and Mesa but we just modified the we take a cost function which is quadratic in y in the in the in the wealth but now the coefficients are a little bit different right so essentially you have the two important coefficients this is a constant which is in the central you you have this a and the b here so the a is this guy so what is now epsilon 2 is actually related to the variance of the wealth so the variance is actually epsilon 2 minus epsilon 1 squared okay and and so you see that this variance you know when this variance is is small then a is large which means that trading is fast and when the variance is large then trading is slow so it's a kind of a risk aversion strategy where players do not play when the market is a very uncertain when the variance 
is large whereas the they play fast when the market market is certain right and the second one is actually cooked up in in such a way that what I'm going to show next is true okay so this is not you know it's but essentially this is so the important thing is that in this case you don't have a conservation of of total mean wealth right so you can show that you have the same kind of equilibria in those gamma distribution again you have the three parameters which is a y epsilon which is the mean of the mean wealth that you need to determine and now the problem is how are going to to get an equation for the mean wealth because now you don't have this previous trick that you could integrate the equation against why use the total conservation of wealth to get rid of this guy and get an equation okay so I'm not going to to tell you the the story because it's a long story but it turns out that relying on previous work we did on on on on on flocking on flocking models we can actually find a way to bypass this this this problem and find what we call generalized collision invariance because these functions the y that I multiply against the q to get zero in kinetic theory is called a collision invariant but we don't have enough collision invariant here but we could find the kind of a surrogate collision invariant which called the generalized collision invariant here is called generalized because in this case the quantity that satisfies with its conserved that's that cancelled the q when you integrate against it is this guy and you see that depends on the distribution itself so it's not really true collision invariant because it depends on on the distribution itself but in any case when you multiply the equation by this guy the q cancels and this is how you get the equation for the for the mean wealth so in the end you you can you can you you you cancel the the q so you get rid of the singular term and you get an equation that you can express you have to to do some computations in the end you get an equation which is in this form so again you have an equation for the density row an equation for the mean wealth but not that now it's not a conservative equation it's not a dx of something because you don't have total wealth conservation but still you have you are able to compute all the terms and you are able to determine the slow evolution of the parameters of the equilibria of the Nash equilibria the row and the and the obsidian right so that's basically the the story I wanted to tell so the example I've shown here is maybe not really very solid in terms of application in terms of economics but we are trying to we're working with somebody from the business school in imperial to actually apply these kind of concepts to essentially first strategies of firms in front of climate change so this is a big pro program and we have it going to have a PhD student working on this so more story later but I wanted to explain a little bit the kind of a framework right so that's basically what I wanted to to say thank you very much so I just wanted to ask earlier you mentioned what you're solving isn't exactly a mean field problem it's a short-term mean field problem and as a right I was wondering can you comment on how it compares to the solution of a mean field problem as you know the length of time increases that's a that's a that's a difficult question right because in general what you what you can show about convergence of mean field models is for finite times right so for finite times so so in some sense when you 
increase the time horizon somehow you have either to increase the number of particles more and more or you know to make the perturbation parameters smaller and smaller so you don't in general you don't have uniformity with respect to this so which means that if you're taking a given system and compare it with a mean field then you were you you expect a drift in time and in in the end maybe maybe the equilibria will be different it's it's it's it's not I mean the situation may not be as bad as I think it's really depending on depending on case case by case but this is a very difficult question and people are working on that and it's not solved yet yeah that's it's technical question for the slide 25 the first equation in fact when you deal with what you call the collision operator when you introduce the collision operator indexed by it's applicator to F epsilon I didn't understand this when you divide by epsilon I didn't well in this contest so because previously so when you do so you you make a change of time scales time and space scales so basically what you're looking at so assume that you're looking at you know worldwide problem right by the agents are interacting you know in the city right so you have a very different scale of your system and on the scale where the agent interact so this ratio between the local scale and the global scale I call it epsilon so when I write the the equations when I wrote the equation previously like here for instance basically since all the terms are say order one that means that basically I'm on I'm looking at a scale which is the scale of the agents right or the order of the interaction of the agents but now I want to place myself on the global scale so I'm going to change scale I'm going to change x to epsilon x and t to epsilon t because I also want to look at the large time scale and that brings an epsilon in front of the DTF and an epsilon in front of the grad XF right so this is the reason why you get this epsilon here so that's the scale separation in time and space and also in order to be able to to say something I have also to to assume that the dynamics itself is going to favor this scale separation otherwise it's not going to be true and so to to enforce that I'm assuming that the cost function is at leading order a cost function depending only on the value of the distribution at the given location x plus maybe some non-local terms that which I assume small right so this assumption actually is consistent with this change of scale right which means that when I let epsilon go to zero I can first look at the equilibria of this guy these the equilibria of this guy depend on some parameters unknown and these parameters the row and the epsilon are going to evolve on the slow time scale the t and x according to what this operator is going to tell me right so that's how it how it goes so this is why in the end the equation for row and epsilon involved the moments of this guy so here you have reached an equilibrium so only what matters is only the Nash equilibrium but now here you are not reaching equilibrium so you're still evolving so the dynamics will involve the some averages of V over the Nash equilibrium of this guy okay more questions all right so let's thank you again
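The questions above compare a finite-agent system with its mean field description. As a complement, here is a minimal numerical sketch of that comparison for the noisy best-reply dynamics, using an entirely made-up quadratic cost and made-up parameter values (nothing here comes from the talk): with cost phi(y) = kappa/2*(y - mean of others)^2 + alpha/2*y^2, the kinetic (Gibbs/Nash) equilibrium is a Gaussian with mean 0 and variance d/(kappa+alpha), and the particle simulation should reproduce it.

```python
# Hedged sketch (hypothetical cost and parameters): noisy best-reply dynamics for N agents,
# simulated with Euler-Maruyama, compared against the Gibbs equilibrium of the mean-field limit.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 2000, 20.0, 1e-2        # number of agents, time horizon, time step
kappa, alpha, d = 1.0, 0.5, 0.2    # interaction strength, confinement, noise level

y = rng.normal(2.0, 1.0, size=N)   # initial strategies (e.g. wealth-like variable)
for _ in range(int(T / dt)):
    mean_y = y.mean()
    # gradient of the cost with respect to the own strategy
    # (the O(1/N) self-interaction correction is neglected)
    grad = kappa * (y - mean_y) + alpha * y
    y = y - grad * dt + np.sqrt(2.0 * d * dt) * rng.normal(size=N)

print("empirical mean     :", y.mean())
print("empirical variance :", y.var())
print("Gibbs prediction   : mean 0.0, variance", d / (kappa + alpha))
```

The point of the sketch is only to illustrate the relaxation of the particle system towards the Gibbs measure that plays the role of the local Nash equilibrium in the talk.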
We propose a mean field kinetic model for systems of rational agents interacting in a game theoretical framework. This model is inspired by non-cooperative anonymous games with a continuum of players and by Mean-Field Games. The large time behavior of the system is given by a macroscopic closure with a Nash equilibrium serving as the local thermodynamic equilibrium. Applications of the presented theory to social and economic models will be given.
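Restating the core equivalence described verbally in the talk in formulas (assembled from the speaker's description; Phi_f denotes the mean-field cost and d the noise level):

```latex
% Kinetic (Gibbs) equilibria versus Nash equilibria, as described in the talk.
\[
  Q(f) \;=\; \nabla_y\!\cdot\!\big(\nabla_y \Phi_f \, f + d\,\nabla_y f\big)
        \;=\; d\,\nabla_y\!\cdot\!\Big( M_{\Phi_f}\,\nabla_y \tfrac{f}{M_{\Phi_f}} \Big),
  \qquad
  M_{\Phi}(y) \;=\; \frac{e^{-\Phi(y)/d}}{\int_{\mathcal Y} e^{-\Phi(y')/d}\,\mathrm{d}y'} ,
\]
\[
  Q(f) = 0
  \;\Longleftrightarrow\;
  f = M_{\Phi_f}
  \;\Longleftrightarrow\;
  \mu_f := \Phi_f + d\,\log f \ \text{ is constant on } \operatorname{supp} f
  \quad (\text{a Nash equilibrium for the cost } \mu_f).
\]
```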
10.5446/57389 (DOI)
Alright, thank you very much. Thank you for your patience, and it's great to be here today. So my background is formally in reinforcement learning. So I think one of my jobs today is to sort of get you excited about some of the things I care about and tell you some of the challenges in that field. So let me start with, when I think about model free control, the kind of problems I have in mind are fairly wide ranging. It could be as simple as deciding what you want to eat for dinner, or it could be sequential decision making for clinical treatments, optimizing a power plant, optimizing an energy grid, of course playing Go. And so my special interest is in games, computer games, classic games. And the reason for this is because reinforcement learning has been really successful at playing games. And so some of the examples here: you have, from 1956, Samuel's checkers player, which was actually using a very, very clever form of reinforcement learning that was sort of reinvented later on, called TD Lambda. And then backgammon, of course, which was probably one of the first big successes of neural networks with reinforcement learning, by Gerry Tesauro in '92. And more recently, there's been a spate of interest with some high profile successes, in part from us at DeepMind. In 2015 we had a Nature paper on playing Atari games. And in 2016 we also had a paper on, of course, superhuman Go playing. So just to give you a sense, this is a PDF, so unfortunately you won't get the video. Just to give you a sense for why this Atari business matters, if you haven't heard about it before. So the Atari 2600, of course, was a computer, a video game console that was developed in the 70s, and came to be sort of a flagship of experimental work in reinforcement learning. We had a paper in 2013 that proposed the Atari 2600 as a platform for, effectively, reinforcement learning research, but also more generally for artificial intelligence research. And the reason for this is because it's a challenge domain. So what challenges does it pose? Well, first of all, what we said is we'd like to play these games, so develop controllers that can play these games using joystick motions, and really playing the game. And what that means is playing the game from pixels. So the agent observes a sequence of images, of frames, and each image is 210 pixels by 160 pixels. And the agent observes this at 60 frames per second. So the setting that I'm going to be in is discrete time, high dimensional. And the reward that the agent is getting at each time step is the change in score. So we're playing a game, so we want to maximize the sum of this kind of rewards. In this case, it's going to be trying to maximize the score of the game. All right. So to give you a sense for the kind of domain I'm usually looking into, it's going to be very high dimensional. In this case, there's 33,000 discrete dimensions. Each dimension is a pixel with 128 color values. And an episode is going to last up to 30 minutes. And if you do the math, that's going to be about 100,000 decision points. Now, the most interesting thing about Atari is the fact that there's not one game or two games we're trying to solve or get good controllers for, but there's over 60 of them. And what we're looking for is an algorithm, or an architecture, that's going to be able to play all of these games without changing the actual parameters, the hyperparameters of the algorithm. And each of these games is quite diverse.
I won't be talking much more about this today, but you can think of games ranging from Space Invaders to Pong to maze games to Pac-Man. So we really have to look for a learning algorithm that's robust to all this variation. On the other hand, we have the incredible success of deep learning. And that started, I think, now five years ago. Deep learning has been around for quite a while, but the success story started five years ago when Alex Krizhevsky and his co-authors showed that you could beat state-of-the-art classification on visual tasks simply by using deep learning. And what this talk is about is sort of talking about the basics of reinforcement learning and the basics of deep learning and asking the question, how do they combine? And so one thing that I really want to emphasize is that the kind of work I do is to look at theory and to look at practice and ask the question, how do we bridge the gap between these two? How can we come up with algorithms that are well motivated from a theoretical perspective, but are also going to pass muster when we take them to play Go or take them to play Atari or try to optimize a windmill farm, for example. Okay, so when I say model free control, I'm going to come back to that point a bit later, but I want to say for now, where has this kind of technique been successful? And there's a few sort of key ingredients that keep recurring. One of the things is often the dynamical systems are very complex. We don't know how to write them down. If I think about an Atari simulator, I don't have a model of that. All I can do is take the simulator and simulate it forward. And these samples are fairly expensive, right? So I can't generate millions of trajectories per minute. Another ingredient is often the state spaces are going to be quite high dimensional. If they're low dimensional, we know we have really good solutions that are going to scale much better or have much better guarantees, right? So 30,000 pixels for me is going to qualify as high dimensional. A third thing is often going to be long time horizons. From a discrete perspective, that just means we don't quite know where the decisions are happening, and we need to potentially make a decision, a different decision, at every time point. As an aside, I think this is actually a place where continuous time methods might shine, where instead of trying to deal with these long time horizons, you approach them from a different perspective, the continuous time perspective. And finally, surprisingly, maybe, one place where model free control has been so successful is all these domains where there's an implicit opponent. So these are not technically environments in the traditional reinforcement learning sense with a Markov process, because there's an opponent that's trying to typically minimize your payoff. But despite this, these model free methods tend to be really good in these circumstances. So Go is an example of this. Backgammon is an example of this. Poker is also an example of this. And why that is, I think we don't know, but maybe it has to do with the complexity of these problems. So suppose you want to use model free methods or reinforcement learning methods in your problem. What are some of the questions, some of the practical considerations you might have in mind? Well, first of all, you should ask the question: are the simulations reasonably cheap? Because I'm going to operate under the assumption we don't have a model.
And so if the simulations are extremely expensive, you might want to build a model instead and not use model-free methods. Secondly, is the notion of state complex? If it is, then we probably can't model the state easily, so we'll have to use some sort of approximation. Third, is there partial observability? That is, is the state not actually a Markov description of the world? If that's the case, we've had good success with model-free methods, even though they're not actually guaranteed to converge. But if we answer yes to the question "can the state be enumerated?", then maybe we should move away from model-free methods and into something like value iteration. And if there's an explicit model available, I certainly think we can do better than model-free methods. This is just to say that model-free methods are not the end-all of all solutions. So what am I going to talk about in this talk? It's going to be very simple: I'm going to go from the ideal case, the way we typically do the theory in reinforcement learning, and then move towards the practical case, which is normally the setting we have to consider when we deploy these algorithms on problems. Let me start with a quote by Sutton and Barto, from the canonical reinforcement learning book, Reinforcement Learning: An Introduction. If you ask the question "what is reinforcement learning?", the answer that's in the book is that all goals and purposes can be thought of as the maximization of some value function. So I'm going to talk a lot about the value function in this talk, because the reinforcement learning perspective involves a value function. Another perspective is to think of the problem as an agent interacting with an environment in discrete time steps. At every step, the agent takes an action, receives a reward, and observes a state; and then, conditioned on that state, the agent chooses what action to take next. So for me, what's very interesting about reinforcement learning is that it really combines three, and in fact four, learning problems in one. On one hand, we have optimal control, where we say we want to maximize value. But that's actually just the center of this puzzle. At the same time, we have to do policy evaluation, which I'll explain in a second. And, because we have this stream of experiences, we'll have to do stochastic approximation. And we have to solve these three learning problems at the same time as we often have to approximate the function. So this picture is going to come back in the talk; I'll come back to it. But this is the important piece to me today: we have all these interleaved problems, and we're trying to solve them all at the same time. Okay, so let me start with some background I'm sure most of you are going to be familiar with. When we think of this interaction with the environment, we're going to formalize it as a Markov decision process. So we have a finite, or countable, state space, a finite action space, a reward function, a transition kernel, and then finally a discount factor, gamma. And in this setting, a trajectory is going to be a sequence of interactions with the environment, starting in some starting state, or maybe some distribution over states, and then proceeding forward in time.
In this framework, a policy is a probability distribution over the actions that the agent takes at any given state, so it's conditioned on the current state. If the policy is deterministic, then I'm just going to write it as pi of x and drop the distribution. A transition function then gives us the next state; you see there's always an emphasis on sampling things, because that's the default mode of operation. With these pieces in mind, with a transition function and a policy, we can define a value function. The value function is the total discounted sum of rewards that we'll observe starting in state x and taking action a. It's denoted Q pi of x, a, and basically at every time step we transition through the world acting according to pi, transitioning according to p, and collecting these discounted rewards as we go along. We can think of this value function as a function mapping states and actions to real values, or we can think of it as a vector in the space of value functions. So when Richard Sutton says "let's maximize the value function", what we really mean is: let's find the policy pi which maximizes this Q pi of x, a, this value function, pointwise at every state and action. And this leads, as you well know, to Bellman's equation, which says that we can write the value Q pi as the sum of discounted rewards, or equivalently as the immediate reward plus the discounted value at the next state. So there's the policy evaluation version, which has a pi in the superscript, where we fix pi and now we just have this Markov chain going forward, and there's the optimality equation, where we take the max over all the actions at the next time step. In terms of the operator that this equation suggests: T pi is the policy evaluation operator, and it basically maps a Q function to another value function according to this formula over here. And the most important property of the Bellman operator is that it's a gamma contraction: from whatever point we start in the space, we're going to move closer to Q pi. What that means is that the value function Q pi is the fixed point of that same operator T pi. All right, so that's the policy evaluation operator, and then there's the optimality operator. Instead of taking the expectation at the next time step with respect to the policy pi (let me bring this up: that one samples the next state and the next action, x prime and a prime, according to p and pi), this one samples the next state x prime according to p, but then maximizes the Q value at the next time step. And it turns out that this operator is also a gamma contraction, although the proof is actually different; it's not a contraction for the same reason, but they both are contractions. What's great about this is that, because it's a contraction, it also has a fixed point, and that fixed point is Q star, the optimal value function. Okay, so so far things are fairly easy: we understand this world, everything works nicely. So if we think about turning these operators, these equations, into algorithms, the natural algorithms are, for example, value iteration, where we start from a Q zero, an initial value function, and we repeatedly apply the Bellman operator to these value functions.
That leads us to a sequence of Q functions, and because the operator is a gamma contraction, we know the limit of that sequence is going to be Q star or Q pi, depending on the operator; for value iteration, it's Q star. Another algorithm is what's called policy iteration, where instead of explicitly applying the operator T to our Q function, we separate the learning process into two steps. First, we look at our current Q function; that Q function is an estimate, if you will, of the total discounted reward we expect to see. We take that Q function and we ask: what's the policy that would greedily maximize the reward we would obtain? So I'm allowing myself to change my one-step decision, and at the next time steps I have to use the same policy. You can formalize this as looking at the argmax, over the set of all policies, of T pi Q k. So first we find the greedy policy, the policy that maximizes expected value, and then we evaluate this policy, basically by fixing pi k; now we can, for example, apply the T pi k operator repeatedly to compute its value function. These are the two standard methods for solving these problems if you have a model. There's a third one that's sometimes used, called optimistic policy iteration. In optimistic policy iteration, we start with a Q zero, look at its greedy policy, and do something in between value iteration and policy iteration: instead of applying the operator for the greedy policy exactly once, we apply it a number of times, m times, and then use the resulting value as our new Q value, Q k plus one. Value iteration is the case where m is equal to one, and in some sense policy iteration is the case where m goes to infinity. Okay. So just to go back to the diagram I was showing you earlier: one way to think about policy iteration is that we're trying to solve two learning problems, and what we do is neatly decouple them by interleaving them, doing one and then doing the other. The first problem is the problem of optimal control, where we look for the optimal policy by taking the argmax. And the second problem is to evaluate the resulting policy by running an inner loop where we, for example, use Monte Carlo simulations to compute these values, or, if we have the model, run the model. Okay. Any questions so far, by the way? Yes? Very good question. I would take gamma for granted; I would assume it's given to me as part of the specification of the MDP. Now, it's true that in practice it's going to depend on your time horizon, and we can get into this in a bit, and gamma is going to affect your approximation error as well, of course. But for now I'm just going to take it for granted. Do you have a suggestion, maybe, for how I should pick it? Okay. So what I want to say is that so far, this was the standard, what I would call model-based, reinforcement learning setting, where we assume we can take a state and an action and enumerate all of its successors, or sample its successors as much as we want.
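To make the two schemes above concrete before leaving the model-based setting, here is a minimal Python sketch of value iteration and policy iteration on a small tabular MDP. The transition probabilities and rewards are made up purely for illustration; only the structure of the updates matters.

    import numpy as np

    # A toy MDP with 3 states and 2 actions, purely illustrative.
    n_states, n_actions, gamma = 3, 2, 0.9
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[x, a, x']
    R = rng.uniform(size=(n_states, n_actions))                       # r(x, a)

    def bellman_optimality(Q):
        # (T Q)(x, a) = r(x, a) + gamma * E_{x'}[ max_a' Q(x', a') ]
        return R + gamma * P @ Q.max(axis=1)

    def bellman_policy(Q, pi):
        # (T^pi Q)(x, a) = r(x, a) + gamma * E_{x'}[ Q(x', pi(x')) ]
        return R + gamma * P @ Q[np.arange(n_states), pi]

    def value_iteration(n_iter=200):
        Q = np.zeros((n_states, n_actions))
        for _ in range(n_iter):          # gamma-contraction, so this converges to Q*
            Q = bellman_optimality(Q)
        return Q

    def policy_iteration(n_eval=200, n_outer=20):
        Q = np.zeros((n_states, n_actions))
        for _ in range(n_outer):
            pi = Q.argmax(axis=1)        # greedy policy w.r.t. the current Q
            for _ in range(n_eval):      # policy evaluation: iterate T^pi
                Q = bellman_policy(Q, pi)
        return Q

    print(value_iteration().max(axis=1))   # optimal state values
    print(policy_iteration().max(axis=1))  # should agree with value iteration

Optimistic policy iteration corresponds to cutting the inner loop short, that is, taking a small n_eval instead of iterating T^pi to convergence.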
Now, typically we won't have access to the transition function p, and we won't have access to the reward function, and that leaves us with two options. The first option is to learn a model: try to estimate p and r from samples. I'm not going to talk about model learning today, but I think it's actually a very promising area of research, in the sense that if we need to be sample efficient, we should really be learning a model as opposed to being model-free. The model-free approach says: we're not even going to try to estimate p and r; instead, we're going to learn Q pi or Q star directly from samples. A way to put it pictorially: if we think about model-based reinforcement learning, we start in a state x t and we can enumerate all the possible actions (let's assume we can enumerate these actions) and then look at the resulting expectation once we take the transition. That gives us these nice contractive properties, it gives us a fixed point fairly easily, and everything is nice. In the model-free case, we're going to have to sample a portion of that tree. And that reflects much more closely the kind of trajectory-based or interaction-based view of the world that we take, for example, when we play Atari games. So let me start now with model-free reinforcement learning. What's the simplest model-free algorithm? Well, maybe we can still enumerate all the states and actions. In this case, here's what we can do. We can start in a state and an action, then sample x prime, the next state, and sample a prime, an action from our policy. We get this sample, and then the SARSA algorithm (SARSA stands for state, action, reward, state, action) says: start with a Q estimate, Q t, and move your Q estimate in the direction of that sampled estimate. In a very specific form of this, we can think of T-hat as a random operator, where we apply the policy pi and receive a sample from our environment; that operator returns a reward r, a next state x prime and a next action a prime, and then we move in the direction of that sample update. One common way this gets written in reinforcement learning is: Q t plus one, our next estimate, equals Q t, our current estimate, plus alpha t times the TD error, where the TD error is basically just the difference between the sampled estimate and the current estimate of the value. And, as usual, alpha t is a step-size sequence. I don't know if you know this, but SARSA isn't the original name of the algorithm. It was called modified connectionist Q-learning, and there's a paper, I think it's a '95 paper by Richard Sutton, where a footnote says: this algorithm is called modified connectionist Q-learning, but that's a mouthful, so we suggest we should call it SARSA instead. And I guess the name stuck, so there you go: SARSA is what we call it. Okay, so, yes: typically you wouldn't. To get the proof of convergence, you need the usual Robbins-Monro conditions, or some condition where your step sizes decay slowly enough that their sum diverges, so you can still reach the solution, and yet fast enough that the sum of their squares is finite, so you actually average out the noise. In practice, we're often going to take the step size to be fixed, and then we converge to a ball around the solution. Does that make sense? Any more questions? No. Okay.
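Here is a minimal sketch of the tabular SARSA update just described, in the synchronous setting where we are allowed to sample a transition from every state-action pair. The environment and the fixed policy pi below are made-up stand-ins, not any particular benchmark.

    import numpy as np

    n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.05
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # toy transitions
    R = rng.uniform(size=(n_states, n_actions))                       # toy rewards
    pi = rng.dirichlet(np.ones(n_actions), size=n_states)             # a fixed policy to evaluate
    Q = np.zeros((n_states, n_actions))

    for sweep in range(20_000):
        for x in range(n_states):
            for a in range(n_actions):
                x_next = rng.choice(n_states, p=P[x, a])      # sample x' ~ p(. | x, a)
                a_next = rng.choice(n_actions, p=pi[x_next])  # sample a' ~ pi(. | x')
                td = R[x, a] + gamma * Q[x_next, a_next] - Q[x, a]
                Q[x, a] += alpha * td                          # Q <- Q + alpha * TD error
    # Q now approximates Q^pi, up to a ball whose size depends on the fixed step size.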
So SARSA uses a fixed policy pi. The natural counterpart of SARSA: recall we had two operators, the policy evaluation operator, which keeps pi fixed, and the control operator, the optimality operator, which takes a max at every step. So the natural thing to do is to take that TD error, which has a sample from the policy, and replace that sample by a max. Instead of sampling a prime according to a policy, we just look at the next state and take the max. Okay. That seems like a very small change, but it's actually going to affect the whole field of reinforcement learning. In the basic case I'm presenting to you here, it's all fine: under the Robbins-Monro conditions, we get convergence, in one case to Q pi, and in the other case to Q star. But the convergence isn't trivial at all, because now you have three learning problems being interleaved. At the outermost level, you're collecting samples from your environment, so you have this stochastic approximation process where you don't actually get to observe the whole model, the whole expectation. And within that, you have the two steps of policy evaluation and optimal control, where you're trying to evaluate your policy and you're also trying to find the maximal policy. Okay. So this is the case where we can actually sample from the model from any starting state and action, and typically the proofs require us to do this synchronously, from all the state-action pairs at once. The asynchronous case instead assumes that we sample not individual transitions but whole trajectories, again from a starting state, for example generated by a policy and the transition function. Then we can think of doing exactly the same thing as before, applying the update at every step (I think a few of the time indices got left out here), but we apply this stochastic update at every step. Except now we have this extra tricky problem that our updates become correlated. Depending on where I am in my Markov chain, I might see the same state repeatedly, for example. And this means that the effective step size, when we look at the stochastic approximation problem, is going to depend on how often we visit a state, right? In other words, on the stationary distribution of that state. Okay. So this is in some sense the setting that we usually have to deal with in reinforcement learning: we start in a state, we sample a whole trajectory, and then we try to learn about the Q function using that trajectory. And as I said, the convergence is even more delicate, because now we have to contend with uneven numbers of updates to our policy. Okay. Yes? Just a clarification: I thought you said that we don't have the transition, but now you say that we are sampling from it. You're right. What I meant is that we can sample from it, but we might not be able to explicitly enumerate the successors; effectively, we can't compute the expectation easily, so we have to resort to sampling to compute the expectation. In this particular case, actually, we're making even fewer assumptions: we're not even assuming we can reset. So it might be that we have to follow the trajectory. In the other case, we can start in a state, try out a transition, and say, oh, I'm going to reset now and try that state again. In this case, without the reset, it's much harder. Other questions? No. Okay.
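A minimal sketch of the asynchronous, trajectory-based Q-learning variant just described, again on a made-up tabular environment. Note how the number of updates a state receives depends on how often the single trajectory visits it; the epsilon-greedy behaviour policy is an illustrative choice.

    import numpy as np

    n_states, n_actions, gamma, alpha, eps = 5, 2, 0.9, 0.05, 0.1
    rng = np.random.default_rng(1)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # toy transitions
    R = rng.uniform(size=(n_states, n_actions))                       # toy rewards
    Q = np.zeros((n_states, n_actions))

    x = 0                                    # no resets: we follow a single trajectory
    for t in range(300_000):
        # behaviour policy: epsilon-greedy with respect to the current estimate
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[x].argmax())
        r = R[x, a]
        x_next = rng.choice(n_states, p=P[x, a])
        # Q-learning target: max over next actions instead of a sampled a'
        td = r + gamma * Q[x_next].max() - Q[x, a]
        Q[x, a] += alpha * td
        x = x_next
    # Q approaches Q*; the greedy policy Q.argmax(axis=1) approaches an optimal policy.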
So, just to give you a sense of where the field is at, in terms of what we know and what we don't know: there are a few questions, for example, what are the rates of convergence of this online process? A lot has been done on this subject, but we're still not quite sure we have the right rates. An active area of research is how we can actually do variance reduction. Once we move to the stochastic approximation setting, we all know that we can often do better than the naive averaging with a fixed or decaying step size. For example, Mohammad Azar had a paper a few years ago where they showed that keeping two copies of the Q value, one which, in some sense, learns the value function quickly, and another which gives us a policy to follow, lets you get better rates of convergence, which is quite neat, and better sample efficiency. Then there's a whole line of research asking the question: if, instead of looking at single transitions, we back up the discounted rewards from longer trajectories, what we call multi-step methods, can we still guarantee convergence? I'll come back to this in a bit to show you what multi-step methods look like, but even in this simple case, where we can still look at all the states and store their values explicitly, there are open questions as to how this converges. And finally, last but not least, there is this idea of off-policy learning: often we can sample from one policy, but we want to learn about a different policy. Very often we want to learn about the optimal control, and we want to be able to change our sampling distribution, and that's also going to cause some more problems. So far, what I've talked about are these three circles, optimal control, policy evaluation, and stochastic approximation, and I've been trying to argue that once you interleave these three things, you get the rich reinforcement learning problem, but subtle issues also arise. Now, the deep learning part adds something else, which is what we call function approximation, where we won't be able to represent Q values exactly. So let me tell you what this looks like. Normally what we do is say our Q value is parameterized by a parameter theta, and when we do learning, we also bring in a projection step back into the parametric space of our value functions. That projection step is Pi, and effectively what it says, in its simplest form, is: apply the Bellman operator, and then project the result of applying this Bellman operator to a value function Q back into the space of functions we have. Okay, and this sort of hand-drawn diagram shows this: you have your starting Q, you go out of the subspace with TQ, and then you have to project down, and the error you get going from TQ to its projection actually compounds as you apply successive steps of that process. It's bad enough that it can actually cause divergence. So as soon as we move to the approximate case, we're no longer guaranteed to converge to a fixed point. There are some classic results that show that that's not always true, so sometimes we can converge.
I just want to point out what is probably the most famous one, the result by Tsitsiklis and Van Roy. It says that if the approximation is linear, so we have some basis functions phi of x, a, and we define the value to be a dot product between a parameter vector theta and phi of x, a, where phi of x, a is some real-valued vector, then SARSA will actually converge, and it will converge to a Q-hat which is the fixed point of this equation here. Interestingly enough, it's the fixed point of the operator which first applies the Bellman operator and then applies the projection step. And the error of SARSA is bounded by a factor of 1 over 1 minus gamma times the best error you could have if you directly knew Q pi. Put another way: if you knew the real value function, you would project it directly onto your support, and that would minimize the error. By not being able to project this value function directly, because we don't know it, so we have to interleave approximation with learning, we add additional error. The convergence proof for SARSA is actually in a norm that depends on the stationary distribution of the process induced by pi. Once you move to Q-learning, things might diverge, because as you change your policy, you actually change the stationary distribution underlying the Markov chain of the process. And it's actually still an open question how we should fix that control issue. There was a whole line of research from 2009 to 2013 on trying to get convergent linear-time optimal control, and by this I mean an algorithm that runs in time linear in the number of features, in the linear approximation case, and still converges. There have been a lot of promising results in that direction, but it's still an open area of research. Another question we've looked at is how to actually explore under function approximation. By that I mean: now that we have this idea of an approximate value function, we might not be able to change the policy as we see fit and sample actions as we see fit; we might be stuck with a policy, for example, which is greedy with respect to the value function. I won't talk much more about this today, except to say that choosing the right policy to sample from is an important issue in reinforcement learning. And finally, the multi-step extensions I've already mentioned don't usually converge in the approximate case, and there is some very recent work showing that algorithms that are very robust in the tabular case, where we can represent the value function exactly, stop converging in the approximate case. All right, so what I've done so far is go from the very ideal case of having the Bellman operator and applying it to find the fixed point (oops, I've lost a piece of it) to the practical case, where we have these very large state spaces to deal with: we don't have access to the model, but we need to do something. Okay, and this is going to take us to the deep learning part.
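Before the deep learning part, here is a small sketch of the linear setting just described: SARSA with Q_theta(x, a) = theta_a . phi(x), evaluating a fixed policy along a trajectory. The feature map, environment and policy below are made up for illustration; they are not from the talk.

    import numpy as np

    n_states, n_actions, n_feats, gamma, alpha = 20, 2, 4, 0.9, 0.01
    rng = np.random.default_rng(2)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # toy transitions
    R = rng.uniform(size=(n_states, n_actions))                        # toy rewards
    Phi = rng.normal(size=(n_states, n_feats))                         # phi(x): made-up fixed features
    pi = rng.dirichlet(np.ones(n_actions), size=n_states)              # fixed policy to evaluate
    theta = np.zeros((n_actions, n_feats))                             # Q_theta(x, a) = theta[a] . Phi[x]

    def q(x, a):
        return theta[a] @ Phi[x]

    x = 0
    a = rng.choice(n_actions, p=pi[x])
    for t in range(200_000):
        r = R[x, a]
        x_next = rng.choice(n_states, p=P[x, a])
        a_next = rng.choice(n_actions, p=pi[x_next])
        td = r + gamma * q(x_next, a_next) - q(x, a)
        theta[a] += alpha * td * Phi[x]        # semi-gradient step: TD error times grad of Q
        x, a = x_next, a_next
    # theta parameterizes an approximation of Q^pi in the span of the features,
    # close, in the sense of the Tsitsiklis-Van Roy bound, to the projected fixed point.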
So if we go back to the games I was telling you about earlier, it turns out that for many of these games, linear approximation with some sort of reinforcement learning method has had good success. Samuel's checker player was actually using linear approximation, and with that it could play checkers. There's actually a lot of work on Go, and also on Atari, for example, that uses linear approximation; there are many examples of successful linear approximation schemes for reinforcement learning. But what happened over the last few years is that people realized, or re-realized if you will, because we knew this already, the power of neural networks for approximating these value functions. Deep learning, in other words. As I said earlier, the explosion happened around 2011, 2012, when people realized that a fairly simple neural network could achieve better performance than very complex vision architectures. So this got started in vision, and it makes sense that, since the successes were in vision, they should first be applied to games with a vision component, like Go or Atari. If you're not familiar with deep learning, the basic idea is to say: we have this feature map phi, but now we're going to parameterize it by a parameter theta, and we're going to learn that feature map. So instead of taking it to be fixed and only learning the last part, which would then be a parameter theta, we're going to learn the whole thing: both the last part, which is linear, and the rest, which gives us the features. And the way to do this is to define a loss function L of theta, and then do gradient descent, very simply gradient descent or some variation of it, on that L of theta. This is actually the network that was used in the Atari-playing agent in 2015. You can see it's a fairly simple network. The very last thing before the fully connected layer of rectified linear units, that's the phi. Before that, we only really have two stacks (I should say these are stacks of nonlinear transformations called convolutional filters), and they take as input raw images, in this case raw Atari frames, and map these raw Atari frames to the phi's. So it's actually pretty surprising that just by doing gradient descent, you can get reasonable features, phi's, out of this. Okay, yes? Sorry, can you say that again? So you're asking where the A is in the actual network? Yes, that's a very good question. In this case, I guess I did lie, you're right. In this particular network, the A only appears at the last layer: each action has a separate set of weights that we take the dot product with. So the phi is shared between all the actions; it's effectively phi of x, and then we take the dot product between phi and theta a. So A is treated a bit like part of the specification rather than an input; in this case the action is discrete, so we just enumerate it: you can think of it as making copies of this phi vector. We have 18 actions, for example, in the case of Atari, so we would make 18 copies of that action vector. If A is a continuous control, then that approach isn't going to work, and maybe that's the case you're thinking about. In the continuous case, the two approaches are to feed the action in as an input, as you say, or, typically, to output a parameterized policy: for example, the very simplest case would be a Gaussian with a mean and a standard deviation, and then we would output the parameters of that Gaussian. What was that? No, it wouldn't be filtering, because even in the continuous case you're still trying to learn the Q value, the value function associated with that output.
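The point about where the action enters in the discrete case can be made concrete with a tiny sketch: a shared feature vector phi(x) and one linear head per action, so Q(x, .) comes out as a vector with one entry per action. The two-layer network below is a made-up stand-in for the convolutional stack, not the actual DQN architecture.

    import numpy as np

    rng = np.random.default_rng(3)
    obs_dim, n_hidden, n_actions = 100, 32, 18      # 18 discrete actions, as in Atari

    W1 = rng.normal(scale=0.1, size=(n_hidden, obs_dim))    # shared feature extractor
    W2 = rng.normal(scale=0.1, size=(n_actions, n_hidden))  # one linear head per action

    def q_values(x):
        phi = np.maximum(0.0, W1 @ x)   # phi(x): shared rectified features
        return W2 @ phi                 # vector of Q(x, a) for all actions at once

    x = rng.normal(size=obs_dim)        # a fake observation
    q = q_values(x)
    greedy_action = int(q.argmax())     # acting is one forward pass plus an argmax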
You're still trying to learn the value function associated with that output. Sorry, what about the function L? Let me show you what the function L looks like. In the deep Q-network architecture I was just showing you, which was used to play Atari games, the function L is the squared loss, the mean squared error between the target and the current output. If you look at it, this should look very familiar: it's the TD error I was mentioning earlier. So we're minimizing the squared TD error. The main reason for this is that we can take the gradient, and it's a very nice gradient: when we differentiate this function, we get the TD error times the gradient of the Q value itself with respect to the parameters. To this day, this is almost always the function used in these kinds of deep networks. Does that answer your question? Okay, good. Now, it turns out that if you just do the procedure I talked about, it doesn't actually work. Remember how I've been telling you that as you move away from the Bellman operator and into a more stochastic regime, convergence becomes less and less stable? Well, it turns out that once you move to Q-learning with a deep network, it's bad enough that the simple approach just diverges. In the early work we did with this setup, the value function would just diverge to infinity, and then nothing interesting would happen. Some of the issues arise because of the sequential nature, the fact that we're dealing with trajectories. The data is sequential, which means the samples we're learning from are not i.i.d., and that's actually enough to prevent us from learning the right representation, if you will. Another issue, and this one is related to Q-learning, is that as our Q value changes, the policy itself changes very rapidly, and for the same reason that this fails in the linear approximate case, it also fails in the deep network case: we get these oscillations and extreme data distributions. And finally, this is more specific to Atari, there's also the issue that we want to design a network that's robust to the scale of the rewards and the scale of the Q values. There's a big issue in deep learning with the fact that naive gradient descent doesn't work well; you need to scale your gradients down, for example using a second-order method or an approximation to one, and the scale of the rewards plays a big role in making things more unstable. Okay, so I want to tell you just a few of the tricks that have been used to make this deep Q-network work in practice. These three tricks are experience replay, target networks, and reward clipping. Experience replay is very simple: the idea is to try to break the correlations, so move away from the trajectory setting and towards an i.i.d. setting, where we can update from i.i.d. samples. The target network is there to keep the target values fixed for a while, which avoids oscillations, and then the reward clipping is very simple.
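Going back to the loss L(theta) from a moment ago, here is a minimal numerical sketch of the squared TD error and its (semi-)gradient, written for a linear Q so that the gradient of Q with respect to theta is explicit. The single transition used is made up, and treating the target as a constant mirrors the use of fixed target parameters.

    import numpy as np

    rng = np.random.default_rng(4)
    n_feats, n_actions, gamma = 8, 3, 0.99
    theta = rng.normal(scale=0.1, size=(n_actions, n_feats))   # linear Q, for readability

    def q(phi, a):
        return theta[a] @ phi

    # One made-up transition (phi(x), a, r, phi(x')) as it would come from the simulator.
    phi_x, a, r, phi_x_next = rng.normal(size=n_feats), 1, 0.5, rng.normal(size=n_feats)

    target = r + gamma * max(q(phi_x_next, b) for b in range(n_actions))
    td = target - q(phi_x, a)
    loss = 0.5 * td ** 2
    # Gradient of the loss w.r.t. theta[a], treating the target as a constant:
    #   d loss / d theta[a] = -td * grad_theta Q(x, a) = -td * phi(x)
    theta[a] += 0.01 * td * phi_x      # one (semi-)gradient descent step on L(theta)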
So let me start with experience replay. What we do is, as we're experiencing these trajectories, instead of applying this asynchronous update at every step, we build a data set D. In this case we assume we follow some reasonable policy, for example an epsilon-greedy policy, and we store all the transitions in what we call a replay memory. Okay, and then the loss function becomes an expectation under that data set: we sample transitions from the data set and try to minimize the mean loss with respect to it. What that does is basically decouple the data collection process from estimating the value of that policy. Now, that doesn't actually fix the optimal control issue, which is that the policy changes. To avoid that issue, the trick is to fix the parameters of the target in the loss function. What that means is that we compute the target part of the TD error using an old set of parameters, theta minus, and then we define our loss with respect to these parameters. So there's a big difference now: we have the parameter theta, which we're actually optimizing over, and then we have this theta minus, which is effectively a fixed target as long as we don't change those weights. And what we do is periodically update this target network. Now, what I want to say is that these two tricks, if you think about them, are very similar to the old approaches from reinforcement learning. The target network is effectively doing policy iteration of a kind: instead of doing value iteration, where we keep taking the max at every step as in Q-learning, we learn the value of a policy by keeping that policy fixed for a while, and then move on to the next policy. In the same way, experience replay is like building an empirical model: we try to move away from the completely asynchronous case and back into a regime where things are a bit more stable. So for me, getting things to work is always about trying to bring things back to the Bellman operator that we know works and that we love so much. Okay, the reward clipping is not so important, but basically we just take the reward function (this is, again, Atari-specific) and clip it to the range minus one to one. One interesting thing this does: in early experiments, what we saw was that the values would be overestimated. What do I mean by this? Well, if you take any sort of value approximator and you fix the greedy policy with respect to that value function, then you can ask, using Monte Carlo sampling: are you being truthful? The value you think you're going to achieve, is that actually achieved by your policy? And what you see is that, consistently, the values are overestimated; very rarely are they underestimated. This has to do with the fact that we're taking a max at every step, and it has a number of other causes as well. One way to mitigate it a bit is to have smaller gradients. I won't talk much more about this, but we can talk about it afterwards if you want. One bad side effect is that once we move into that regime, we're not even really solving the right control problem anymore, because we've truncated our rewards, so we can't differentiate a large reward from a small reward. But these are the sacrifices we have to make sometimes.
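Putting the first two tricks together, here is a compact sketch of a training loop with a replay memory and a periodically copied target network, using a small linear Q on made-up random transitions. It is meant to show the structure of the loop, not to reproduce the actual DQN code, and all the hyperparameters are illustrative.

    import numpy as np
    from collections import deque
    import random

    rng = np.random.default_rng(5)
    n_feats, n_actions, gamma, lr = 8, 3, 0.99, 0.01
    theta = rng.normal(scale=0.1, size=(n_actions, n_feats))    # online parameters
    theta_target = theta.copy()                                  # frozen target parameters (theta minus)
    replay = deque(maxlen=10_000)                                # replay memory D

    def q(params, phi):
        return params @ phi           # vector of Q-values, one entry per action

    phi = rng.normal(size=n_feats)    # fake initial observation features
    for step in range(20_000):
        # epsilon-greedy behaviour policy
        a = rng.integers(n_actions) if rng.random() < 0.1 else int(q(theta, phi).argmax())
        r, phi_next = rng.normal(), rng.normal(size=n_feats)     # fake environment step
        replay.append((phi, a, r, phi_next))
        phi = phi_next

        if len(replay) >= 32:
            batch = random.sample(list(replay), 32)              # decorrelated minibatch
            for (p, a_b, r_b, p_next) in batch:
                target = r_b + gamma * q(theta_target, p_next).max()   # target uses frozen weights
                td = target - q(theta, p)[a_b]
                theta[a_b] += lr * td * p                        # semi-gradient step

        if step % 1_000 == 0:
            theta_target = theta.copy()                          # periodic target-network update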
So, just to give you a sense: do these heuristics matter? The answer is yes, they do. These are scores taken from five different Atari games: Breakout, Enduro, River Raid, Seaquest and Space Invaders; the names don't matter too much. What you see is that, very clearly, as you add these heuristics, the scores, the performance effectively, go up quite dramatically. So without these tricks, the deep learning approach to reinforcement learning doesn't really work out of the box. Since then, there's actually been a lot of work trying to understand whether these issues can be overcome, whether we can deal with them. I won't be talking about this today, but there's recent research you can look up. Unfortunately, this is a PDF, so we only have the still image to look at. What I was going to show you, very simply, is a game called Space Invaders, and it's pretty amazing to see these agents play the game after we train them, usually for 200 million frames, which admittedly amounts to playing the game nonstop for 36 days. That point aside (we're more sample efficient now), they train nonstop for this amount of time, and the policies that emerge are actually surprisingly human-like. It's quite neat to watch the agent plan its way through. So all this to say: there is a lot of power in these deep networks, in terms of the kind of value functions they can represent and the kind of policies we can then extract from them. Okay, so I'm right on time, so I'm just going to tell you very briefly about some of the more recent research we've done. This is going to be quite compressed, but it's just so you don't leave with the idea that we apply Q-learning with a deep network and we're done. Some of the active research comes back to this idea of off-policy learning: very often, in almost all settings, we'll have a lot of old data generated from a policy mu, or from many policies (including, if we use experience replay, this effect of having old data), and what we would like to do is estimate the value of a different policy, for example the control policy. And we know, as I've mentioned already, that the naive approach diverges with function approximation. So there's the question of how we can actually correct for this discrepancy between pi and mu. One thing we can do is use importance sampling; in practice, the naive version of importance sampling has far too much variance and doesn't really help us. Another issue is that we might want some safety guarantees: we have this old data, and we don't know if it's good, or how close or far it is from the policy pi. There's actually an interesting line of work by Philip Thomas and Emma Brunskill and others asking the question: can we guarantee a level of performance based on old data? Some research I was involved in recently is this idea of multi-step methods. You have the equation here: in a multi-step method, instead of looking at just a one-step update, where we look at the next state transition and take the Q value at that next state, we allow the process to roll forward K steps, then truncate it and look at the Q value K steps in the future; up to that point, we sum up the discounted rewards. The case where K goes to infinity is basically the Monte Carlo rollout for that value. And it's well known that we get much better accuracy if, instead of using the one-step return, where we truncate immediately after one step, we use multi-step returns.
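A small sketch of the multi-step targets just described: given one sampled trajectory and a current Q estimate, the K-step return sums K discounted rewards and then bootstraps on Q after K steps; K = 1 is the one-step TD target, and large K approaches the Monte Carlo return. The rewards and Q values below are made up.

    import numpy as np

    gamma = 0.9
    rewards = np.array([1.0, 0.0, 0.5, 2.0, 0.0])      # r_t, ..., r_{t+4} along one trajectory
    q_bootstrap = np.array([0.8, 0.7, 1.5, 0.3, 0.9])  # Q at the state-action reached after 1, ..., 5 steps

    def k_step_return(k):
        # sum of the first k discounted rewards, then bootstrap on Q after k steps
        g = sum(gamma ** i * rewards[i] for i in range(k))
        return g + gamma ** k * q_bootstrap[k - 1]

    for k in range(1, 6):
        print(k, k_step_return(k))   # k = 1 is the usual TD target; larger k leans on Monte Carlo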
And the typical algorithm for this is called TD(lambda), or just the lambda operator, which mixes in, using a geometric mixture, these different lengths of rollouts. You can either write it in this form here, where we explicitly have this mixture (it actually doesn't normalize to one; that's okay, we're happy with that fact), or, in the alternative form, we take our Q value and update it by a mixture of TD errors at different time steps. Now, we recently had an algorithm called Retrace(lambda), which is both safe in the off-policy sense and multi-step. And, somewhat surprisingly, the convergence proof is not trivial: even without function approximation, once we move to this slightly more complicated case, there are corner cases where the dynamics of the system will not converge. So this is still an ongoing area of research. Another interesting question is whether we can devise alternatives to the Bellman operator that might have better guarantees, or better properties in practice. One of the things we looked at was this idea of a gap-increasing operator. The action gap is the difference between the max value you could get at a state, the max over all the actions, and the value of another action. So if two actions have exactly the same value, the gap is zero; if an action is much worse than another, the gap is large. It turns out that the Bellman operator is not the only operator that gives us good behavior, or good Q values. There's actually a family of operators that are convergent, and a subset of these are what we call the gap-increasing operators. What they do is take the Bellman operator here and subtract the current action gap. Effectively, it's a bit like doubling down: I'm going to take my usual Bellman operator and increase the gap; all the actions I think are suboptimal, I'm going to make them look even more suboptimal in my estimate. What's very surprising is that these new operators aren't contractions anymore, so we can't guarantee convergence based on a contraction property, and the suboptimal Q values might not even converge. But we show that under certain conditions, including beta being smaller than one, the limit of that process does converge to Q star for the optimal action. So you have an operator that preserves the optimal value while increasing the action gap for the other values. All right, so I'm going to conclude here, just to bring up that picture again and to say that my perspective on reinforcement learning is that it's this interleaved learning process. It's super interesting, I think, to start poking at the different pieces and asking: how can we make things more stable? How can we learn more efficiently? And of course, how can we apply this to something like Atari games? So hopefully this was enjoyable, and if you have any questions, I'll take them now. Thank you very much. Okay, so thanks very much, Marc, and we'll open up for questions. Okay, just to check whether I understood properly: is the multi-step method the equivalent of what is known in the regression Monte Carlo literature as the Longstaff-Schwartz algorithm? I'm unfortunately not familiar with that, so I couldn't tell you. Then we can discuss. Yes, yes, we should.
I'd like to know what to do if you now have two players playing against each other. The short answer is: you shouldn't use Q-learning. We know that these algorithms don't have regret-minimization properties. So if you have two players... well, actually, let me take that back: I believe that in the full-information case you can still use Q-learning. But in general, if your optimal strategy is a mixed strategy, you couldn't use Q-learning, right? In the case of Go, I don't actually know whether that result is true or not, whether you can use Q-learning directly; that's a good question. Normally I would say, for two players, as soon as you have partial information, you should use something like regret matching or Hedge, which would actually guarantee low regret. But I don't know if you had a more specific question in mind. Two players, and of course an infinite number of players. So many! You are playing, okay, I'm thinking of video games where you are playing over a network, so you don't know how many players are around you, and you try to learn automatically. I think that's a very interesting question, and I think the floor is still open as to how to solve it. One important question is whether the dynamics are stable between all of these learning processes. In the case of something like Go, even there, once you bring in approximation, there can be an issue where both players learn to play only on a subset of the whole state space, at which point there's no guarantee as to how they perform elsewhere, right? So once you go to an infinite number of players, I don't even know, but this is very interesting. So, are there other questions? I had one. What if, say, you change the rules of your game, say in your Space Invaders game you drop a block so that your spaceship can't move anymore. How easily would the agent adapt to the new setting, having already learned the old setting? Absolutely not easily. I think that's a big flaw of these methods: you learn a value function, that value function tells you how good the world is going to be, and then usually you have a policy which maximizes that value. Now, that function is learned from slow, repeated iterations, typically involving, in this case, experience replay. So there's no easy way to adapt, and I think that's actually an open question: how to design a reinforcement learning algorithm that would be adaptive in this kind of way. Some people might say, well, the simple solution is to have a value function which includes variations of the environment, but I think the case you're describing is something the agent has never, never seen. And in that case there's no good solution yet, I think. I have one more question. Can you explain to us how you would go from games to, say, wind farm control or something more economic; how would you use these kinds of approaches? What do you mean exactly, where you don't have pixels anymore? Yes. How would you control a wind farm, or a power plant, with this kind of approach? I think, to some extent, in the case of the pixels, the additional piece is that we know how to deal with vision: these convolutional networks have proved really useful, basically leveraging signal processing, leveraging a lot of literature. These convolutional networks are really good at processing this information.
Once we move to something different, where the input space is no longer visual or audio, then I think the question is: do we have the right features, or will the deep network learn those features? So far it looks like it can. For example, there was some work by my colleagues at DeepMind where they were optimizing not a wind farm, but the cooling system inside a data center, and they were very successful with that approach using reinforcement learning, even though there are no pixels involved. Thank you. So another question: in the game you are showing us, we can see that the score is displayed, so I guess this is a good proxy for the value of a frame. Is it like this in all these games, or do these techniques still work if you have no score? That's an excellent question. Actually, the game displayed here, Pitfall, is an interesting example, because in this game you mostly lose points; you very rarely gain points. At the moment, there are very few reinforcement learning techniques that deal with these kinds of problems. In a sense, we're moving from a decision-process framework, where we assume there's a clear objective to be maximized, given to us and easily read out, to a setting where we're asking: what will the agent do in the absence of this signal? One solution might be to look at something like what we call intrinsic motivation, where we try to come up with our own reward signal that would cause the agent to behave in an interesting way. I've done some work in that direction; we had a few papers recently on using probabilistic models of the images as a proxy for driving intrinsic behavior. But again, it's sort of an open question how to best do this. Any more questions? Okay, I've seen that these models can be generalized to a lot of settings, for example financial settings, game settings, and so on. Do you have general rules for when, in your experience, this approach works or doesn't? I think the thing to remember with deep learning is that success often comes after a lot of engineering. So far, these networks have proven surprisingly powerful in a number of settings where people would not have expected it. I think people are asking the question: why, is there a property of these kinds of networks that makes them good? If the learning problem were convex, we would all understand it: we're doing stochastic gradient descent, so we're just finding the minimum of that convex function. The big question mark is that we know there's a lot of representation power in these networks (there are actually results showing you can overfit the training sets of machine learning problems to get zero classification error), and yet the overfitting doesn't seem to hurt us when it comes to test time, to generalization. So that's the surprising part: the optimization procedure doesn't seem to hurt the performance too much, and the fact that the problem is non-convex doesn't seem to be an issue either. Now, there might be problems; there are papers trying to say, here are problems we know we can't solve with deep networks, but they tend to be more synthetic. And I think it's a very interesting open question whether there are problems where this is not going to scale up. I think sample efficiency is an important part, because, as we were saying, 36 days is too many days.
And if, on the 37th day, a block falls on your screen, you have to restart the whole process. That's clearly something we need to figure out how to solve. Thank you. Okay. So I think at this point, let's move further discussions to after the talk, to a more informal setting, and let's thank Marc again.
In this talk I will present some recent developments in model-free reinforcement learning applied to large state spaces, with an emphasis on deep learning and its role in estimating action-value functions. The talk will cover a variety of model-free algorithms, including variations on Q-Learning, and some of the main techniques that make the approach practical. I will illustrate the usefulness of these methods with examples drawn from the Arcade Learning Environment, the popular set of Atari 2600 benchmark domains.
10.5446/57390 (DOI)
Thank you, Etienne, for the introduction. I would also like to thank the organizers for giving me the opportunity to give a talk; I have already given several talks here, I am very happy to have had discussions with the organizers, and I thank them for inviting me to give this lecture. I have already given several talks in this room, but this is the first time that so many people are in the audience, so I think CEMRACS is a great success. My talk will be divided into two parts. I will first introduce the Metropolis-Hastings algorithm on the blackboard, and then I will switch to the optimal scaling part of the talk, which is joint work with Tony Lelièvre and Błażej Miasojedow; you will see that mean-field interactions appear in that part. So, what is the Metropolis-Hastings algorithm? It is an algorithm used throughout the applied sciences. Its aim is to sample according to a probability measure pi which can be written in the following way: here lambda is a reference measure on a state space E, and eta is a measurable function from E to R+ whose integral with respect to lambda is finite, so that pi is eta d lambda divided by this normalizing constant. We want to sample according to pi, or to compute integrals of observables of interest with respect to pi. The example I have in mind from statistical physics is E equal to R^d with the Lebesgue measure as reference measure, and eta(x) = e^{-U(x)} for a potential energy U, so that pi is the Boltzmann-Gibbs measure. Another example comes from Bayesian statistics: you have model parameters theta living in E, you have a prior density on theta, and then you have the conditional density of observing y given the parameters theta, the likelihood. According to Bayes' rule, once you have observed y, the posterior density of theta is proportional to the prior times the likelihood, so it is exactly a density of this form, known explicitly only up to its normalizing constant. To sample from such a pi, we are going to construct a Markov chain dynamics. The first ingredient is a proposal density q: for fixed x, the function y maps to q(x, y) is a probability density with respect to lambda, and the other thing we need is that it is easy to simulate according to this density, this proposal. We then introduce the Metropolis-Hastings acceptance ratio, which is the minimum between 1 and eta(y) q(y, x) / (eta(x) q(x, y)) when eta(x) q(x, y) is positive, that is, when the denominator does not vanish, and we take it equal to 1 by convention otherwise.
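A minimal sketch of the acceptance probability just defined, for a user-supplied unnormalized density eta and proposal density q. The Gaussian target and proposal used at the bottom are only an illustration, not part of the lecture.

    import numpy as np

    def mh_acceptance(x, y, eta, q):
        # alpha(x, y) = min(1, eta(y) q(y, x) / (eta(x) q(x, y))),
        # with the convention alpha = 1 when the denominator vanishes.
        denom = eta(x) * q(x, y)
        if denom == 0.0:
            return 1.0
        return min(1.0, eta(y) * q(y, x) / denom)

    # Illustration: unnormalized standard Gaussian target, Gaussian proposal centred at x.
    eta = lambda x: np.exp(-0.5 * x ** 2)
    sigma = 1.0
    # The common normalizing constant of q(x, .) and q(y, .) cancels in the ratio,
    # so an unnormalized expression is enough here.
    q = lambda x, y: np.exp(-0.5 * (y - x) ** 2 / sigma ** 2)
    print(mh_acceptance(0.0, 2.0, eta, q))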
Okay? So now, how do we construct a Markov chain that will target pi? The starting point X_0 is a random variable and, given the trajectory up to step k, we generate a proposal Y_{k+1} distributed according to the density q(X_k, .), and we draw a uniform random variable U_{k+1} on [0, 1], independent of everything else. We set X_{k+1} equal to the proposal Y_{k+1} if U_{k+1} is smaller than alpha(X_k, Y_{k+1}); the function alpha takes values between 0 and 1, so we accept the proposed move with this probability, and otherwise we stay at the current position. In this way we obtain a Markov chain, and now I am going to compute its Markov kernel. When I want to compute the expectation of f(X_{k+1}), where f is a test function, given the trajectory up to time k, it is convenient to use the tower property of conditional expectation and to condition first with respect to the trajectory up to time k together with the proposal Y_{k+1}. In this inner expectation we just average over the uniform, independent variable U_{k+1}, and we obtain f(Y_{k+1}) alpha(X_k, Y_{k+1}) + f(X_k) (1 - alpha(X_k, Y_{k+1})). Then, averaging over the law of Y_{k+1}, we finally obtain that this is the integral of f(y) against P(X_k, dy), where P is the following kernel: either the proposal is different from the current state and you accept the move, which happens with probability alpha(x, y); or you stay at the current position, which happens either when you propose some z different from x and reject the move, or when you propose x itself, which can matter for a finite state space E, and in that case you of course stay where you are. That second part is a Dirac mass at x with the corresponding weight. The important point is the following: eta(x) q(x, y) alpha(x, y) is a symmetric function of x and y. Indeed, when you multiply by this minimum you obtain the minimum of eta(x) q(x, y) and eta(y) q(y, x), the minimum of two expressions which are exchanged when you swap x and y, so it is symmetric in x and y. This of course leads to the fact that the measure, restricted to y different from x, eta(x) lambda(dx) P(x, dy), which is eta(x) q(x, y) alpha(x, y) lambda(dx) lambda(dy), is equal to the same expression with x and y exchanged, eta(y) lambda(dy) P(y, dx). Of course this remains true when you include the part where y equals x, and it remains true when you normalize everything, so we end up with this nice relation: pi is reversible with respect to the kernel P. A consequence of this reversibility is that pi is invariant for P: integrating this relation in the x variable, we obtain on one side pi(dy) times P(y, E), and P(y, E) is simply 1 since P is a Markov kernel, and this gives the invariance of pi for the Markov kernel P. Do you have any questions? If everything is clear, I will now turn to the slides.
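Here is a minimal sketch of the chain construction just written on the blackboard: draw a proposal from q(X_k, .), draw an independent uniform, accept or stay. The one-dimensional double-well target and the autoregressive Gaussian proposal below are made-up examples chosen so that the proposal is deliberately non-symmetric.

    import numpy as np

    rng = np.random.default_rng(0)
    eta = lambda x: np.exp(-(x ** 2 - 1.0) ** 2)           # unnormalized target (toy double well)

    # A non-symmetric Gaussian proposal: Y ~ N(rho * x, s^2).
    rho, s = 0.5, 1.0
    def q_density(x, y):
        # the common normalizing constant cancels in the acceptance ratio
        return np.exp(-0.5 * (y - rho * x) ** 2 / s ** 2)

    def alpha(x, y):
        denom = eta(x) * q_density(x, y)
        return 1.0 if denom == 0.0 else min(1.0, eta(y) * q_density(y, x) / denom)

    n_steps = 100_000
    X = np.empty(n_steps)
    X[0] = 0.0
    for k in range(n_steps - 1):
        y = rho * X[k] + s * rng.normal()                  # proposal Y_{k+1} ~ q(X_k, .)
        u = rng.random()                                   # U_{k+1} ~ Uniform(0, 1), independent
        X[k + 1] = y if u <= alpha(X[k], y) else X[k]      # accept the move or stay put
    print(np.mean(X ** 2))                                 # ergodic average, estimates the pi-integral of x^2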
So let me make a few remarks. (I am not at the right slide... now I am.) We made a special choice of the function alpha; in fact, the reversibility of pi with respect to the kernel is preserved when we take other functions of the ratio. Here we have the special case of the minimum of 1 and the ratio, and the two conditions we need are simply that a(0) = 0 and that, for positive u, a(u) = u times a(1/u). Another example is simply this function here, Barker's rule, but the choice I wrote, the Metropolis choice, is better than the Barker choice in terms of the asymptotic variances of the ergodic averages, for the same proposal density q; this is a result from the paper by Peskun. The reason is that this choice maximizes the probability of accepting moves that are real moves, with y different from x. Something important for the second part is to introduce the random walk Metropolis-Hastings algorithm: we choose the proposal density q(x, .) to be a Gaussian density centered at x, so the increments are Gaussian, here with covariance matrix sigma squared times the identity on R^n. What happens then is that there are simplifications in the Metropolis ratio: in the numerator you have pi(y) q(y, x) and in the denominator pi(x) q(x, y), and since q is symmetric this simplifies and you are left with just the ratio of the unnormalized target densities. Okay, this is the random walk Metropolis-Hastings algorithm. So, once one has constructed a Markov kernel with pi as a reversible measure, one can turn to the ergodic theory of Markov kernels and find conditions on the kernel P under which we have the following ergodic results. First, convergence in law of X_k as the time index k goes to infinity. Second, an ergodic theorem for computing integrals with respect to pi: when the function f is pi-integrable, it is easy to approximate its integral by the arithmetic averages of f along the path of the chain, and in good situations you also have a central limit theorem for this convergence: the ergodic average, renormalized by the square root of the number of steps of the chain, converges in law to a centered Gaussian distribution whose asymptotic variance is given by this expression, which involves the pi-variance as well. In fact, you can check by Jensen's inequality that this expectation of capital F squared is larger than the variance of f under pi. What is capital F? It is the solution of the Poisson equation: you solve F(x) - P F(x) = f(x) minus the expectation of f under pi. Why does this solution of the Poisson equation appear in the asymptotic variance? It is not like the i.i.d. case, where only the variance of f under pi plays a role. It is simply that when we look at this sum of errors and use the Poisson equation, we get boundary terms, and the important terms are of the form F(X_j) - P F(X_{j-1}), which can be interpreted through this conditional expectation: it is a martingale increment. In fact, one way to prove the central limit theorem is to use central limit theorems for martingales. Of course the boundary terms remain, but in the end they are divided by the square root of k, so they vanish, and we are left with this martingale, which plays the central role.
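To make the random walk case and the ergodic average concrete, here is a small sketch: the symmetric proposal reduces the acceptance ratio to the ratio of the unnormalized target densities, and we monitor the acceptance rate and the ergodic average. The target, observable and step size are made up, and the batch-means estimate of the asymptotic variance at the end is a standard diagnostic, not something from the talk itself.

    import numpy as np

    rng = np.random.default_rng(1)
    V = lambda x: 0.5 * x ** 2           # target pi proportional to exp(-V): standard Gaussian
    sigma = 2.4                          # random-walk proposal standard deviation
    f = lambda x: x ** 2                 # observable whose pi-integral we estimate (true value 1)

    n_steps = 200_000
    X = np.empty(n_steps)
    X[0], accepted = 0.0, 0
    for k in range(n_steps - 1):
        y = X[k] + sigma * rng.normal()
        # symmetric proposal, so the acceptance ratio is just exp(V(x) - V(y))
        if rng.random() <= np.exp(V(X[k]) - V(y)):
            X[k + 1], accepted = y, accepted + 1
        else:
            X[k + 1] = X[k]

    fx = f(X)
    print("acceptance rate   :", accepted / n_steps)
    print("ergodic average   :", fx.mean())
    # Crude batch-means estimate of the asymptotic variance appearing in the CLT.
    batches = fx.reshape(200, -1).mean(axis=1)
    print("asymptotic variance estimate:", batches.var(ddof=1) * (n_steps // 200))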
And the big question, when one uses the random walk Metropolis-Hastings algorithm with Gaussian proposals, and in particular when the dimension n gets large, is how to choose the variance of the proposal, because you can get a bad exploration of the space, and bad ergodic properties in the sense we have just seen, in two opposite situations. The first situation is when sigma is too large: big moves are proposed, but they are almost always rejected, so you stay for a very long time at the same place, and that is not good. Second, if sigma is too small, then a large proportion of the proposed moves is accepted, but those moves are very small, so you also stay around the same place, which is not good either. So the tuning of the parameter sigma, or more generally the tuning of the proposal density, is very important to obtain good ergodic properties. There was a pioneering work by Roberts, Gelman and Gilks which assumes that the target density is a product of identical one-dimensional densities, corresponding to U(x) which is simply a sum over the coordinates of x, so the target is of i.i.d. type; and they assume that the initial condition is distributed according to the target, which is in fact a very strong assumption. Then they take the proposal standard deviation proportional to one over the square root of n, that is, the proposal variance proportional to one over n, and they rescale time, accelerating by the same factor n, and they obtain a diffusive scaling, as in the theorem on the convergence of the renormalized random walk to Brownian motion, so it is no surprise to obtain a diffusive limit. And what they prove is that the first component (the choice of the first component is just a matter of indexing) converges in distribution to a diffusion process, which is this process. This process of course has the stationary density e^{-V}; that is the computation Pierre did on the blackboard yesterday. It is the overdamped Langevin dynamics associated with this target distribution, but accelerated by a function h of the constant l that we have here. And of course, to reach equilibrium as fast as possible, we should choose h(l) as large as possible. So this is the precise definition of the function h, in which the cumulative distribution function of the standard normal law appears. Now, what are the practical consequences of this scaling result?
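For reference, here is a compact statement of the Roberts, Gelman and Gilks scaling just described, written in my own notation and from memory of the standard result rather than from the speaker's slides:

```latex
% Target \pi^n(dx) \propto \prod_{i=1}^n e^{-V(x_i)}, proposal Y = X + \frac{\ell}{\sqrt{n}}\, G with G \sim \mathcal N(0, I_n).
% Starting at equilibrium and speeding up time by n, the first coordinate converges in law to
dU_t \;=\; \sqrt{h(\ell)}\; dB_t \;-\; \tfrac{1}{2}\, h(\ell)\, V'(U_t)\, dt,
\qquad
h(\ell) \;=\; 2\,\ell^2\, \Phi\!\Big(-\tfrac{\ell \sqrt{I}}{2}\Big),
\qquad
I \;=\; \mathbb E_{e^{-V}}\big[(V')^2\big],
```

where Phi is the standard normal CDF; e^{-V} is invariant for this overdamped Langevin dynamics, which is accelerated by the speed factor h(l).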
What I was saying is that we should maximize h(l). This is the expression of the argument of the maximum, and unfortunately it still depends on the target through this moment. But what is very nice is that when one computes the limiting average acceptance rate, which is the expectation of alpha, one obtains another expression involving the CDF of the normal law; and when one computes a(l star), this denominator disappears from the formula and one obtains something which does not depend on V and is equal to 0.234. This is the first magic number in this line of research, and it justifies using a constant acceptance rate strategy with an acceptance rate of about one quarter. But it is questionable to use scaling results obtained at equilibrium to deduce what should be done during the transient phase of the algorithm; what we need is to analyse the transient phase. So let me first recall the notion of chaoticity, which was defined earlier by Mireille, in order to be able to state the result. We have a sequence of laws of vectors with n coordinates; the law is invariant under permutation of the coordinates, and it is said to be nu-chaotic if, for each k, the law of the first k coordinates converges to the k-fold product of nu; this is equivalent to a law of large numbers, namely the convergence in probability of the empirical measure to nu; this is in the lecture notes of Sznitman. So my setting is the following: I rewrite the evolution of my Metropolis-Hastings algorithm; the different coordinates interact through this acceptance set, which is the event where U_k is smaller than the ratio of densities given here. And the main result is that if I assume some regularity, namely bounded second and third derivatives of V, and if the initial one-dimensional measure on the real line is such that the fourth moment of V prime is finite, then, if the initial positions are exchangeable and chaotic with respect to this limit law, there is propagation of chaos at all times, working on the same diffusive time scale accelerated by the factor n. And what is the limit distribution? It is simply the law of the unique solution of this SDE nonlinear in the sense of McKean, which is also an example of the kind mentioned by Dan Crisan in his lecture yesterday morning. The nonlinearity enters through two moments, the moment of V prime squared and the moment of V second, and I will make this precise through the functions Gamma and G. We also have convergence in probability of the average acceptance rate to another expression. Of course the hypothesis is satisfied if the initial conditions are i.i.d. according to such a one-dimensional law. So, what are Gamma and G?
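As a quick numerical illustration of the magic number, here is a small sketch, not from the lecture: it just maximizes the speed function h(l) = 2 l^2 Phi(-l sqrt(I)/2) of the limiting diffusion and reports the corresponding average acceptance rate a(l) = 2 Phi(-l sqrt(I)/2), showing that the optimal acceptance rate is about 0.234 regardless of the value of I.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def optimal_scaling(I=1.0):
    """Maximize the limiting diffusion speed h(l) and return (l*, a(l*))."""
    h = lambda l: 2.0 * l**2 * norm.cdf(-l * np.sqrt(I) / 2.0)   # speed factor
    a = lambda l: 2.0 * norm.cdf(-l * np.sqrt(I) / 2.0)          # acceptance rate
    res = minimize_scalar(lambda l: -h(l), bounds=(1e-6, 20.0), method="bounded")
    return res.x, a(res.x)

for I in (0.5, 1.0, 4.0):
    l_star, acc = optimal_scaling(I)
    print(f"I={I:4.1f}  l*={l_star:.3f}  acceptance={acc:.4f}")
# The optimal l* scales like 2.38 / sqrt(I), but the acceptance rate at l*
# is always approximately 0.234, independently of the potential V.
```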
Well, b corresponds to the moment of V second, the second order derivative of V; since V second is bounded, b will be bounded, so I have no concern about b. As for a, it is the moment of V prime squared, so it is a number between 0 and plus infinity. I also have to define the function Gamma when a is equal to plus infinity and when a is equal to 0, and this involves complicated expressions with the cumulative distribution function of the normal law, and the same for G. When we take b equal to a, Gamma(a, a) is equal to G(a, a), and this involves a much simpler expression. And in fact, when we start at equilibrium, which of course is preserved by the Metropolis-Hastings algorithm and is also preserved by the scaling limit that I take, then X_t is distributed according to the invariant density; and when I compute the moment of V prime squared, I can rewrite V prime times e^{-V} as minus the derivative of e^{-V}, integrate by parts, and it is simply the moment of V second, so a is equal to b, and in fact we recover the result of Roberts, Gelman and Gilks: it is a particular case of our result. Now, what are the properties of Gamma and G? They are nonnegative and bounded from above; the function Gamma is continuous and bounded from below by a positive constant; the function G is continuous, except that there is a discontinuity at the origin. And last, we have some Hoelder-type regularity; because of the discontinuity at the origin we have to introduce this factor in order to obtain regularity of the coefficients of the SDE. In fact you see that you have a square root of Gamma as a diffusion coefficient and a coefficient G in the drift. Of course, since Gamma is bounded from below by a positive constant, the square root has the same regularity as Gamma; and G, you see, is multiplied by V prime, and here we have good news, because V prime of X_t plays in some sense the same role as the square root of a, and in this way we can say that the two terms have the same regularity in a suitable sense. Then, by applying Ito's formula to the difference of two solutions, we can prove uniqueness for this SDE nonlinear in the sense of McKean. Now I just want to give you an intuition of why there is a mean-field interaction in this model; this is what appears in the acceptance-rejection step, in fact. I perform a Taylor expansion up to second order: the first and second order terms here simply give, approximately, normal random variables, with an expectation computed under this empirical measure and a variance related to the empirical moment of V prime squared; and then these are the remainder terms. In fact, by independence, this product here has zero expectation, and only the diagonal terms of this form remain, together with all the terms coming from the expansion of this quantity; but we have another square root of n here, so the two contributions are of the same order. We can check that this expectation of the difference is bounded by a constant over the square root of n, so the difference between the two expressions is of order one over the square root of the number of components. Okay, so I can rewrite the dynamics of the Metropolis algorithm; I have only changed the acceptance set: we have some standard normal variables G_{k+1}, with the empirical measures giving the standard deviation and the expectation.
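Schematically, and with the caveat that the precise definitions of Gamma and G, and the exact normalizing constants, are the ones on the speaker's slides (I only reproduce the structure described above), the mean-field limit is a diffusion whose coefficients depend on the law of the solution through the two moments a_t and b_t:

```latex
% schematic form only; the 1/2 normalization and the exact Gamma, G are assumptions
dX_t \;=\; \sqrt{\Gamma(a_t, b_t)}\; dB_t \;-\; \tfrac{1}{2}\, G(a_t, b_t)\, V'(X_t)\, dt,
\qquad
a_t = \mathbb E\big[(V'(X_t))^2\big], \quad b_t = \mathbb E\big[V''(X_t)\big].
```

So the coefficients at time t involve the law of X_t itself, which is what makes the SDE nonlinear in the sense of McKean; at equilibrium a = b and Gamma(a, a) = G(a, a), which is how the Roberts, Gelman and Gilks speed is recovered.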
So you see the mean-field interaction, and we also have some covariance between G_{k+1} and G-tilde_{k+1}, which is given by this term. To prove the theorem one combines Gaussian computations, the diffusion approximation technique and the propagation of chaos technique; it is a bit long, but not very difficult. Okay, so now I will try to move towards the practical counterparts, and the first step is to understand the long-time behaviour of this SDE nonlinear in the sense of McKean. So I give the SDE again, and the first result is that the density e^{-V} is the unique invariant measure for this SDE. What is clear is that it is invariant, because before the scaling limit, when you start from this density for the initial conditions, it is preserved by the evolution of the Metropolis algorithm. So my main point here is the uniqueness. For the SDE, since Gamma is bounded from below, an invariant measure psi must have a positive moment a(psi): if a were equal to 0, the drift would vanish and you would just have a Brownian motion multiplied by a factor, which cannot have an invariant distribution. So a(psi) is finite and positive, we denote it a(psi), and b(psi) is the moment of V second with respect to this invariant measure. Then, from the computation, the proof Pierre presented, we have the expression coming from stationarity: when we compute a(psi), relating the derivative of psi with psi, and integrate by parts, we obtain an algebraic relation between a(psi) and b(psi); and from this algebraic relation, which can be rewritten as the fact that this expression, evaluated at a(psi) and b(psi), is 0, we end up with the fact that a must be equal to b, because this expression is positive when a is different from b. So I have e^{-V} as the unique invariant density, and now the next question is: does the marginal law of the solution converge to this invariant density? To check this, I go through the Fokker-Planck equation, which gives the evolution of the density psi_t of X_t; of course a(psi_t) and b(psi_t) are the corresponding moments of this density. The first question is whether we have convergence of psi_t to the invariant density as t goes to infinity. The second question is: we still have one free parameter, which is the multiplicative factor l in the proposal variance; can we deduce the choice of this free parameter from the analysis of the long-time behaviour of the density? Okay, so we need a functional inequality to prove convergence. The probability measure e^{-V} satisfies a logarithmic Sobolev inequality with constant rho if, for every probability measure nu absolutely continuous with respect to e^{-V}, the relative entropy of nu with respect to e^{-V}, which is given by this formula and is called the Kullback-Leibler divergence in statistics, is smaller than one over two rho times the Fisher information of nu with respect to e^{-V}. And the Fisher information is simply this expression. To satisfy a logarithmic Sobolev inequality with a positive constant rho, you need the potential to grow roughly quadratically at infinity, in fact. Okay, so this is what I was saying here.
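For the reader's convenience, here are the quantities just described, written out with the standard definitions and in my notation (pi denotes the target e^{-V}):

```latex
\mathrm{Ent}(\nu \,|\, \pi) \;=\; \int \log\!\Big(\frac{d\nu}{d\pi}\Big)\, d\nu,
\qquad
\mathcal I(\nu \,|\, \pi) \;=\; \int \Big|\, \partial_x \log\!\Big(\frac{d\nu}{d\pi}\Big) \Big|^2 d\nu,
\qquad
\mathrm{LSI}(\rho):\;\; \mathrm{Ent}(\nu \,|\, \pi) \;\le\; \frac{1}{2\rho}\, \mathcal I(\nu \,|\, \pi)
\;\;\text{for all } \nu \ll \pi .
```

The standard decay mechanism used below is: if one can show d/dt Ent(psi_t | pi) <= -c I(psi_t | pi) for some positive coefficient c, then LSI(rho) gives d/dt Ent <= -2 c rho Ent, hence exponential decay of the relative entropy by a Gronwall argument.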
And of course, for the standard Gaussian measure you have a logarithmic Sobolev inequality with constant equal to 1. So we assume that the initial density is such that the moment of V prime squared is finite and that the relative entropy with respect to e^{-V} is finite. And then our result is that the time derivative of the relative entropy is smaller than minus this coefficient, the positive coefficient that we used to prove uniqueness just before, times the Fisher information. And if one has a logarithmic Sobolev inequality, we deduce an exponential rate of convergence, with this rate lambda, which is a non-increasing function of the initial relative entropy. Okay. So how can we check something like this? In fact, you compute the derivative of the relative entropy and you get a nice expression: you end up with a non-positive term, which you would also obtain for a linear dynamics; and if the dynamics were linear, this would be enough, because with the logarithmic Sobolev inequality the Fisher information is larger than the relative entropy, up to a constant, and you get exponential decay by comparison of ordinary differential equations. But we have some other, non-negative contributions which are related to the nonlinearity. And then it is always the same story: you have to try many computations and put things together until you find the tricks to handle the derivative. The trick is not so complicated in the end, but still, it took time to find. It is this term (a minus b) squared here; I recall that a is the moment of V prime squared and b the moment of V second. In fact, one can rewrite it in the following way: one could integrate by parts, but I will not integrate by parts, I just recognize that this is the derivative of the logarithm of the density ratio. And now I can just apply the Cauchy-Schwarz inequality: it is smaller than a times the Fisher information that we have already met. We end up with this inequality, and we are left with this factor a. And this is not completely finished, even if we assume that the logarithmic Sobolev inequality holds, because when a goes to infinity this ratio degenerates. So we have to prevent a from blowing up; but when we have a logarithmic Sobolev inequality, we also have a transport inequality comparing the W2 Wasserstein distance to the relative entropy. And since we already know that the relative entropy is non-increasing, we know that this W2 Wasserstein distance does not go to infinity, and that allows us to control a. And then we obtain this exponential rate of convergence by taking an infimum of this coefficient over large times. Okay? So maybe I should finish quickly; I will just summarize the last slides. So we have this inequality, okay? And what we did is to try to optimize this coefficient over l, to make it as large as possible in the long run. And it is possible to obtain several asymptotic regimes: the optimal l, as a function of a and b, is given by this expression when the ratio goes to 0, by that expression when the ratio is equal to 1, and by this expression when the ratio goes to plus infinity.
And then we made the link with a constant acceptance rate strategy. So we adopted the strategy of choosing l in this way for the three asymptotic regimes that I consider, okay? And now we can relate the two: the optimal-l regimes are here and the constant acceptance rate strategy is here, okay? And this leads to an acceptance rate alpha which should be 1 for this limit, 0.35 for the second one, and 0.27 for the last regime, okay? So this means that a constant acceptance rate strategy with alpha between one quarter and one third seems sensible. I will skip this, and I will just go to the numerical example. It is simply a Gaussian target; what is nice with a Gaussian target is that the relative entropy has a simple expression in terms of the second order moment and the expectation: this is the second order moment, I think, and that is the expectation. And one can compute the derivative of the relative entropy and optimize this derivative over l. We consider an ergodic average after a burn-in period T_0, okay? The final time capital T is fixed, the two curves are for the expectation and for the expectation of the square, and they should decrease as a function of the burn-in period T_0. Okay, so this one is for the second order moment and this one for the first order moment. This curve is obtained with a constant-l strategy, with the optimal l of Roberts, Gelman and Gilks; then we have a strategy which targets a constant acceptance rate of 0.27, in an adaptive or a non-adaptive manner; l-optimal is the optimal choice, namely the choice of l which minimizes this expression for the derivative of the relative entropy. Okay? And you see that everything behaves in the same way, it should be close to 0, except the constant-l strategy: if I zoom in, and it is no surprise, the constant-l strategy is not good, okay? The constant acceptance rate strategies are all similar, okay? They are almost as good as the experimentally optimal convergence strategy, and also as the one which minimizes the derivative of the relative entropy. Okay? And of course this confirms what practitioners have known for a long time: that they should tune the algorithm so that the average acceptance rate is between 0.2 and 0.4. Okay? Thank you for your attention.
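As a practical companion to the constant acceptance rate strategies compared above, here is a minimal sketch, not the speaker's code, of an adaptive random walk Metropolis that drives the empirical acceptance rate toward a target such as 0.27 with a Robbins-Monro type update; the target value, the step-size schedule, and all function names are illustrative choices.

```python
import numpy as np

def adaptive_rwm(log_pi, x0, n_steps, target_acc=0.27, sigma0=1.0, rng=None):
    """Random walk Metropolis with stochastic-approximation tuning of the
    proposal standard deviation toward a prescribed average acceptance rate."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    log_sigma = np.log(sigma0)
    log_pi_x = log_pi(x)
    chain = [x.copy()]
    for k in range(1, n_steps + 1):
        y = x + np.exp(log_sigma) * rng.standard_normal(x.shape)
        log_pi_y = log_pi(y)
        acc_prob = np.exp(min(0.0, log_pi_y - log_pi_x))  # Metropolis acceptance probability
        if rng.uniform() < acc_prob:
            x, log_pi_x = y, log_pi_y
        # increase sigma when accepting too often, decrease when accepting too rarely
        log_sigma += (acc_prob - target_acc) / k**0.6
        chain.append(x.copy())
    return np.array(chain), np.exp(log_sigma)
```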
We first introduce the Metropolis-Hastings algorithm. We then consider the Random Walk Metropolis algorithm on Rn with Gaussian proposals, and when the target probability measure is the n-fold product of a one dimensional law. It is well-known that, in the limit n tends to infinity, starting at equilibrium and for an appropriate scaling of the variance and of the timescale as a function of the dimension n, a diffusive limit is obtained for each component of the Markov chain. We generalize this result when the initial distribution is not the target probability measure. The obtained diffusive limit is the solution to a stochastic differential equation nonlinear in the sense of McKean. We prove convergence to equilibrium for this equation. We discuss practical counterparts in order to optimize the variance of the proposal distribution to accelerate convergence to equilibrium. Our analysis confirms the interest of the constant acceptance rate strategy (with acceptance rate between 1/4 and 1/3).
10.5446/57395 (DOI)
Thank you very much for coming with me today. I'll give two lectures. The first one will be related mainly to multilevel Monte Carlo, and in the second one we will touch upon multi-index extensions of the things that I'm going to say today, but not in an adaptive fashion, as we will see. So first, let us go and discuss a little bit about adaptivity within multilevel Monte Carlo. Let's see how this goes. So the plan is as follows. Well, these are some pictures from the place I come from. This is KAUST, actually, the King Abdullah University of Science and Technology. Some of the students are here today. And let's see, this is the plan. Most of you are very well acquainted with Monte Carlo and possibly multilevel Monte Carlo as well, so I will recap a little bit just to set up the notation at the beginning, so we speak the same language. And then we go into different versions of adaptivity. As you will see, there are many ideas for adaptive approximations in multilevel Monte Carlo, and for that reason I will also touch upon different models. One of them will be computations with random PDEs, that is, PDEs that have random coefficients. The second one will be diffusions, that is, Ito stochastic differential equations. And the third one will be pure jump processes, say stochastic reaction networks, whose trajectories are piecewise constant. I think this will motivate a little bit the different ideas that we will see. And finally, we will discuss a little bit the continuation multilevel Monte Carlo and optimal hierarchies. Hopefully, if time allows, I will also discuss optimal stopping with non-asymptotic rules within Monte Carlo. So I'll try to really be adaptive in this process. So let's begin with our first example. This is a linear elliptic PDE that is very popular as a model case in the context of random PDEs. Here the coefficients a and f are random fields, and mainly we will concentrate on the effect of a, because a is, after all, the one that acts nonlinearly on the solution u. And out of the solution u we are interested in computing a statistic, which is real-valued. Again, as an example, in this lecture I'm going to go through the computation of the expected value of a functional of the solution, here denoted by psi of u. Psi could be linear or nonlinear, and we have some smoothness restrictions on psi, but for the moment think of a linear functional acting on u. Again, this is not the only thing you can compute with Monte Carlo, but it is the one we will focus on mostly through the lectures. All right, so the goal is set: we want to compute this expectation. Now, this expectation involves the solution of a differential equation, and seldom do we have the exact solution u at our disposal. So instead of u, we are going to approximate it by u_h, say, where u_h comes from a discretization of this equation. In this case, of course, it will be only a space discretization, but in general you may have time as well, or even other parameters. So think that you have prescribed the use of your favorite method, which in this context could be finite elements, finite differences, finite volumes, whatever, and that discretization is indexed by the parameter h. All right. So h will govern two aspects of the solution: the approximation quality of u_h with respect to u, and at the same time, the cost of evaluating u_h.
With the assumption, of course, of consistency, meaning that when h goes to zero, u_h converges in the proper space to u, and at the same time the cost of evaluating u_h goes to infinity, right? Otherwise we would just let h go to zero and get the exact solution. Now, since we are interested in this functional, which is a real-valued random variable, we will approximate it; let me denote the approximation by g_h, where h is again the discretization parameter. All right? So that is hopefully a natural notation. Let's go through the error then: the difference between what we would like to compute, which is the expected value of g, and what we actually compute, which is a sample average based on i.i.d. samples of the inexact quantity, which is g_h and not g. Right? So we split it into two main sources. Again, the trick is to subtract the expected value of the approximate quantity, to split the difference between this and this into two things: one that we call the bias error, the other the statistical error. And for the bias error we have essentially an estimate of this kind, which comes directly from the deterministic analysis, pointwise, and is then integrated under this expectation. So essentially what we have is that if your method has order w here with respect to h, the discretization parameter, then the constant that comes from the deterministic analysis will be random, of course, because it depends on a and on f; but once you take the expected value, you get the average of such a constant, which is no longer stochastic, it is just a deterministic constant. All right? This analysis holds as long as there are no changes of order with respect to the randomness in the problem, which is a reasonable assumption; if that happens, this will not be the correct way to do things. But again, this is an easy way to explain how this works. So let us consider that this w does not change with respect to omega, and therefore we just get a constant here times h to the w. Okay, so this is a more or less standard discretization error. The second term is the standard statistical error, right? And this error is mainly governed by the number of samples M. And I say mainly because the random variable that you are actually sampling depends on h, right? So this is an unbiased error, correct, because we take the expected value here and then sample the same random variable in an i.i.d. fashion, so we only need to control its variance, right? And that is motivated by the central limit theorem: properly scaled, this is approximated by a normally distributed random variable, and this allows us to write the constraint of having this error smaller than a certain quantity in probability as a constraint on the variance, right? So, as we can see, we have two contributions here, one coming from h, the other coming from M. And as long as the problem is a reasonably well-posed problem and there is nothing strange, this variance is uniformly bounded with respect to h, right? And therefore the claim is that M is essentially the quantity that governs the statistical error, okay? And if the problem has a singularity of some kind, maybe this is not completely true, but again, in the examples that we are going to visit, this variance is going to be bounded uniformly with respect to h, okay?
So, two sources: one is governed entirely by h and converges algebraically with respect to h; the w here is determined by the numerical method that you use and the regularity of the problem that you are solving. And this M comes very robustly into the analysis: we only need to ensure that this variance is bounded, essentially. So now that we have the two main errors identified, we try to see what the corresponding complexity is, which is the computational work needed to achieve a certain accuracy with Monte Carlo in this context, for a prescribed confidence level, okay? And to do that, essentially what you do is to minimize the work subject to the total error constraint, okay? The total error constraint, again in probability, because this was essentially an analysis in probability, is the sum of the bias plus the statistical error. The bias was governed by this, the statistical error by that. And the total work, naturally, is what? It is the product of the number of samples that you have to solve for, right, times the expected cost per sample, okay? And again, this is an example, right? The example we have illustrated is a linear elliptic differential equation, right? And in that case, what we have to do is to solve a linear system every time we need to create a new sample of the random variable. So if d is the dimensionality of the space that you are solving in, which could be 3D, for example, 2D, 1D, then h to the minus d, if h is the discretization parameter, is essentially the number of degrees of freedom that you have in your discretization. So h to the minus d is the number of degrees of freedom, and then raised to the power gamma is essentially the cost one incurs by solving the system, okay? So gamma equals one would be an optimal solver, and gamma equals three would be Gaussian elimination, okay? So gamma can range between one and three, and of course h to the minus d is the number of degrees of freedom in your system. Is that clear? Okay? So this adapts, again, to the type of solver: if you use one type of solver or another, this gamma may change, okay? Let's see what else is to be said. Yes, okay, so I mentioned the following: I said that this cost was understood in the mean sense. And why do I say that? Well, if you have an iterative solver, right, and the spectrum of the operator that you are solving with is random, because a, the diffusivity coefficient, was random itself, then the number of iterations may depend on the realization, right? So what you are really interested in is the mean cost, not just the cost; the mean cost will be representative, and there will be variations with respect to each of the realizations. Is that clear? Okay? Now, if you use a direct solver this will not be the case, but okay, I just want to emphasize that a little bit. Now, if you actually do the minimization of this bound subject to this constraint, you get this classical cost with respect to the tolerance that you are imposing, okay? In the context of the central limit theorem, the confidence level only acts on the constant C2, so it really plays a role in that constant; it does not appear in this exponent, it is just acting on the constant, and that's why you don't see it. But again, everything now depends on the accuracy you want to achieve, and the cost can be split into two contributions: one that is toll to the minus 2, and the other that is toll to the minus d gamma divided by w.
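Putting the two error sources and the work model together, here is a compact restatement of the derivation just sketched, in my notation (TOL is the tolerance, written "toll" in the transcript):

```latex
\big|\mathbb E[g] - \mathbb E[g_h]\big| \;\le\; C\, h^{w},
\qquad
\frac{\mathrm{Var}[g_h]}{M} \;\lesssim\; \mathrm{TOL}^2,
\qquad
\mathrm{Work} \;\approx\; M \times h^{-d\gamma},
```

so choosing h of order TOL^{1/w} for the bias and M of order TOL^{-2} for the statistical error gives Work of order TOL^{-2} times TOL^{-d\gamma/w}, which is exactly the two-factor Monte Carlo complexity quoted above.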
Now, the first part, toll to the minus 2, is the classical cost, in terms of number of samples, of a Monte Carlo method. This is essentially what is needed to make this less than toll. So that would be the cost one would have to pay if each sample cost me order 1. The second part of the cost comes from the fact that I need to make h sufficiently small so that the bias constraint is satisfied, meaning that this term is actually less than toll. Okay? So essentially what you see here is the number of samples times the cost per sample, which is essentially what you were seeing here as well, okay? Now, what multilevel Monte Carlo tries to do is to go beyond this bound. So what we will improve somehow is this part, and this will still be a barrier that we will not be able to cross, at least with Monte Carlo methods. So, just to set the notation for multilevel Monte Carlo, let's recall the construction of Heinrich and Giles. Let's take a constant beta for the subdivision; for all purposes, think of beta equals 2 for the moment. Okay? So what we say is the following. Instead of taking just a single h that was meant to satisfy the bias constraint, what we are going to do now is to take a number of such h's. In principle, if you come from numerical analysis, this is a natural way to proceed, because every time you try to solve a differential equation, you try a certain mesh; if you are not happy, you subdivide it, right, and you test the stopping criterion, and if you are not happy, you subdivide again until you meet the stopping criterion. Normally, once you get to the level where you accept the error to be smaller than your prescribed accuracy, you throw away all the other meshes, right? You don't care, essentially, and you just work with the last accepted one. Here, on the contrary, we are going to keep those for a certain purpose, and the purpose is to produce an effective control variate. So what one is doing here, again let's go through the definitions first, is to introduce these delta operators, which are differences between consecutive approximations. So this, remember, is the functional evaluated using a mesh of size h_l, and this is the functional using a mesh of size h_{l-1}. This is the difference between two random variables, and we are assuming, of course, that they are not independent: they are going to be sampled using the same omega. That is fundamental in this construction, because that is what will make this difference actually go to zero, essentially, when little l here goes to infinity. That again comes from consistency, in L2, which is what we are mainly going to need for this to work fine. So let's go again to the construction. The construction of these h_l's is done via a geometric subdivision, meaning that you start with a given mesh; if you are not happy, subdivide by 2; if you are not happy, subdivide by 2 again, and so forth. That creates these h_l's, with h_l equal to h_0 times beta to the minus l, with beta equals 2. Is that clear? So now that these deltas have been defined, one rewrites; remember, this was the quantity that we were trying to use as an approximation to this one, so that the bias between the two was less than toll. Now this is the random variable that one is going to apply the control variates to, and before doing so, one rewrites this expectation as this other sum.
So this is an exact representation, and the sum runs over all levels of approximation, from zero to the level L capital that satisfies the bias constraint. Yes? Now, instead of using Monte Carlo directly here, one instead uses Monte Carlo independently for each of the terms coming into this sum, all right? And that is sort of a scary proposition, because the first time you see it, it says: at the beginning I only had one expectation to compute, now I have L capital plus one; how is that going to work? Well, before we look into that, let's see how this looks. And as you see, this, which is actually the multilevel Monte Carlo estimator, looks precisely like the sum of L capital plus one Monte Carlo estimators, where each of them is now sampling something different: each of them is sampling these deltas. And of course, remember that for l equals zero this is not a difference, it is just g evaluated on the coarsest approximation. Okay, so you can think of this as g at h_0, right, if you think of a hierarchy of approximations, and then the other terms are just corrections, right, that are added to the coarsest level. Now that we have this, how do we motivate, at least intuitively, why this is going to work better than the original Monte Carlo approach? Well, let's see: the bias is not going to change. We said, essentially, that this L is chosen so that this bias is less than toll, and what we are rewriting is exactly this quantity. So if we are going to improve on the sampling, it is because we are going to pay less for the samples, essentially; the difference will not come from the bias. Yes, please. How do we choose the number of samples for each level? Oh, this is coming, this is coming, don't worry. Okay. Now, the variance here is the variance of the Monte Carlo estimator, right? So what did we say the statistical constraint looked like? Well, essentially, thanks to the construction based on independent samples, the variance of this estimator is just one over M_L capital times the variance of this g_L capital. Now, you can think that g_L capital is more or less close to g, so more or less you can expect that this looks like the variance of g divided by M_L, right? So this is essentially saying that on the deepest level of approximation, L capital, the one that satisfies the bias constraint, you need to use toll to the minus two samples, right? And that is what you expect from Monte Carlo. If, on the contrary, you use multilevel Monte Carlo, the variance of the estimator, again by the construction with independence, not only between the samples that we use on each level but also among levels, is a sum of variances across levels. The first variance, the one coming from level zero, looks very much like this term, right? This is not a difference, and this is not a difference either. But the other terms are actually one over M_l, right, times variances of differences. And what's the point here? Remember, as little l gets larger and larger, we hope to have convergence, okay, just by consistency. Now, in which sense do we expect to have convergence? In this setting, what is most useful is to consider not only convergence in the mean, but also in L2, which means that these variances of differences are going to zero, right? This is where multilevel Monte Carlo becomes effective.
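As an aside, here is a minimal sketch of the multilevel estimator as it was just written on the slide. The routine name `sample_diff` and the way the sample sizes are passed in are my own choices, not the lecture's: the user supplies `sample_diff(l, rng)` returning one sample of Delta G_l = G_l - G_{l-1} computed with the same omega for both terms (and just G_0 when l = 0).

```python
import numpy as np

def mlmc_estimate(sample_diff, M, rng=None):
    """Multilevel Monte Carlo estimator.

    sample_diff(l, rng) -> one sample of Delta G_l = G_l - G_{l-1}
                           (for l = 0 it must return a sample of G_0),
                           with both terms evaluated on the same omega.
    M                   -> list of sample sizes M_0, ..., M_L per level.
    """
    rng = np.random.default_rng() if rng is None else rng
    estimate = 0.0
    for l, M_l in enumerate(M):
        samples = np.array([sample_diff(l, rng) for _ in range(M_l)])
        estimate += samples.mean()   # one independent MC estimator per level
    return estimate
```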
And if they actually become very, very small, it means that for really deep levels, when l is large, each sample costs me a lot, but the variance is correspondingly small, so the number M_l that we have to use is really small. And this is where you get real advantages, because you are sampling much less on the most costly levels, all right? Now, you usually have to check some conditions to make sure that the choice of the level zero is, say, sound, but again, the whole thing can be put in terms of these inequalities, which again indicate that L2 is very relevant in the choice of the multilevel parameters. So far so good? Any questions? You said there has to be correlation between the different levels? No, there's not. Actually, no, they are constructed totally independently, okay? That's important. Yes, from the independence in the construction you get immediately this sum, okay? Otherwise you would have extra terms, all right? It's not that it's impossible to do that, but the construction here is based on independence. Okay. So, as a cartoon picture of the approximation, we are somehow moving samples from deep, refined levels that are costly onto levels that are cheaper and less refined. Okay. Now, if you want to go beyond that intuition, you need some assumptions, right? And these assumptions may or may not hold in your problem; you have to verify that they are satisfied, and sometimes you may have precisely this kind of convergence, or some other type, depending on what problem you have. So here we are again going to concentrate on this algebraic type of convergence with respect to h. Remember, h was the discretization parameter; you can think of delta x, delta y, delta z in your linear elliptic PDE. And we were assuming that h_l was essentially h_0 times beta to the minus l, right, a geometric subdivision. So this first equation here is nothing different from what I wrote before: it means that the error in the mean is h to the w, essentially. You see? So nothing is different, it is just a rewriting of what we had before. Now again, this w, which is the exponent of weak convergence, depends on the assumptions on your problem and the numerical discretization that you are using, and has to be found for each problem, right? The variance assumption again indicates some kind of error rate, but this error is now measured in L2; and instead of taking a difference between g and g_l, we are now taking, remember, the difference between g_l and g_{l-1} inside the variance. And we have a strong rate of convergence, which is s, and which is not going to be w in general. And for the last one, we have the cost, the mean cost, of evaluating a difference on the level l, right? So this is delta g_l, and this in principle entails the cost of evaluating g_l and g_{l-1}, because we are going to take the difference; usually one assumes that g_l is the one that dominates, so this goes like that. And this is nothing different from what we said before, because before we said that the cost was h to the minus d gamma, and again, this is the same thing, just rewritten in terms of this beta and this l. I emphasize again that the constants c, w and s also depend on the problem and have to be found. And remember, gamma may also depend on the solver that you use, not just on the problem and the discretization; so if you change the solver, gamma will change even though you keep the rest of the things constant.
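In compact form, the three assumptions just described (weak rate w, strong or variance rate s, work growth d gamma) and the estimator variance that follows from independence across levels and samples read, in my notation:

```latex
\big|\mathbb E[g - g_\ell]\big| \le c_1\, h_\ell^{\,w}, \qquad
\mathrm{Var}\big[\Delta g_\ell\big] \le c_2\, h_\ell^{\,s}, \qquad
\mathbb E\big[\mathrm{Work}(\Delta g_\ell)\big] \le c_3\, h_\ell^{-d\gamma},
\qquad h_\ell = h_0\, \beta^{-\ell},
```
```latex
\mathrm{Var}\Big[\sum_{\ell=0}^{L} \frac{1}{M_\ell}\sum_{m=1}^{M_\ell} \Delta g_\ell(\omega_{\ell,m})\Big]
\;=\; \sum_{\ell=0}^{L} \frac{\mathrm{Var}[\Delta g_\ell]}{M_\ell}.
```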
As an example, for a smooth linear PDE of the type we discussed today, if you use multilinear piecewise continuous finite elements, we will have 2w equal to s equal to 4, and, depending on the solver, gamma ranging from 1 to 3. So what's the total work? And here we start getting close to your question about the M's. The total work is found in the same way as with Monte Carlo: we have L capital plus 1 levels, on each level we have to use M_l samples, times the cost per sample, which is the work of delta g_l in an expected sense. So what do we do? Well, remember, we have to find the M_l's. If you fix L, essentially, you can write an optimization problem, and I am going to go into that a little bit more when I get to multi-index Monte Carlo, but we can just state it here. Remember, if you fix L, the bias constraint is not going to be important, and what you want to minimize, with respect to the M_l's from l equals 0 to L capital, is the total work. And this is the sum from l equals 0 to L capital of M_l times this work of delta g_l; I hope it is possible to see this, there's no reflection or anything like that. And then, of course, there is a constraint, which is the statistical constraint: the variance being less than toll squared. And what is the variance? It is just one over M_0 times the variance of g_0, let me just write it differently, plus the sum from l equals 1 to L capital of the variance of delta g_l divided by M_l, and this is less than toll squared. Of course, there is a constant in between that I am not writing; the constant comes from the confidence level that you want to impose. Okay. So the control variables here are the M_l's, L capital plus one of them, right? And here you see how they act. So you can just write the Lagrangian relaxation and you realize that you can find these M_l's. And the M_l's, at least if you make a real-valued relaxation first, are going to be toll to the minus 2, times a constant that comes from the sum over the levels, times essentially the variance of delta g_l, I think, you may correct me if I am doing this wrong, divided by the cost, and there is a square root here, if I am not mistaken. Okay. So you first write this Lagrangian relaxation, solve it, and then round up, I mean, take the ceiling of this number, to make sure that everything goes well. Okay. And from there you get to that formula, essentially, assuming that the cost is not trivial, essentially that you are not getting less than one realization per level; but that is a technical assumption, you can stay with this for the moment. So what do you see from here? You see that after you substitute the optimal number of samples, the cost only depends on L capital, because these are problem- and discretization-based constants. So once you fix the discretization method and the problem, this work per level and this variance per level are given; maybe you don't know them precisely a priori, but they are given. And this is a sum of positive terms. So the first thing you observe is that when the tolerance goes to zero, L has to go to infinity, right? Is it clear why? You have to satisfy the bias constraint, right? The bias constraint, essentially, was h to the w less than toll, and L is coupled directly with this h to the w. So L essentially goes to infinity like the logarithm of toll, in absolute value; that's a ballpark.
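As an aside to the allocation derived above, here is a small sketch of the standard optimal-sample formula that comes out of that Lagrangian relaxation; the confidence constant `c_alpha` and the way the per-level variances and costs are passed in are placeholders of my own choosing.

```python
import numpy as np

def optimal_samples(variances, costs, toll, c_alpha=2.0):
    """Optimal M_l from the Lagrangian relaxation of
       minimize sum_l M_l * W_l  subject to  sum_l V_l / M_l <= (toll / c_alpha)^2,
       rounded up to integers."""
    V = np.asarray(variances, dtype=float)   # Var[Delta G_l]
    W = np.asarray(costs, dtype=float)       # expected work per sample of Delta G_l
    lagrange_factor = np.sum(np.sqrt(V * W))
    M_real = (c_alpha / toll) ** 2 * np.sqrt(V / W) * lagrange_factor
    return np.maximum(1, np.ceil(M_real)).astype(int)

# Example with made-up rates: V_l ~ 2^(-2l), W_l ~ 2^l
L = 5
print(optimal_samples(2.0 ** (-2 * np.arange(L + 1)),
                      2.0 ** np.arange(L + 1), toll=1e-2))
```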
You may have log-log terms on top of that, but really it is log toll. So this is going to infinity as toll goes to zero. So this sum can only do two things: either stay bounded as toll goes to zero, or diverge. If it is diverging, it is because the work is somehow growing faster than the variance is able to converge, right? But if it is actually finite, which is essentially the case where the variance is dominating the work somehow, we have a very interesting case, though it is sort of not intuitive the first time you see it: you get a cost that is essentially of order toll to the minus two. And if you remember Monte Carlo, when we discussed it first, toll to the minus two was the optimal cost of Monte Carlo when each of the samples, in the mean, cost me order one. So essentially this means that you do not feel the cost of discretization, even though you are refining the levels infinitely much as toll goes to zero. Is that okay? In principle, not, right? Somehow it means that even though we still have to sample at the level L capital, we take so few samples there that that cost really does not dominate the cost coming from the other levels. That is clear, right? Okay, so now that you see what is coming, let's see the result with these assumptions in mind. L capital, as I told you, is determined by the bias constraint and is essentially linked to toll by a log factor, where the log comes connected with the weak rate of convergence and the logarithm of the subdivision constant beta. And the samples M_l are chosen optimally, as we discussed before, just by solving that Lagrangian relaxation problem. And after substitution, we get this fantastic case, toll to the minus two, when s is larger than d gamma, right? So essentially both the work and the variance behave with respect to l in an exponential fashion: the work is growing with the d gamma rate and the variance is decreasing with the s rate. So as long as s is larger than d gamma, this sum is convergent, and that is what explains the first result. The second is a limit case where essentially they balance each other: each term is constant, so the sum gives L times a constant, and then you get L squared, and L was log toll, so that explains the second line here. And in the last one, well, the work is growing faster than the variance is converging, and instead of getting something that is convergent, you get some algebraic deterioration; but still this algebraic deterioration is better than what you would get with Monte Carlo: you are decreasing from d gamma here in the exponent to d gamma minus s, okay? So you are still exploiting a little bit of the strong convergence, I mean the convergence in L2 that one assumed, okay? So far for the introduction with the uniform methods. Now the question we want to touch upon is how to extend this multilevel Monte Carlo to non-uniform, adaptive discretization settings, and whether it is worth it, okay? And there are several ideas to introduce non-uniform discretizations, or stochastic levels, if you wish. So let's begin with the... okay, well, this slide I put just to include references, and I already gave up trying because, I mean, the pages are not enough to describe so many things.
I'd rather just direct people to look at Mike Giles's homepage, which is kept up to date with the growth of the community, and that is actually where you get the best list of references in the area, so I am not going to talk too much about that. So let's go to adaptivity then. In this context, again, we are trying to approximate the expected value of an output quantity. In this case I am switching the example: I am going to stochastic differential equations, Ito differential equations. So in this setting, each of the paths evolves in finite dimensions with respect to time, right? So you solve each of the paths up to a final value, and at that final value you evaluate the function g, which in this example could be, say, a put option in finance, and then you have to integrate this g against the PDF of the final-time value. Okay, so you can do this with Monte Carlo, just by sampling many, many paths and again averaging, as we discussed before. So here are some references again; I am not going to stop, and I am going directly into the problem formulation. Here is an SDE. If you have not seen an SDE before, it is not a problem, because you can think of its discretization via Euler-Maruyama, which is coming in just a second. So x is something that is finite-dimensional now, it is not a function as in the first example; it evolves in R^d. And g is a given function, so our goal is to compute the expected value of g of x at the final time T, that is, the evaluation at the final time of the solution of an SDE. Again we are given an accuracy constraint toll and a prescribed confidence level, and everything is driven by a k-dimensional Wiener process. All right, this has plenty of applications that I will not dwell on, and what we are going to do is go through adaptive Monte Carlo and actually see that there are several versions of adaptivity: the first one we did was with respect to the weak error, but later we did something that controls the strong error, and we will get to that later. So, first, the discretization algorithm. This is just Euler-Maruyama, so if you have not seen SDEs before, this is a new term that comes in. Remember it comes through these increments: if you just look at the first part, it is just forward Euler, right, based on this drift function a, and the second part is the same type of discretization; you have to evaluate b at the left point, essentially the current point x bar, but you multiply by these random variables. These random variables are normally distributed with mean zero, and the variance is just the delta t that you are taking. So the delta t does not need to be uniform at all; and of course you can obtain these increments just by scaling standard normal random variables. And the Monte Carlo estimator looks exactly as we discussed before: it is an average of i.i.d. samples, but the i.i.d. samples are not from the exact process, they come from the solution of this forward Euler scheme. Forward Euler, of course, is not the only discretization one can use in this context, but it is the one I am going to exemplify. Okay, so as we discussed before, the error can be split into a tolerance for the time discretization and a tolerance for the statistical error, and in this particular case, as we discussed before, the cost per sample is essentially proportional to the number of time steps, and the number of time steps is essentially proportional to one over toll.
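As a small aside, here is a minimal Python sketch of the Euler-Maruyama step just described; the drift `a`, diffusion `b`, and the geometric-Brownian-motion example are illustrative choices, not the lecture's.

```python
import numpy as np

def euler_maruyama(a, b, x0, t_grid, rng=None):
    """Forward Euler-Maruyama on a given (possibly non-uniform) time grid.

    a(t, x), b(t, x) : drift and diffusion functions
    t_grid           : increasing array of times t_0 < ... < t_N
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(len(t_grid))
    x[0] = x0
    for n in range(len(t_grid) - 1):
        dt = t_grid[n + 1] - t_grid[n]
        dW = np.sqrt(dt) * rng.standard_normal()   # N(0, dt) Wiener increment
        x[n + 1] = x[n] + a(t_grid[n], x[n]) * dt + b(t_grid[n], x[n]) * dW
    return x

# Example: geometric Brownian motion dX = 0.05 X dt + 0.2 X dW on [0, 1]
path = euler_maruyama(lambda t, x: 0.05 * x, lambda t, x: 0.2 * x,
                      x0=1.0, t_grid=np.linspace(0.0, 1.0, 101))
```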
This comes from the weak convergence rate one of Euler, which is the same for Euler-Maruyama in this setting, and this brings us to the complexity toll to the minus three for Monte Carlo. Okay, and now you want to introduce adaptivity. So what is adaptivity in this context? In this context adaptivity means the ability to choose the time steps non-uniformly, and this non-uniformity can also depend on omega. So the time steps, with t_0 equal to 0 and the final time equal to T capital, are not going to be equally spaced, and they may adapt to omega, because the dynamics of the solution may depend on omega, maybe in a dramatic way. So in this context we will create realizations, here denoted by x bar, that depend on these meshes delta t of omega. So again, why do we do this? Well, sometimes there is non-smoothness in a or b that deteriorates the usual behavior of the uniform method, and this asks essentially for adaptivity in a natural way. And what do we need to be able to produce such delta t's? Well, at least with our approach, you need to rewrite the weak error in this other way: you need to introduce an error density, which is a function rho_w that essentially does not depend on delta t, so that the weak error can be split into contributions coming from each of the different discretization steps. You see here, if you discretize over the time steps, this is a sum over time steps of essentially delta t_n squared times rho_w. Now, the fact that delta t here is inside the expectation allows for the choice of stochastic time steps, because rho_w may still vary dramatically with respect to omega; that is the point. If this rho_w were not varying dramatically with respect to omega, then it would actually be cheaper to take a non-uniform but deterministic discretization in delta t, is that clear? There is an overhead in doing this fully stochastically, so whenever possible you avoid it. Now, the funny thing is that in this context there is not only a weak density but also a strong density. You can actually make an approximation, and you see here the power is one but here the power is two: you can write the L2 error in the same fashion for Euler, where now this rho_s is not exactly this rho_w; it is still something that you can approximate with computable quantities, and it is able to distribute the strong error across the different intervals that come into the discretization. In principle this is not obvious, because when you square this you may get a lot of cross terms, so being able to tell that the diagonal terms are actually the dominant ones needs some understanding. Okay, but the whole point now is that once you have produced these error densities, you can produce single-level adaptive algorithms. Single-level adaptive algorithms will try to find these time steps so that, say, a bias error constraint is satisfied, or even this type of L2 error constraint is satisfied. And what does that allow? It allows for the following construction: if you want to create a multilevel adaptive method, what you drive the discretization with is a sequence of tolerances.
So if you have a reliable single-level adaptive method, the only step you need to go from there to a multilevel adaptive method is to create a sequence of tolerances and to say that the first level will be defined by the discretizations my single-level adaptive method produces to reach that accuracy. The next level is again refinements of those, achieving the next accuracy constraint, and so forth for all the levels. So you see that the levels no longer have a prescribed h; that notion is gone. Each of the realizations may use a completely different mesh, which adapts to the characteristics of the omega that you have in mind. But still one is able to extract the good properties: the variance convergence, the mean convergence, and the rates of growth of the work with respect to the level. Okay? So what is optimal, at least to show results on the algorithm in this context, is to refine and to stop according to these simple rules. Essentially what you do is to compute solutions and error indicators r_n for each time step. And the error indicators r_n, in the context of this error density, are defined essentially as rho_n times delta t_n squared; rho_n could be the weak density if you are controlling the weak error, or the strong one if you are using the strong approximation, that is not important here. But again, this defines the contribution of time step n, in a given realization, to the total error. And what we do to stop the discretization is to verify whether the maximum of these indicators is or is not above this threshold. And this, you see, is the tolerance that we prescribe for the bias constraint, which is what we want to achieve, divided by the expected value of the number of time steps; so this is essentially the mean error that we would like to achieve on each interval. It turns out to be optimal, essentially, to equidistribute the error among the intervals. One caveat, though: this N is not known a priori, right, because it is the number of steps that you need to solve, so these quantities have to be found on the fly. But this is a constant, and this is another constant, that are essentially problem independent. So what you have to remember here is that this is not going to be the same as this constant for refinement: the refinement criterion is a little bit more stringent than the acceptance criterion. So essentially, if you see that you are not going to stop, because some of the indicators are larger than that, then you are going to refine more than just those intervals that do not satisfy this constraint. Now, another technical point in this construction is the following: if a refinement has not been accepted and you mark some intervals in time for refinement, you need to produce the corresponding interpolation values for the delta W's that drive your discretization. And these, you know, in order not to introduce any biases, have to be consistent with the points that you already sampled, and this is done according to Brownian bridges, right? That is the correct way to do this discretization. So this plot essentially shows that for a given level you use, for example, the blue discretization, right? And this entails evaluating W at all those points. But now you need to evaluate W at points in between, and of course they are not going to be linear interpolants of the previous values; they will differ from them, and this difference is actually given by the Brownian bridge, okay? So it is just conditional Gaussian sampling, nothing more.
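Here is a small sketch of that conditional Gaussian (Brownian bridge) sampling, used when an interval is marked for refinement and W must be evaluated at a new intermediate time without introducing bias; the function name and argument layout are my own.

```python
import numpy as np

def brownian_bridge_point(t_left, w_left, t_right, w_right, t_new, rng=None):
    """Sample W(t_new) conditionally on W(t_left) = w_left and W(t_right) = w_right,
    for t_left < t_new < t_right (one-dimensional Brownian bridge)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = (t_new - t_left) / (t_right - t_left)
    mean = (1.0 - theta) * w_left + theta * w_right      # linear interpolant
    var = theta * (1.0 - theta) * (t_right - t_left)     # conditional variance
    return mean + np.sqrt(var) * rng.standard_normal()
```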
Okay, any questions? No? Okay, so what was the main idea again? Let the tolerance in the adaptive algorithm define the hierarchy. That is essentially what you have to remember. And this works for any problem: if you have a sound error density, for PDEs or SDEs, you can just use this approach and it will work. Like I said before, adaptivity is useful when you have issues with a or b, or when the process itself has issues with respect to, say, boundaries where it is being stopped or reflected; those actually ask for adaptivity. You may have stiffness as well, and this is also nicely treated by adaptive algorithms. So if you have a stiff drift, and you do not want to go through this a posteriori global error control, you may at least choose delta t to make sure that you remain within the stability region of the method; if the stiffness is coming only from a, that can be done quite easily. So what is interesting here? Usually, with an a posteriori error estimate, you show that the discretization is efficient or not, in the sense that the estimate of the error gets close to the error you want to impose. But in this context of error densities you can show much more: you can show that the algorithm stops, that asymptotically you have normality, and that asymptotically you have accuracy. And what is asymptotic in this case is when the tolerance goes to zero. There is no notion of h equal to zero, because h is gone; the only parameter that drives the discretization here is the accuracy constraint, and the accuracy is given by toll. So when you look at asymptotics in this context, it is when toll goes to zero. And of course we have kept the confidence level constant in this discussion; we are not letting the confidence go to one. So you have asymptotic normality and you also have asymptotic accuracy; I will tell you later what this means, but it really means that when the tolerance goes to zero, the error is essentially bounded by toll. And the complexity, up to essentially a constant, is the optimal complexity. For the Itô stochastic differential equations there is one discussion that I have not made, and I just put the warning here: it actually works fine in the theory, but it is not obvious. The discretizations that you get are adaptive, but they are not adapted. Not adapted means not adapted to the natural filtration of the problem, which is generated by the Wiener process. Why? Because the densities, and maybe I have not told you, essentially involve variations of the final value with respect to the current value. So they may need, for example, derivatives with respect to the position X n of g of X bar at the final time T. That quantity is not adapted; it is looking into the future, from the current point n to the final time. So in principle you can discuss whether these things are going to converge to the correct solution. They do. But this is not true for any adaptive method you can come up with, and one has to be at least careful about it. Okay, so that is why. So if you do examples here, for example a drift singularity, this one is going to be simple in the sense that the stochasticity is really not relevant.
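Putting the two previous pieces together, a caricature of the single-level accept-or-refine loop might look as follows. It reuses `refine_path` from the sketch above; the error-density evaluation is left as a user-supplied callback since its exact form (weak or strong) is problem dependent, and the constants and the exact criterion here are illustrative assumptions rather than the speaker's precise rule.

```python
import numpy as np

def adapt_path(times, w_values, rho_fn, tol_bias, n_expected, rng,
               c_accept=1.0, c_refine=2.0, max_iter=50):
    """Refine a single realization until max_n rho_n * dt_n^2 <= c_accept * tol_bias / n_expected.
    rho_fn(times, w_values) must return one (approximate) error-density value per interval."""
    for _ in range(max_iter):
        dt = np.diff(times)
        indicators = rho_fn(times, w_values) * dt**2       # r_n = rho_n * dt_n^2
        threshold = tol_bias / n_expected
        if indicators.max() <= c_accept * threshold:        # acceptance test
            return times, w_values
        marked = indicators > threshold / c_refine          # refine a bit more than just the violators
        times, w_values = refine_path(times, w_values, marked, rng)  # Brownian-bridge refinement (sketch above)
    return times, w_values
```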
I mean, here we have, say, a deterministic time for the singularity, and the mesh is going to refine automatically around the value of alpha. We are not telling the algorithm that alpha is there, but it does find it. You see the mesh here is refined from the level of essentially 10 to the 0 down to 10 to the minus 10, just to control the error. Okay. But again, like I told you before, this is a deterministic type of singularity, so in this case it does not pay too much to let delta t depend on omega. Delta t is not uniform, but it really does not depend on omega. So you can split the construction into a first phase, where you find all the meshes that you are going to use on all the levels, and then in the second phase you just run multilevel Monte Carlo, increasing the number of samples you need on each level until you hit the variance constraint. That is effective. Okay. And actually, what you can show is that the cost of the first phase, finding the optimal discretizations for multilevel Monte Carlo, is much cheaper than the actual cost of solving the problem. So again, when you have deterministic types of singularities you can find the meshes very quickly. Okay. So in this case you see an advantage from uniform to adaptive. You do not see much of a difference between uniform multilevel Monte Carlo and adaptive multilevel Monte Carlo, because essentially this singularity is not strong enough to modify the strong rate. It does modify the weak rate. But if you remember what I said before, the complexity in multilevel Monte Carlo is mainly governed by the strong rate; in the inequalities I showed you before, with the convergence of the sum, the strong rate was the one opposing the growth of the work, not the weak one. But if you go to stronger singularities, so now we let p actually be beyond one half, and of course you have to make sure that this actually has a unique solution, blah, blah, blah, but it does as long as p is strictly less than one, I think. You can see here that for p equals 0.5, which was the previous example, you do not see much of an advantage between uniform and adaptive multilevel Monte Carlo, but as you start raising p, you clearly see the advantage of adaptivity over non-adaptivity. And I have to tell you one more thing: this has been done using the strong density, not the weak density. This is the latest work we have for adaptivity: you use the strong density to rewrite this L2 error and you now create the levels according to this strong density. That is how this plot was created. One more thing: the alpha here was uniformly distributed between one fourth and three fourths, so the meshes do have to be stochastic, because the position of the singularity keeps jumping around. Okay, so I have told you essentially how to do adaptive multilevel Monte Carlo approximation. Yes, please? Sorry, alpha is independent of W, that is right. So essentially you set alpha, you run the thing; alpha is independent of W, correct. Yes, this was just a simple example, so people can reproduce it and see what is going on. But yes, things can be more complicated, because the singularity, as you are saying, can be coupled with the dynamics, okay.
So it could be that the singularities appear because of some drift problem, right, some stiffness, or it could be that X is getting close to a boundary and the boundary has an issue. You can think of the boundary being penalized, and penalization is drift stiffness, so yes. But that is not something you can write in one line, essentially; at least this motivates what you can gain. Thank you. Okay, any more questions at this point? I would like then to summarize what we did on this topic. I hope the idea of adaptivity in this context that I gave you is quite general; you can use it for any problem you want, as long as you have a sound single-level analysis, if I may say so. The message is that in the multilevel context it is better to use an analysis that goes towards the L2 error, if possible, and to drive the construction of the levels according to that. Of course, you need the bias error to control the number of levels you are going to use, but at least the distribution of the shapes of the meshes should be done according to the strong error. And it can actually be proved that it is optimal to do so. You just write a problem related to this one, but instead of writing this part here, you write the error density times the delta t's, you write the whole Lagrangian relaxation, which now involves finding functions delta t l, say, from l equals 0 to L capital, and then you will see that delta t l is directly expressed in terms of rho strong, not rho weak. Only the last level is connected to rho weak; all the rest are driven by rho strong, and that directly motivates the use of the strong density for the adaptivity, which we found later; the first works we did were with the weak-density adaptivity. So how do we do with the time? Do we split in between, or? As you like. Okay, I can just draw the conclusions, it will take five minutes. Let me put up the conclusions and then, okay. So as conclusions, I told you again that this way of thinking about multilevel and adaptivity is quite general; you can use it anywhere you want. You have to be careful, though: when you replace the statistical constraint, which says that the statistical error has to be less than toll with a prescribed probability, it is computationally much more effective to write that as a variance constraint, and this can be done as long as you have a central limit type of approximation. The central limit theorem does not apply directly in the multilevel Monte Carlo setting, because you have sums of independent but not identically distributed random variables, so you have to resort to some Lindeberg-Feller type of result, and you have to verify this for each of the problems you work with. So you have to be careful with that. Okay, so let's leave the Poisson random measures for the next one. Thank you. Five minutes? Yes, sir. Five minutes. Thank you. So we are going to discuss another idea for adaptivity in this second context. The second context involves paths that are also random and depend on time, but as opposed to the Itô stochastic differential equations we were given before, these paths can actually be solved exactly. These paths are given by piecewise constant functions, and the only random thing about them is essentially the times at which they jump, and which kinds of jumps we have.
These processes are very popular, they are Markov processes of course, and they are used in a zillion applications, but let's go a little bit more into them. This is a logarithmic scale, this is time, and here in blue, red, and green are three components of a vector that changes in time. The numbers shown here are the numbers of species for each of these three components, so you can think of three populations randomly changing inside a system, according to, say, biological or chemical reactions. Even though these green paths do not show it, they are as discontinuous as the blue ones; it is just that the scale is logarithmic, and you may be fooled by it. So that is a motivational example; this is gene transcription and translation. You have in this system the component G that may react to give G itself and an M, the M that may react again to give M and P, and the P that goes into D when it meets itself. So you see that there are reaction channels: when such an event happens, you know deterministically what is going to happen. The only problem is that the time at which this occurs is not prescribed; a priori it is random. And let's think of a simple example: water, say, in a container. Water can be created from hydrogen and oxygen, and it can also dissociate into hydrogen and oxygen, so this is a reversible reaction. The speeds at which these reactions happen depend of course on the temperature, but if you fix the temperature out of this discussion, they depend, for example, on how much hydrogen and oxygen there is, because if there is no hydrogen, oxygen will never meet hydrogen to create water, right? So the likelihood of these events depends on how dense these components are in the system, and this can be described mathematically. So what we want to do again is to compute some given output quantity of interest at a given final time, up to a certain tolerance, again with high probability. So how do we describe the evolution of such a system? I told you before this is an X that lives on a lattice, essentially because it is counting. The components of X describe the different species in the system: we count how much of species one we have, how much of species two, and so forth. So X takes values on this lattice, as a function of time and of course omega. Now we assume that we only have finitely many possible reaction channels, like in the case of water, for example, where we may have three components, how much hydrogen, how much oxygen, how much water, and we can only have water dissociating into hydrogen and oxygen, or the other two meeting to form water. So when oxygen and hydrogen meet, we assume that they can only create water; there is nothing stochastic about what happens once we know that they meet. The question is when they meet, and that is the stochastic part. Okay, so when the reaction of type j happens in the system, the only thing that we have is a change from X, which was the state of the system, into X plus nu j. In the case of water, for example, when oxygen and hydrogen meet to create water, you have a plus one on the water and the corresponding minus ones on the oxygen and the hydrogen. Now, when does this happen?
This is given by the so-called intensity functions for the jumps, in this context also called propensity functions. These are non-negative functions a j that are evaluated on the state X and take values in R plus, such that the probability of having a reaction of type j in an infinitesimal interval from t to t plus dt, given the state X, is essentially a j of X times dt. So for instance, if you have no hydrogen in the system, you know that new water is not going to be created, so the a j corresponding to that reaction will be zero until that situation changes. So again, we want to compute the expected value of a given observable g, and the applications are everywhere, so I am not going to stop on that. We have plenty of references, and this list is not complete, but we just wanted to show a little bit of the effort on the multilevel versions. So what is interesting about these systems? The first thing, as opposed to the previous discussion, is that exact algorithms are available. What do I mean by exact algorithms in this context? Remember, when we sample a la Monte Carlo to approximate quantities of expectation type, we always have a statistical error, unless the problem is trivial, has no variance, and then we are not discussing anything stochastic. So exact in this context means pathwise exact: with finite work you can simulate a process that has the correct law. There is no bias error. You can do unbiased simulation with O(1) cost, which means that you already have the beautiful regime that we discussed before: the cost is toll to the minus 2 for the number of samples in Monte Carlo, and then O(1) to solve each path, in the mean. So the whole discussion here is about constants. You do variance reduction in order to reduce the constant in front of the toll to the minus 2; that is the whole point in this context. As opposed to the other case, where we could actually push the complexity from toll to the minus 2 times a log squared factor down to toll to the minus 2, here it goes from toll to the minus 2 into toll to the minus 2, and still you can get a gain. Constants matter, that is what I am trying to tell you. So then, if you have an exact algorithm, why would you introduce an inexact one? Because the statistical error is already there. So having a bias that is zero in the presence of a statistical error may not be optimal, because the cost of simulating such exact paths may be a lot. Introducing a path that is a little bit cheaper and has some bias, where the bias is smaller than the statistical error, will not change things dramatically from the error point of view, but it will from the work point of view. So it is actually a good idea to introduce bias into an unbiased simulation. Right? Good. So what is the problem here? Essentially the cost comes from all the events that are happening in the system. If you have a lot of molecules in a big reactor, you can imagine that there are a lot of reactions happening, and if you have to track all of them to do this exact simulation, the cost is going to be horrible. Okay? Essentially these interarrival times between jumps are exponentially distributed with an intensity that is the sum of all the a j's in the system, and this can be a large number, depending on the system. Okay?
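For reference, here is a minimal sketch of the exact path simulation just described (Gillespie's stochastic simulation algorithm, SSA): exponential waiting times with rate equal to the total propensity, and the reaction channel chosen proportionally to its propensity. The example network (two channels for a birth-death species) is only a placeholder.

```python
import numpy as np

def ssa(x0, nu, propensities, t_final, rng):
    """Exact (pathwise unbiased) simulation of a Markov jump process.
    nu[j]           : state change vector of channel j
    propensities(x) : array of a_j(x) >= 0
    Returns the state at time t_final."""
    t, x = 0.0, np.array(x0, dtype=int)
    while True:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0:                       # no reaction can fire anymore
            return x
        t += rng.exponential(1.0 / a0)      # exponential interarrival time, rate a0
        if t > t_final:
            return x
        j = rng.choice(len(a), p=a / a0)    # pick channel j with probability a_j / a0
        x += nu[j]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    nu = np.array([[+1], [-1]])                        # birth and death of a single species
    props = lambda x: np.array([10.0, 0.5 * x[0]])     # placeholder propensities
    samples = [ssa([5], nu, props, 4.0, rng)[0] for _ in range(1000)]
    print(np.mean(samples))
```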
As opposed to that, one would like to do approximate time stepping, time stepping that is cheaper but introduces a bias. This has been done pretty much from the tau-leap, which is a forward Euler type method, to more refined versions of it. And you have drawbacks, of course. First, you are going to introduce a time discretization error that you did not have. And second, you may have non-intuitive, at least qualitatively wrong, behavior. Essentially, if you go back to the discussion at the beginning, our process is counting, counting molecules of different types in a system. These counts are not going to be negative; if you are doing the proper thing, the true trajectories will never cross zero. But if you do these approximate algorithms, with a non-zero probability you may cross zero, and you may want to avoid these events because they have an impact on the global accuracy you can get. There are ways to deal with this; for example, one remedy is just to adjust the time step adaptively to control the one-step exit probability. But we would always like to relate things to the goal, which is to relate all the types of errors that we have in this approximation to the expected value of g of X at the final time T. So what we did in this line of adaptivity was first to create a single-level hybrid algorithm that, at each time step, adaptively switches between the exact path simulation and this tau-leap sort of Euler in order to do what? To minimize the computational work. Why is this needed? When you are close to this zero level that you do not want to cross, the tau-leap, which is the approximate forward Euler type of discretization, has a cost that is not being reduced, so you are not gaining much from using it. But the exact simulation has a cost that, I told you, essentially goes up with the number of particles in the system; if you get closer and closer to zero, that cost actually gets smaller. So close to boundaries the exact simulation is advantageous, because it does not introduce bias and it is cheap to do, since the interarrival times are relatively large at least. This can be quantified, and a rule that switches between these two things can be constructed. So again, this is adaptivity, but now not just adaptivity with respect to time; it is adaptivity between methods. You are combining two methods, one that has, so to speak, infinite order with respect to the time discretization, and the other that has order one, which is essentially coming from Euler. That was one thing. The other thing is that this idea can be generalized into multilevel Monte Carlo algorithms, again with error control. And not only that: you can do this according to the different channels. I told you before that there are different channels j, and with respect to these j's what really matters is the propensity. If the propensity is large, you see a lot of events over a unit time step; if the propensity is small, these reactions do not fire often. So you can split among the reactions, do a time splitting method, and treat some of the reactions with the approximate method and some of the reactions with the exact method. This can be done as well, even in the multilevel context.
And again, what is going on here is that you are introducing biases and approximations that have much, much less cost compared to the unbiased algorithm; this is why this becomes useful. So how do we go about it? To motivate the different methods, the whole thing can be written, through the random time change representation, as a counting process. Essentially the number of species you have at time t is the number of species you have at time zero, changed by what? Changed by the number of times the j-th reaction occurred. And the number of times the j-th reaction occurred is described here by a Poisson process whose intensity is non-trivial: it is the integral from zero to t of a j, the propensity of reaction j, along the path. This is a non-negative function, and this is just a random time change of a given unit-rate Poisson process. So this is a natural number that goes from zero upwards, and this is just the vector that introduces the changes for that reaction. Is that clear? So it is just bookkeeping, you see? And if this a did not depend on X, this would be trivial to simulate: you would just change the time accordingly in a unit-rate Poisson process, know exactly when the jumps occur in your system, and that's it. To simplify this, when you want to do approximations, you do a forward Euler type method: essentially you freeze the value of a at the left point and approximate this increment by a Poisson random variable with intensity a j times delta t, if you wish. And the first time they wrote it they put tau here, and this is why they call it the tau-leap method. Okay, so the first problem that you see here from the approximation point of view is to find tau so that you are not going to cross these important boundaries with a given probability; delta is given. So this tau will depend on X and delta, and this will prevent you from jumping out, according to this inequality. What we did actually is a large deviation, Chernoff type of bound, to come up with a computable formula that one can use to ensure this inequality. And then from this delta t we can discuss whether it is worth taking the approximate step or the exact one. One step farther is to go to a global probability of exiting, because remember, the discussion I just made was based on one step: if you are here, how do you have to take delta t so that the probability of exiting in this step is below a certain value. Now what you really want is the probability of the whole path going out or not going out. So you have to link the one-step probabilities to the whole-path probabilities, and then link these quantities to their effect on the global error that we really want to compute and approximate. This is good because, in principle, the probability of a path going out, if things are done correctly, is a rare event. So instead of sampling that rare event, you want to compute quantities that are easy to deal with, and this approach does that. Okay, and like I told you before, one can then write a multilevel Monte Carlo based on these types of paths: find the M's and find the L. So you control the exit type of error, the one related to the indicator function of the path going out; that would produce an O(1) error, and that is why you have to control it via the exit probability.
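A minimal sketch of the tau-leap step just described: propensities are frozen at the left endpoint of each step and the number of firings per channel is drawn as a Poisson random variable. The step size here is a fixed input; choosing it so that exit probabilities stay controlled (for example via the Chernoff-type bound mentioned above) is the part that is not shown.

```python
import numpy as np

def tau_leap(x0, nu, propensities, t_final, dt, rng):
    """Explicit tau-leap: over each step of size dt, channel j fires Poisson(a_j(x) * dt) times,
    with the propensities a_j frozen at the left endpoint (forward-Euler flavour)."""
    x = np.array(x0, dtype=int)
    t = 0.0
    while t < t_final:
        step = min(dt, t_final - t)
        a = propensities(x)
        n_fires = rng.poisson(a * step)          # one Poisson draw per reaction channel
        x = x + nu.T @ n_fires                   # apply all state changes at once
        x = np.maximum(x, 0)                     # crude guard against negative counts (a known artefact)
        t += step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    nu = np.array([[+1], [-1]])                        # same birth-death toy network as before
    props = lambda x: np.array([10.0, 0.5 * x[0]])
    print(np.mean([tau_leap([5], nu, props, 4.0, 0.05, rng)[0] for _ in range(1000)]))
```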
Then there is the weak error, which is more or less what we discussed before with the diffusions, and then the statistical error, which does not change from the discussion I made before. So the issue here is that you have to do things correctly to simulate coupled pairs that have the right law and are correlated in the right way, and of course to be able to accurately estimate the different quantities in the multilevel construction. This is particularly non-standard here, compared to the diffusions I discussed before, for the following reason. The paths take values on a lattice. So when things converge, as you refine delta t or increase the level l, the paths are going to be closer and closer. In diffusions you are in a continuous space, so closer and closer really means getting closer and closer. But here you have a lattice, and what does it mean to get close on a lattice? You are either away or you are exactly there. So closer in this setting means that the probability of being different goes to zero and the probability of being equal gets larger and larger. What is the problem with that? The problem is that this immediately creates high kurtosis, and high kurtosis is bad news in the multilevel Monte Carlo context, because it means that estimating the variance is difficult. So one thing that appears here naturally, as opposed to the diffusion case I discussed before, is that you have to work a little bit harder to estimate the right variances in your system. Remember, the M's are driven by the variances, so you need good estimates of these quantities; otherwise your estimates of the M's are wrong, and if the estimates of the M's are wrong, bye-bye multilevel Monte Carlo, right? So that is important from the practical perspective. Is it in delta and in the index, the little n? Yes, it is in the delta that you use to balance the exit probability. Yes, everything goes there, everything goes into there. Each level in principle contains a delta t l, but it also contains a delta l, because, remember, there was this probability of going out. So you may prescribe this delta l as a function of l, and also a maximum delta per level. The time step goes as before, like delta t 0 divided by 2 to the l, for example, but the delta l does not go that fast; it is chosen in a different way. So these two things combine to give the method. Is that clear? These two are varying. All right? Thank you. Good. Thank you. Okay. So this is an example, the example that I showed at the beginning. And here you see that for the exact method, which is SSA, the error bound versus the tolerance essentially goes like toll to the minus 2; that is the complexity that we have. And if you use this hybrid multilevel method, you can still keep the same complexity, and what you gain is a constant, but the constant can be substantial; in this case, it is 10 times faster. And this is the usual plot that you would see in our papers: it is the control of the weak error. What the user gives in this context is a tolerance, but he also prescribes the confidence level. In this case, the confidence level was prescribed to be 5%, so when you do your simulations you expect to see around 5% of them going above the prescribed tolerance.
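The transcript does not spell out how the coupled pairs are built; one standard construction (the split-propensity coupling of Anderson and Higham) draws, on each fine substep, a common Poisson count with rate min(a_fine, a_coarse) plus independent residual counts, so both levels share as much randomness as possible. A single-channel sketch, purely illustrative:

```python
import numpy as np

def coupled_tau_leap_pair(x0, nu_j, prop_j, t_final, dt_fine, rng, ratio=2):
    """One coupled (fine, coarse) tau-leap pair for a single reaction channel.
    The coarse path uses dt_coarse = ratio * dt_fine and keeps its propensity frozen
    over the whole coarse step; on each fine substep the firings are split into
    a common part with rate min(a_fine, a_coarse) and independent residual parts."""
    xf = xc = float(x0)
    n_coarse = int(round(t_final / (ratio * dt_fine)))
    for _ in range(n_coarse):
        ac = prop_j(xc)                          # coarse propensity, frozen for the whole coarse step
        for _ in range(ratio):
            af = prop_j(xf)                      # fine propensity, refreshed every fine substep
            common = min(af, ac)
            n_common = rng.poisson(common * dt_fine)
            n_f = n_common + rng.poisson((af - common) * dt_fine)   # fine residual firings
            n_c = n_common + rng.poisson((ac - common) * dt_fine)   # coarse residual firings
            xf += nu_j * n_f
            xc += nu_j * n_c
    return xf, xc

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pairs = [coupled_tau_leap_pair(50, -1, lambda x: 0.3 * max(x, 0.0), 1.0, 0.01, rng)
             for _ in range(2000)]
    diff = np.array([f - c for f, c in pairs])
    print(diff.var(), (diff**4).mean() / diff.var()**2)   # level variance and a rough kurtosis
```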
So you see here, the tolerance is getting smaller, the corresponding sample errors for the different runs of the multilevel Monte Carlo are distributed like this, and these numbers here are the percentages of runs above the bound. Okay? So as you change the quantity here, these exceedance values should change as well. All right? Depending on the problem, you can have different gains; I am not going to discuss this too much, but you can go above 10, of course, you can get 10 to the 4, things like that. But the plots always look like this: you are improving the constant, you are not improving the rate; the rate was optimal to begin with. So it actually pays to introduce a biased approximation in place of the unbiased method. Okay? And here is an example of our reaction-splitting method, where you do this differently for the different reactions to get even further. Okay? So that is, I think, it. Summarizing, this is a different type of idea for multilevel adaptivity: you are switching between methods. You can also make delta t adaptive as well, but here you are switching between methods, and you are going from an unbiased method to a biased method to improve the constant in the complexity. All right? Any questions? Okay. So now we discuss discretization from another point of view. We are going back to uniform discretizations, but now we ask a different question. We have been driving our discretizations with this geometric type of refinement: you take an initial discretization parameter, then you divide it by a given number, then you divide it again by the same number, and so forth. This is natural in numerical discretization: you try a mesh, then you subdivide it, try another mesh. Is it optimal? That is a natural question to ask: should we choose the h l's or the delta t l's like this, or should we not? Answer: no, it is not optimal. With some assumptions you can show that this is not optimal. This is the work we did on the optimization of hierarchies for multilevel Monte Carlo. And this other work here, the Continuation Multilevel Monte Carlo, addresses the problem that the variances you actually need to drive and obtain the optimal M's are not known a priori and somehow have to be found on the fly. So you need to create an algorithm that is not only adapting the M's but is also trying to find the parameters that are needed to find the M's, okay? That is what the continuation does. And this is implemented in this library, mimclib. So let's go to the assumptions then, to focus. What do we have here? We have a multilevel Monte Carlo method as before, but now the levels, the g l's, are going to be driven by these h l discretization parameters, and we do not say a priori that they are geometrically distributed, okay? We are going to let them vary. So we are going to take L capital plus one of them, and we are going to assume that essentially the weak rate is algebraically convergent, the strong rate again is algebraically convergent, and the work, for example, is algebraically divergent; that is as in the linear PDE we had before. And then what we have to find are not just the M's but also the h's, okay? Is that clear? As you know, the constants and the q's here will depend on the problem and the solver, as we discussed before.
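The closed form that this kind of constrained work minimization produces for the sample sizes is the classical one: given per-level variance estimates V_l and costs W_l, the Lagrangian solution allocates M_l proportionally to sqrt(V_l / W_l). A sketch, assuming the statistical error budget is theta * TOL with confidence constant C_alpha (this is the standard MLMC allocation, not specifically the speaker's derivation):

```python
import numpy as np

def optimal_sample_sizes(V, W, tol, theta=0.5, c_alpha=1.96):
    """Classical MLMC sample allocation: minimize sum_l M_l * W_l subject to
    C_alpha^2 * sum_l V_l / M_l <= (theta * tol)^2.  V, W are per-level estimates."""
    V, W = np.asarray(V, dtype=float), np.asarray(W, dtype=float)
    lam = np.sum(np.sqrt(V * W))                       # Lagrange-multiplier factor
    M = (c_alpha / (theta * tol))**2 * np.sqrt(V / W) * lam
    return np.maximum(np.ceil(M), 1).astype(int)

# toy numbers: variances decaying, costs growing geometrically across 5 levels
print(optimal_sample_sizes(V=[1.0, 0.25, 0.06, 0.015, 0.004],
                           W=[1, 2, 4, 8, 16], tol=0.01))
```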
And these are just a couple of examples. In Euler-Maruyama you have q1 and q2 equal to one, and in the other PDE example we already discussed that q1 was two and q2 was four. Gamma again is defined by the solver, and it is obvious in Euler-Maruyama: it is just one, so I do not stop on that. I just try to show you that these things can be found; and if you have no theory at all, if you are desperate, you can simply fit something to the convergence that you observe, or to the growth of the solver cost, so you can actually measure these things as well. Two useful quantities here are chi, the quotient between the strong convergence rate and the growth rate of the work, and eta, the quotient between the weak convergence rate and the growth rate of the work. Of course, the best case for multilevel Monte Carlo is when chi is larger than one, the limit case is chi equal to one, and the case of only an algebraic improvement with respect to Monte Carlo is chi less than one. All right? And again, you see that if you change the solver, you may change from chi greater than one into chi less than one, as I described here: in the PDE example, if you use an iterative solver you get 1.34, and if you go to the direct solver you get 0.89 already. So you can influence the complexity of multilevel Monte Carlo just by changing the solver like that. Okay. So how does the optimization look? It looks like this: the work, which is the work we discussed before, subject to this constraint, which is a bias constraint, less than this fraction of the tolerance, and this statistical constraint, which means that the total variance is less than theta times toll divided by c alpha, squared. And c alpha, again, is the confidence constant that comes from the user when he imposes the confidence level alpha. Okay. The good news is that you can solve this problem, and of course you can even verify the normality, how far or how close you are, when you do actual computations. And here is a result for chi equal to one: you actually do get geometric discretizations. But I have to warn you: in these geometric discretizations, you see, h l is essentially a geometric sequence in a base beta, and that base does not depend on the level, but it does depend on the tolerance. So it is not the usual geometric discretization that you would think of, and the beta is not two or three; it is something that may even be influenced by toll. So the spacing between the levels is not trivial either. All right. One more interesting thing here: the optimal choice of the splitting parameter, which is the fraction that you devote to the bias versus the fraction that you devote to the statistical error, well, the fraction of the bias goes to zero as toll goes to zero. Yes, that is right. So essentially, if you see here, theta is the fraction that you give to the statistical error and one minus theta to the bias, and this theta goes to one, because L goes to infinity. So asymptotically you are putting most of your error allowance into the statistical error, not into the bias. And of course, the number of levels behaves like the log of toll, with this constant. But for chi different from one, things become less simple.
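Written out, the optimization referred to here takes, up to details of the constants, the following form; the splitting parameter theta divides the tolerance between the bias and the statistical error (this is my transcription of the standard formulation, not a copy of the slide).

```latex
\min_{\{h_\ell\},\,\{M_\ell\},\,\theta}\ \sum_{\ell=0}^{L} M_\ell\, W_\ell(h_\ell)
\quad \text{subject to} \quad
\bigl|\mathbb{E}[g(X)] - \mathbb{E}[g(X_{h_L})]\bigr| \le (1-\theta)\,\mathrm{TOL},
\qquad
C_\alpha \sqrt{\sum_{\ell=0}^{L} \frac{V_\ell(h_\ell)}{M_\ell}} \le \theta\,\mathrm{TOL},
```

where $V_\ell$ is the variance of the level-$\ell$ difference, $W_\ell$ its cost per sample, and $C_\alpha$ the confidence constant coming from the prescribed confidence level.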
You can still find the relations for theta, and you can find bounds on the number of levels with respect to the different parameters, as this shows. Okay. Nothing really changes from the classical results in terms of exponents, but you can identify how the input constants affect the actual complexity, the actual constants. So you can play tricks to reduce these constants, knowing that they exist and the shape they have. And the shapes of these constants in the different cases are like this. You can see that in some of them, for example, the variance of level zero plays a role, and in others it does not. So if you are in this regime, you may get an advantage by doing a special trick for level zero, whereas in the other cases you may not. Okay. So like I said, the exponents are the same as in the classical theory, but now you identify the constants as well, and you identify how you have to space the levels too. That is more or less the discussion I have here. These meshes are not necessarily going to be nested, because this beta behaves in a strange way, and h 0, the first mesh, also behaves in an interesting fashion. Here is a discussion of the Continuation Multilevel Monte Carlo. Again, the plots always look like I described before: you have a direct solver here and an iterative solver there, you have the prescribed error versus the tolerance, and you check that when you run the multilevel approximation over and over, the fraction of runs that go above is in accordance, more or less, with the confidence level that you prescribed. Now, coming back to the question of finding the variances and the other relevant parameters needed for the optimization: in the Continuation Multilevel Monte Carlo, what we do is solve a sequence of problems until you hit the problem that you want. So essentially, what happens? We know that in the best possible case the growth of the work in multilevel Monte Carlo is toll to the minus 2; it could be even worse, and you can have a log term here. So if you solve a sequence of problems where you begin with, say, toll zero, then go to toll zero divided by a certain fraction, call it F, and keep going until you hit the actual tolerance you want, you can sum these works, and you can choose this fraction so that the sum of these preliminary works does not create a huge overhead on top of the last one, the one you actually want to solve. That is the first idea. And then what happens is that when you start solving these auxiliary problems, you start learning the constants that you need, okay? So by creating these fractions you do not overshadow the actual work that you want to do optimally, but you start uncovering, if you wish, the veil over the variances and the other things that you need to find the optimal M's. That is essentially what is behind the Continuation Multilevel Monte Carlo, a sort of continuation-in-the-tolerance idea. Not really complicated, but it turns out to work quite nicely. Of course, on top of that, there are tricks to improve a little bit the variance estimates on the deeper levels. What happens statistically is that on the coarse levels you have a lot of samples, just by construction, right?
Multilevel Monte Carlo produces a lot of samples on the coarse levels and very few on the deep ones. But the deep ones are the really costly ones, so if you make a mistake in the variance there, you make a mistake on the M and things go really rotten. So to get a better hold on the deep levels, you do a kind of Bayesian estimate, where you use as a prior model an extrapolation coming out of the coarse levels and enrich it with the few samples that you have from the deep levels. That is what is behind it too. So it is not just one idea, but I am telling you the couple that I consider most relevant. The good news, again, is that if you do this little learning process, the complexity that you get does not carry much of an overhead with respect to the optimal line here. That means that for this pink magenta type of curve, you are essentially just paying the cost that you would incur if you knew all these nice constants beforehand, compared to finding them on the fly. So it is not too much that you have to pay, really. The same holds with respect to the optimal splitting theta; it is not really that relevant. So what is the second part of the conclusions? We did not discuss normality much, but we have a result using the Lindeberg-Feller theorem in this context. A better splitting between the bias and the statistical error can improve the work a little bit, and this Continuation Multilevel Monte Carlo tries to provide a more stable algorithm that allows you to find the constants along the way. Geometric hierarchies are nearly optimal, but they are not the geometric hierarchies that you are used to working with: they come from this optimization, and they have specific values that may depend on toll as well. So for the Itô SDEs, I included in the slides some assumptions and some results with respect to the densities that I do not have time to comment on, but I am going to share the slides, so if you have questions you can come to me. Again, I emphasize that these densities actually depend on derivatives of the final solution with respect to initial conditions; this is what makes the method non-adapted to the natural filtration. So that is multilevel in that context. This is what I meant by the strong convergence and the accuracy result, this is what I meant by the work optimality result, and I included a little bit more in terms of complexity. But okay, the CLT too. So the last thing before I run out of time: a little bit of adaptivity with respect to M, where I just touch upon the classical expected value computation using plain Monte Carlo, just to close it up. So what is the point here? You want to compute the expected value of X using i.i.d. sample averages, and you want to prescribe again this delta, which is the confidence level, and you also want to prescribe the toll. So what is the point? The point is that usually you do this approximation based on the central limit theorem; that rule is of the following type: you prescribe toll, you prescribe delta, and then a number of samples M comes out. This may not be so reliable when X has a slowly decaying density, essentially heavy tails. So for that reason we provided a rule that is a little bit more robust and essentially combines the central limit approximation with other types of bounds. Let me walk you through it a little bit before I finish. I have 10 minutes, right? Yes, okay. So this is going to be a little bit intense then. So this is a general stopping rule, right?
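For concreteness, the standard CLT-based prescription that the robust rule is being compared against looks roughly like this; sigma is replaced by a sample estimate from a pilot run, which is exactly the step that heavy tails make unreliable.

```python
import numpy as np
from scipy.stats import norm

def clt_sample_size(pilot, tol, delta):
    """Naive CLT rule: choose M so that C_alpha * sigma_hat / sqrt(M) <= tol,
    with C_alpha the (1 - delta/2) normal quantile and sigma_hat the pilot standard deviation."""
    c_alpha = norm.ppf(1.0 - delta / 2.0)
    sigma_hat = np.std(pilot, ddof=1)
    return int(np.ceil((c_alpha * sigma_hat / tol) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    pilot = rng.normal(size=200)                    # pilot sample from a placeholder distribution
    M = clt_sample_size(pilot, tol=0.05, delta=0.05)
    estimate = rng.normal(size=M).mean()            # main run with the prescribed M
    print(M, estimate)
```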
Based on the samples you are going to estimate the probability of failure, and based on that we are going to accept or not. The probability of failure is the probability of your approximation not being within toll of the true value, and the failure event is this probability being larger than delta, essentially. So again, as we described with the central limit theorem and the use of the variance constraint: if you believe that your sample variance is accurate enough, you use a rule of the central limit type, substituting sigma here, the true standard deviation, by its approximation, and if this is accepted then you stop and report your approximation. This may have issues, okay? So consider the following Pareto distribution: the density decays like x to the minus 4.1, for x larger than 1. And you see here the behavior of this quotient; the quotient is the actual failure probability divided by delta, the one you wanted to impose. If this quotient is larger than 1, that is bad news; if it is less than or equal to 1, it is okay. So you see here tolerance versus delta: going down means asking for more and more confidence in your computation, and going to the left means more and more accuracy. So as you let the tolerance go to zero, for delta fixed, things are going to be fine; but in the regime where delta is stringent and toll is a few percent, which is natural for engineering computations, this is not going to be nice: it is going to be in the red and it is not going to give you the prescribed tolerance. There are many results in the literature where you fix delta and let toll go to zero; what I claim is that those results overlook this regime, which is important in the applications. Okay, so the whole point here is to penalize this fact, because we do not really believe that we always have such an accurate estimate of the variance. To account for this, what we do essentially is to use Berry-Esseen type bounds, which give better control of the rates in the central limit approximation, and to combine those with an Edgeworth type of expansion, which is something you would use when you have full confidence that the tails go to zero fast. Then you get a sort of improved rule, and when you go with this rule, you actually see in this example that you can achieve the right prescribed failure rate with almost no change, almost no change, you see, in the computational cost; and the cost here is just the number of samples, because we are working with plain Monte Carlo in this context. And this can be done also for other kinds of not so nicely behaved random variables, and you see again that for small delta and a few percent toll this is a bad case, but you can fix it using this algorithm without incurring too much of an increase in the work. And finally, this is an example where we actually expect the usual central limit theorem approach to work nicely, and it does: the work here is such that the failure probability is below the delta, and this algorithm also achieves that without a sensible increase in the work. So in the nice cases this behaves well, and in the worse, or bad, cases it gives better control on what is going on.
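A small experiment in the spirit of the Pareto example: repeatedly apply the naive CLT rule from the sketch above to samples whose density decays like x^(-4.1) for x > 1, and measure how often the prescribed tolerance is actually violated. The true mean used for checking is the known Pareto mean; the pilot size, tolerances, and repetition count are placeholder choices.

```python
import numpy as np
from scipy.stats import norm

def failure_rate(alpha_tail=3.1, tol=0.05, delta=0.05, n_pilot=100, n_rep=2000, seed=5):
    """Empirical P(|sample mean - true mean| > tol) when M is chosen by the naive CLT rule,
    for a Pareto(alpha_tail) variable on [1, inf) whose density decays like x^-(alpha_tail + 1)."""
    rng = np.random.default_rng(seed)
    true_mean = alpha_tail / (alpha_tail - 1.0)          # mean of a Pareto with x_min = 1
    c_alpha = norm.ppf(1.0 - delta / 2.0)
    failures = 0
    for _ in range(n_rep):
        pilot = 1.0 + rng.pareto(alpha_tail, size=n_pilot)
        M = max(int(np.ceil((c_alpha * pilot.std(ddof=1) / tol) ** 2)), n_pilot)
        sample = 1.0 + rng.pareto(alpha_tail, size=M)
        failures += abs(sample.mean() - true_mean) > tol
    return failures / n_rep

print(failure_rate(), "target failure probability:", 0.05)
```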
Okay, so that is the summary, and I think I got to the end; that is remarkable, actually. So thank you. Are there any questions or comments? Yes, Amar? So I would be interested to know if we could apply this criterion, for example, in the case where X is related to a rare event, so typically a Bernoulli random variable which is 1 or 0, but 1 with very small probability. What? That is not the case we looked into. I think that needs a measure change; I do not think one can apply it directly here, just using simple averaging of your output. So how robust is the criterion to... I mean, what I am saying is that it can be used, but provided that first you do a measure change. If you do not do a measure change, all the estimates are going to be wrong, right? You mean wrong, inaccurate in fact? Absolutely; the whole thing is completely unreliable, right? A rare event asks for a measure change to start with, and then if the measure change is sound, if the importance sampling is sound, then you can use things like this. But I would not recommend using this on a rare event without looking into an importance sampling algorithm first. It can be combined with the same... That is what I am saying, but I would not recommend using it without the importance sampling step first. I have a second question perhaps. Because again, that regime is different, right? That regime is, for example, 0 or 1 with a very tiny probability of getting 1. Here it is unbounded: you can get large values with a probability that is not so negligible, not decaying that fast, and that is what gives you trouble in the estimation. It is not the same regime as the rare event. Have you looked in the direction of concentration of measure inequalities, like we know... Well, that is also not applicable here, in the sense that for concentration of measure, at least if you use classical large deviations, you need exponential moments, and these guys do not have any, right? We used them in the Chernoff bound for the tau-leap, but here you do not have an exponential moment. So whatever concentration you have is too little. A short comment: maybe the applicability of your method is connected with a kind of ratio between the expectation and the variance, because in the case of rare events 1 over the square root of p goes to infinity as p goes to zero, something like that, the relative value of the expectation... Yes, yes, but that is also what defines the regime of rare events, right? Yeah. But in this case, again, I emphasize that it is the tail that is creating the issue, and we created something that works more or less uniformly with respect to that; it does not address at all the question of the rare event. The rare event should be addressed in a complementary way, at least in my humble opinion. Other questions, yes? Have you looked at, or is there any chance to extend this framework to, the case of pathwise adaptivity? I am thinking of SPDEs, where you would like to have a discretization with some local adaptation in space. But this is not about this last part, it is the previous part. Yes. Well, I am kidding. Okay, okay, okay. This is about the... Again, the adaptivity with respect to... Pathwise. Pathwise, okay. So what we discussed was pathwise, but in time. Yeah, but the time step is fixed. No. Independent of the... No, no, no, no, no. The time step is delta t of t and omega; delta t is fully stochastic. So it can capture features that come with omega, pathwise.
So yeah, that is actually the most general setting, but you only use it if you are obliged to; that is my advice. If you can use, for example, the deterministic type of adaptivity, that is cheaper, but you need some prior information about the problem you are solving. Right? Okay, thank you. Do you have any further questions? If not, let's thank the speaker again. Thank you.
We will first recall, for a general audience, the use of Monte Carlo and Multi-level Monte Carlo methods in the context of Uncertainty Quantification. Then we will discuss the recently developed Adaptive Multilevel Monte Carlo (MLMC) Methods for (i) Itô Stochastic Differential Equations, (ii) Stochastic Reaction Networks modeled by Pure Jump Markov Processes and (iii) Partial Differential Equations with random inputs. In this context, the notion of adaptivity includes several aspects such as mesh refinements based on either a priori or a posteriori error estimates, the local choice of different time stepping methods and the selection of the total number of levels and the number of samples at different levels. Our Adaptive MLMC estimator uses a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform discretization MLMC method introduced independently by M. Giles and S. Heinrich. In particular, we show that our adaptive MLMC algorithms are asymptotically accurate and have the correct complexity with an improved control of the multiplicative constant factor in the asymptotic analysis. In this context, we developed novel techniques for estimation of parameters needed in our MLMC algorithms, such as the variance of the difference between consecutive approximations. These techniques take particular care of the deepest levels, where for efficiency reasons only few realizations are available to produce essential estimates. Moreover, we show the asymptotic normality of the statistical error in the MLMC estimator, justifying in this way our error estimate that allows prescribing both the required accuracy and confidence level in the final result. We present several examples to illustrate the above results and the corresponding computational savings.
10.5446/57398 (DOI)
Thank you. I think it is the perfect time for a nap, so I am very grateful to see you here. I would like to thank the organizers for the invitation; I think it is a very nice event and I was very pleased to be here this week. So my talk will discuss how to perform global sensitivity analysis in a stochastic system, and perhaps, to motivate this question and the methods I am going to introduce a little bit, I must say that for many years we have been working on uncertainty quantification, but essentially parametric uncertainty: models having some parameters that are not well known, and we developed techniques to deal with that situation, so parametric uncertainty. But in many situations the model itself is not deterministic. It is not that we have parameters whose values we do not know; the dynamics, the evolution of the system in time, is truly stochastic. So we have some inherent stochasticity. Many physical systems are in this situation. It can be because at very small scales you usually have noise, typically thermal noise, so the evolution is really inherently random; here I am thinking of molecular dynamics, or chemical systems in very small reactors, for instance, where the reactors could be biological. There are also situations where some forcing acts on the system we want to simulate and there is no way we can predict what this forcing will be. This could be the case in finance, for instance, and it is the case in many situations in energy: think for instance of a wind turbine, which is subjected to turbulent winds, storms or whatever, and you cannot predict a priori what conditions the machine will have to withstand. So in engineering, stochastic modeling becomes very important in that case. And there is obviously the situation of all the models that, in order to maintain computational tractability, have to be solved at, say, coarse scales, while we know that at smaller scales there are phenomena that we are completely neglecting. Sometimes we can average, or by some homogenization technique upscale these effects, but there are situations where we do not know how to perform such an upscaling, and one possibility in this case is to add some random noise that mimics the effect of the small-scale processes that we cannot model. That is the case, for instance, in climate modeling, where they add random noise to force the flow at small scales to mimic the effect of turbulent mixing. So stochastic modeling, and here I am just thinking about physical models, but you can think of models related to social science for instance, is very common. It can be very complex. It is a very nice way to deal with things that we cannot prescribe or that we do not know. But the cost of doing so is that usually you have to prescribe some parameters. Typically: I do not know what the wind will be, so I decide to model it with a stochastic process, but now I have to prescribe some parameters, say time correlations, the intensity of the wind gusts, or things like that. So we now have a system that has some inherent stochasticity, because this is a modeling choice we made, and some parameters, and these parameters usually cannot be measured; they need to be learned or inferred from information, if we have some experimental data for instance, but they are not certain. So now we have two sources of uncertainty: one that is inherent to the model, and a second one that comes from a lack of knowledge about the parameters.
And this is really the question we wanted to address with my colleague Omar Knio: how we could propagate uncertainty and perform sensitivity analysis in such systems. So I will have two parts in this presentation. In the first part I will focus on the case of stochastic ordinary differential equations, typically scalar ones, so ordinary differential equations driven by some Wiener noise, and I will show how we can consider this equation plus some parametric uncertainty, and how to solve this equation by means of polynomial chaos expansions. In the second part I will go to stochastic simulators, and in treating this type of system we will have to introduce new questions that we believe are quite interesting. So I am considering a very simple SDE. X is the state; here everything is scalar, but you can extend everything to higher dimension, the vector case for instance. So you have the evolution of your solution X, you have a drift term, with a C that may depend on X and could also depend on t, and you have a diffusion term here with the Wiener process. Very classical. So what you can do, for instance, is to simulate this equation, to generate samples or trajectories of this system, by considering for instance the Euler scheme, the simplest one you can devise, and what you have to do is to draw some i.i.d. normal random variables whose variance depends on the time step. So you can integrate a trajectory and repeat multiple times, drawing independent realizations of these increments. And when you have a set of trajectories you can build some statistics: mean, standard deviation, correlation, whatever is needed. And if you consider a... Oh, I thought it was a spider. So if you have a functional of your trajectory, it could be just the value at a certain time, or it could be an exit time, or whatever it is; I call it g, and now you can estimate the expectation of this quantity of interest. This is done simply by sample averaging, generating M replicas of trajectories of the system. So the situation I want to tackle now is exactly the same, except that now the drift and the diffusion coefficient may depend on some random parameters. These could represent spatial variability, or, if it is just a random variable, it could be just the magnitude of the noise that is entering the system. So I denote Q these random parameters, and I am assuming that they have a known probability law. How do you know this probability law? Well, perhaps it comes from some Bayesian inference based on experimental observations, and that would be the posterior of your Bayesian analysis. And the main assumption I am making here is that knowledge of the parameters does not say anything about the forcing process: the two are independent, which is quite natural, I think, and I actually do not see situations where this would not be the case. So it is not a strong assumption; however, it is crucial for what I am going to develop later. Okay, I am just making here some assumptions about square integrability, so the process has to be a second-order process. If I condition on the Wiener path, so I still have the random parameter here, this conditional variance is finite, and the same if I condition on a certain value of the parameter: the variance is also finite. These are the assumptions here.
Okay, and our goal is now really to say I want to investigate how the variability in X depends on the different source of the variabilities, which are the linear process and the parameter and I want to separate this different effect and being able to draw some conclusion. So the typical approach that people are following consists in conditioning on a certain value of the parameter, then you average with respect to the process and you do that for some moment of X. So typically you're trying to look to the dependence of the mean of the solution of your process as a function of the parameter. So you average with the process and you keep the parameter out and you can do that for a second moment. So you have now the dependence of the variance with the parameter. And now you have functions here that only depends on the parameter and you can apply the classical sensitivity analysis or I mean the parametric one that I've been developing and I will give you a flavor of the idea that are behind. But here is the idea and one possibility is to go through the functional approximation of this two first moments, but you could go to higher moments on a polynomial basis for instance. Okay, and once you have this representation, you can sample as much as you want this approximation and study some statistics that can characterize the dependencies. Okay, now that's the classical approach. So in my view, you are losing some information because the first step you do is always averaging with respect to the venerable process. So there is a loss of information and somehow you're not too penalized because you can do these analyzes for higher and higher moments. But how many moments you need and it becomes very difficult to interpret the outcome of the sensitivity analysis. So in that among different parameters, you can say one is very important for the moment of order five. What does that mean? It's not clear. So what we want to do is slightly different. It's an orthogonal decomposition of the variance. So X has a certain variance and I want to split these variance into different contributions, contributions that is due to the inner noise, so the venerable process, contributions that is due entirely on the parameter and the contribution to the variance of the interaction. And the way we do that is by the soboil of the decomposition of X where the process that I see here as a function of the vener and the parameter is decomposed as a mean value, a function that depends only on the vener, a function that depends only on the parameter and a mixture. And we ask this composition to be, this function is a decomposition to be mutually orthogonal and this makes the decomposition unique. In the case where W and Q are independent, that's where my assumptions become crucial. And here you have the definition of this function. So for instance, the function that depends only on the vener is the conditional expectation of X given W minus C, full expectation. So because this function is orthogonal, you have immediately the decomposition of the variance. You can normalize the partial variance and you get what we call the sensitivity indices. So they are all between 0 and 1. Their sum here is equal to 1. This one characterizes the impact of the vener process on the variance. This one of the parameter and here the mixed contribution. I will illustrate this idea of characterization on an example later. 
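For reference, the orthogonal (Sobol-Hoeffding) decomposition just described can be written out as follows; the notation is mine, and the assumption that W and Q are independent is what makes it unique:

\[
X(t;W,Q) = \bar X(t) + X_W(t;W) + X_Q(t;Q) + X_{W,Q}(t;W,Q),
\]
with
\[
\bar X = \mathbb{E}[X],\qquad
X_W = \mathbb{E}[X \mid W] - \bar X,\qquad
X_Q = \mathbb{E}[X \mid Q] - \bar X,\qquad
X_{W,Q} = X - \bar X - X_W - X_Q .
\]
The terms are mutually orthogonal, so the variance splits as
\[
\mathrm{Var}[X] = V_W + V_Q + V_{W,Q},\qquad
S_W = \frac{V_W}{\mathrm{Var}[X]},\quad
S_Q = \frac{V_Q}{\mathrm{Var}[X]},\quad
S_{W,Q} = \frac{V_{W,Q}}{\mathrm{Var}[X]},\qquad
S_W + S_Q + S_{W,Q} = 1 .
\]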
So here is the interpretation of these coefficients, and what you can show is that the variance of the mean with respect to W of X for a given value of the parameter, so this is really the variance with respect to the parameter of the mean of the solution, this is actually the variance due to W only. Sorry, to Q only. So a clear interpretation of what we have classically. However, if we average with respect to the parameter the dependence of the variance on the parameter, we see here that we have the sum of two contributions: the contribution due to W, to the Wiener process, and also the mixture. And actually there is no way that, given this term here, you can split it into these two contributions. That's why we believe that the approach we are proposing is richer. We have a finer analysis in particular. If you discover that S_W is really large and dominant compared to the other terms, there is very little chance that you can learn your parameter by observation. This is a kind of important conclusion that you can draw. So how to compute these coefficients? What we propose to do, because we see the process as a function of W and Q, it is to us quite natural to have a series representation. So I'm considering as a basis a complete orthogonal set of functions psi alpha, which are functions of Q. Actually, I introduce here the measure of the parameter, P_Q. So they are an orthogonal family. And we are expanding the solution X in this form. So the coefficients in the expansion, the X alpha, are again stochastic processes. But they don't depend on the parameter. So I have this expansion. Now I can plug this expansion into my original equation. I will get a residual. If I truncate my expansion, I get a residual. So I'm requiring this residual to be orthogonal to the representation basis, so to every one of the psi alpha I have in my expansion. So I'm forming here the Galerkin projection of my problem. And I end up with a system for the modes, or the coefficients, of the expansion. And you will notice that this equation, so if you gather all the coefficients X beta in a vector, you end up with an equation that has exactly the same structure as previously, except that where you had a scalar equation, it's now a vector equation. But otherwise you can still recognize here a drift term and here a diffusion coefficient term. And what's quite special is that W is still unique. So if you started with W scalar, you still have the same stochastic process that acts on all the modes, forces all the modes. Okay. So I'm not going to detail how this Galerkin machinery is implemented or how it is solved. But basically, instead of having a single equation solved by the Euler scheme, now you have a system of equations, a coupled system. It's not very complicated. Now, based on the expansion coefficients that I have in the series here, if I take for, say, convention that the first of these functions is a constant, then X0 will be the mean of the solution, well, the mean with respect to the parameter. And now I can have the expression of the different functions in the Sobol-Hoeffding expansion and the expression of the different partial variances, which basically, if you want to compute, say, the partial variance due to W only, you have to compute the variance of X0. Okay. So basically for each trajectory, you average with respect to the parameter and you look at the variance of the result. But this is made quite easily thanks to the polynomial chaos expansion. Well, maybe I didn't mention, but the typical bases we use here are polynomial chaos. So it's polynomial functions. Okay.
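A compact sketch of the expansion and the resulting Galerkin system just described, in my own slightly simplified notation (scalar case, orthonormal basis; C and D denote the drift and diffusion of the original SDE):

\[
X(t;W,Q) \approx \sum_{\alpha=0}^{P} X_\alpha(t;W)\,\psi_\alpha(Q),
\qquad \mathbb{E}_Q[\psi_\alpha \psi_\beta] = \delta_{\alpha\beta},
\]
and requiring the truncation residual to be orthogonal to every basis function gives, for each \(\alpha\),
\[
dX_\alpha = \mathbb{E}_Q\!\Big[\,C\big(\textstyle\sum_\beta X_\beta \psi_\beta,\,Q\big)\,\psi_\alpha\Big]\,dt
          + \mathbb{E}_Q\!\Big[\,D\big(\textstyle\sum_\beta X_\beta \psi_\beta,\,Q\big)\,\psi_\alpha\Big]\,dW ,
\]
with the same W(t) driving all modes. With \(\psi_0 \equiv 1\), the partial variances follow from the modes as
\[
V_W = \mathrm{Var}_W[X_0], \qquad
V_Q = \sum_{\alpha \ge 1} \big(\mathbb{E}_W[X_\alpha]\big)^2, \qquad
V_{W,Q} = \sum_{\alpha \ge 1} \mathrm{Var}_W[X_\alpha].
\]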
So let's take the first example. So, a linear additive system. And I have two uncertain parameters. One is Q1. It appears in the drift term, so it's basically the location of the attracting point. And for the diffusion, I have another parameter Q2 that appears here. If I make nu equal to zero, I have an additive noise problem. And if nu is different from zero, we have a multiplicative noise problem. Okay. So I will start with nu equal to zero. And I will consider also that Q1 and Q2 are independent. So in the case where we have an additive model, so nu equal to zero, you can show that you need just a degree-one polynomial expansion to exactly capture the solution. Okay. And here are trajectories of the mean mode, X0. So it really looks like typical trajectories for this kind of process. Here are trajectories for the mode that is linear in Q1, so due to the drift term, and you see that there is absolutely no variability. Because basically the system has a completely deterministic evolution if you fix Q1, plus some noise that comes and adds, but they don't mix. Okay. So if you fix Q1, here you would have a range with respect to W, and well, this is the average with respect to W; that's all you have. And this is the mixed term, sorry, the term related to the magnitude of the forcing. So this one obviously has an impact on the solution. Okay. If we have a multiplicative noise term, now it's a bit different because the two parametric sources can interact. So now you have variability on the linear term in Q1, and obviously, as before, some stochasticity in the other mode related to the linear term in Q2. You can even go to higher order, but very quickly, due to the structure of the system, you see that there is no variability in this mode. So you could even, if you were smart enough to analyze the structure of the Galerkin system, drastically reduce the size of the basis. There's absolutely no need to simulate these trajectories; there is no variability at all. Okay. So these are just densities of the coefficients in my expansion to show that some of them are really Gaussian, as you would expect for Gaussian forcing, but some of them are not Gaussian, because of the interaction between the modes. Here are also some projections onto coordinate planes to show how there exist some strong dependences between the coefficients, so how the Galerkin system is really coupling the dynamics between the different modes. Here's perhaps the simplest plot you could imagine to illustrate the idea. What we have here is a single realization of the Brownian motion. We have fixed the noise and we are varying the parameter. Okay. So you see here different trajectories for different values of the parameter, but you observe that they are very similar. I mean, they are shifted, they can be enlarged, but they are quite self-similar. Okay. And it is really this similarity that allows us to have a polynomial expansion that converges very quickly. Here's the opposite. We have fixed the parameter to a certain value and we are drawing different realizations of the noise. And you see that the stochasticity is clearly very much different. Okay. But now we can decompose. So here is a set of trajectories and you can decompose it into the function that depends only on the parameter. So essentially we have removed all the noise and we just have the variability with the parameter. We can do the opposite, that is, we average over the parameter and we have here the variability that is due to W only. And here's the mixture. Just for illustration.
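As an aside, here is a minimal sketch of the Galerkin system for the additive case just discussed, written as dX = -a (X - Q1) dt + (s0 + s1 Q2) dW with Q1, Q2 uniform on [-1, 1] and a degree-one orthonormal Legendre basis. All numerical values, the uniform parameters and the basis are my own illustrative choices, not the exact settings on the slides.

```python
import numpy as np

# Degree-1 orthonormal Legendre basis: psi0 = 1, psi1 = sqrt(3) q1, psi2 = sqrt(3) q2.
# Projecting the drift and diffusion on the basis gives three coupled modes,
# all driven by the SAME Wiener increment.
a, s0, s1 = 1.0, 0.5, 0.3
drift_shift = np.array([0.0, 1.0 / np.sqrt(3.0), 0.0])  # projection of Q1 on the basis
diff_coef = np.array([s0, 0.0, s1 / np.sqrt(3.0)])      # projection of s0 + s1*Q2

def galerkin_modes(T=2.0, dt=1e-2, rng=None):
    """One realization of the coupled mode system at the final time."""
    X = np.zeros(3)
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))               # one increment shared by all modes
        X = X + (-a * (X - drift_shift)) * dt + diff_coef * dW
    return X

rng = np.random.default_rng(1)
XT = np.array([galerkin_modes(rng=rng) for _ in range(2000)])   # replicas of the modes

V_W = XT[:, 0].var()                          # variance of the mean mode -> Wiener only
V_Q = (XT[:, 1:].mean(axis=0) ** 2).sum()     # purely parametric contribution
V_mix = XT[:, 1:].var(axis=0).sum()           # mixed Wiener-parameter contribution
V = V_W + V_Q + V_mix
print("S_W, S_Q, S_WQ =", V_W / V, V_Q / V, V_mix / V)
```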
Okay. This one is not that much important. And here are the, not the sensitivity coefficient I have introduced before, but the partial variance directly. So it's not normalized because the variance varies a long time. So it will be difficult to report everything in terms of normalize. So if you have absolutely no variability or no uncertainty on the parameter, so they have variance that are both zero, then all the variance of the system is due to the noise. Okay. That's only W as a source of variability. Now if you have variability also in the drift term, but not in the magnitude of the diffusion, the total variance in red here is essentially due to the variance due to the noise and you have a contribution of the parameter, but you have no interaction. That's what we have seen before where the old trajectory was exactly the same. So this is what is happening here. On the contrary, if you have here a determinist, well, a perfectly known drift term and some uncertainty on the magnitude of the noise, what you have now is a contribution of the, of the winner, of course, but also of the mixture. Okay. And now if you have uncertainty in all the parameters, so drifts and diffusion, now you have all the contributions that are non-zero. So with this way, we can really distinguish what is the inner run variability, what is due to the parameter and have a very fine understanding of what's going on. Okay. Here's another more complicated system. It's, I have just changed the drift function. So the system has two attracting branch. Okay. And we are starting from a point somewhere in the middle and depending on the realization of the Brunyan, we are led to go to a first attracting branch or to another one. Of course, there is a non-zero, but very low probability that when you have selected a branch, you could also branch to the other one, but this is a very rare event. Okay. And for this system, I'm considering some uncertainty which are on the initial location. So clearly if I'm starting from the very high point, it's very likely that I'll end up on this branch and not on the second one. So I have uncertainty on the starting point and I have also uncertainty on the magnitude of the stochastic force. Okay. So here is just an illustration of what is happening depending on my starting point. So if I start from the left, I always go on the same branch or essentially always go to the same branch. If I stop somewhere on the unstable point, I may select with equal probability one branch or the other. And as a contrary, if I start on the other side, I tend to always end up on the same branch. Okay. Here is the effect of the noise magnitude. So I'm starting always at the same point. If I have a very low stochastic forcing, basically I always go into the same branch. I mean it's like deterministic attractor while I stay on the deterministic attractor. And if I increase the noise level, I am able to change this picture. Okay. But of course we would like to do this in a more quantitative way. Okay. Although we understand perfectly what's happening in the system, we want to provide numbers. So as you've seen, even for a given starting point, depending on the brilliant or for a given brilliant, depending on my starting point or intensity of the noise, I can either finish on a branch or another, meaning that for a large time, my solution can be discontinuous. I have a branching process. I have a bifurcation. 
So the polynomial bases I was using before are not going to work, because I would need an extremely high polynomial degree to capture the discontinuity. So we have here to use some multi-wavelet basis with some adaptivity in the parameter space. So this is just to illustrate: here you have the mesh of the parametric space, and we have here, I think, the initial condition and the noise magnitude in the other direction, and what is represented here is the size of the element over which we have a polynomial approximation. Okay, and here is the result. So you have the evolution of the total variance. What you can conclude is that initially it's essentially the variance that is due to the parameter that dominates. Okay, essentially it depends where you start, before the noise has had time to act. So as time increases, you have more and more contribution of the noise, but it's not very much important. Okay, and later on there is the mixed contribution that emerges, which in fact represents the events where, without noise, you would only have the uncertainty of the blue one, and with the noise it is just fluctuation around the branch, so to say; and here, the fact that the noise can make you switch from one branch to another also depends on the parameter. Okay, this is exactly the same, but for a higher noise level, and you see that now the magnitudes of the different contributions tend to be more balanced than before, because you have more noise in the system. It's quite what you would expect. Okay, as a final note, here it was made with a Galerkin projection. This can be difficult if you have a complicated nonlinear drift function, to perform the actual projection of the system, but you can also proceed in a non-intrusive way by selecting some values in the parameter space, computing the trajectories for these specific values, and then constructing, by say regression for instance, or quadrature, the coefficients of the expansion. And you can do that, and perhaps this is a possibility to better illustrate the interest of going through this polynomial chaos approximation. You see that in this case, to compute the coefficients for a given realization of the Wiener process, you would have to solve NQ trajectories for NQ different values of the parameter. The complexity here is that from one trajectory you go to NQ trajectories, but if your approximation here is sufficiently converged, then you can query your approximation for any value of the parameter, and so you can resample as much as you want. And if most of the variance is carried by the parameter, as opposed to the Wiener process, this can be extremely effective. But there remains a lot of work to really analyze and come up with algorithms that somehow balance the parametric error and the sampling error, and do something very effectively. Okay, one final comment here. Here I'm working directly on the representation, on the analysis, of the whole trajectory, and there is a good reason. For instance, you may want to apply this technique to quantities of interest like exit times, but exit times are not smooth functions, not even with respect to the parameter. You could imagine that, for a given realization of the noise, you have a trajectory that goes and exits; you change the value of the parameter, you may miss the exit boundary and continue and exit much later. So there is really an issue with the regularity. So working directly on the expansion of this functional would be extremely difficult, because you cannot construct such an approximation for G directly.
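A minimal sketch of the non-intrusive construction just described: the Wiener increments are frozen, the SDE is solved at a handful of parameter values, and the PC coefficients are fitted by least-squares regression in the parameter. The model, the degree and the sample sizes below are placeholder choices.

```python
import numpy as np
from numpy.polynomial import legendre

# For ONE fixed realization of the Wiener increments, solve the SDE at NQ values of the
# parameter q ~ U(-1, 1) and fit the PC coefficients of X(T; W, q) by regression in q.
# Placeholder model: dX = -a (X - q) dt + s dW.
a, s, T, dt = 1.0, 0.4, 2.0, 1e-2
n_steps = int(T / dt)
rng = np.random.default_rng(2)
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)    # frozen Wiener increments

def solve(q):
    x = 0.0
    for k in range(n_steps):
        x += -a * (x - q) * dt + s * dW[k]
    return x

q_nodes = rng.uniform(-1.0, 1.0, size=20)          # NQ training values of the parameter
y = np.array([solve(q) for q in q_nodes])

deg = 3                                            # polynomial degree in q
A = legendre.legvander(q_nodes, deg)               # Legendre design matrix
coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # PC coefficients for this Wiener path

# The fitted polynomial can now be resampled for free at any value of q:
q_new = rng.uniform(-1.0, 1.0, size=100000)
x_surrogate = legendre.legvander(q_new, deg) @ coef
print(x_surrogate.mean(), x_surrogate.var())
```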
However, you can use this trick to approximate the trajectory and resample, and by resampling this trajectory, you can very easily generate many, many samples of the exit time. Second part I want to discuss is the case of stochastic simulators. So Raoul discussed this, and it was a work with actually Alvaro Morais, who was a former PhD student of Raoul. So it's a different class of system, systems that are governed by a master equation. If your background is chemistry, that's how you know it. So basically what it says, it specifies the advancement in time of the probability of your system to be in some state x as a certain time t, knowing that it was in a state x0 at time t0. And on the right hand side, you have basically two contributions, the contributions that makes you leaving the state x at time t, and the contributions that makes you going to that state. So you're receiving some probability mass from elsewhere, and you are dumping probability mass by going elsewhere. So this is really a balanced equation. So in our case, x will be on the lattice. It will represent typically a number of molecules in a certain system, a molecule of different species. And this balance equation here has a sum over k reaction channel. So there are k ways for the system to evolve. So in the chemical system, you have k reaction. The function Hj here are the propensity function that basically fix the time rate at which a certain reaction j occurs. And they are generally dependent on the state. And the mu g here are the state change vectors. So each time you have a reaction of time j that occurs, you have to update the state of your system, and that's the quantity by which the state is changing for a single event. So it's a Markov process, and it's used in many, many different applications. But we have to solve this type of equation, while there are plenty of techniques to solve this equation, but what we consider is the Gillespie algorithm, which is an exact approach for this system. What you do is that you really keep in time, or keep track in time of the evolution of the system by jumping from a reaction to the next. So if you have k reaction with current propensity function Hj, you know that in the next time interval t plus t plus dt, this is the probability of the next reaction to work. And it's exponentially distributed. So the more reaction you have, the more likely it is that it occurred in a small period of time. So the Gillespie algorithm is quite easy to understand, so you start from your initial state, x0, then you draw the time to the next reaction, given that you are in a certain state, you can evaluate the rate of reaction here. You decide of the time to the next reaction, so you will advance your system to this time. And you pick at random the reaction that fires first with relative probabilities that are given by the propensity function. So the ones that have large propensity function are more likely to have occurred first than the ones that have low probability function. And then you update your system, and you continue till you reach your final time. So it's really mimicking all reaction in the system. So our goal here is to say, what if I have some uncertainty or some parameter in this propensity function, how can I deal and perform the same type of analysis that I had before using these type of simulators? But, well, actually we had absolutely no clue on how to do that. 
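A minimal sketch of the Gillespie direct method just described, for a generic set of channels; the birth-death example and its rates below are placeholders used only to exercise the routine.

```python
import numpy as np

def gillespie(x0, nu, propensities, t_end, rng):
    """Stochastic simulation algorithm (direct method).
    x0: initial state, nu: (K, dim) state-change vectors,
    propensities: function mapping the state to the K reaction rates."""
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while True:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                              # no reaction can fire any more
        t += rng.exponential(1.0 / a0)         # exponentially distributed waiting time
        if t > t_end:
            break
        j = rng.choice(len(a), p=a / a0)       # which channel fires first
        x += nu[j]                             # apply the state-change vector
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Placeholder usage: birth-death process with birth rate b and death rate d * x
b, d = 1.0, 0.1
nu = np.array([[+1.0], [-1.0]])
prop = lambda x: np.array([b, d * x[0]])
rng = np.random.default_rng(3)
t, xs = gillespie([0.0], nu, prop, t_end=50.0, rng=rng)
print(xs[-1])
```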
So the first question we wanted to address is, even for this simple problem, so I can generate multiple trajectories, I can estimate the variance of the population size at some time t, but I'd like to be able to answer and say, if my question is among this k reaction, which one are responsible for most of the variability? So I have no parametric variability, but already I'd like to address this question, which are the reactions that are causing the most of the variance. And surprisingly, we didn't find any answer to this question, which seems quite important, but no. What people are doing is generally they average, they look to the average trajectory and eventually they look to the sensitivity of this average trajectory with respect to the coefficient here, but it's really direct. So how can we do that? So here's a typical example of a set of trajectories. You have here two reactions, how to distinguish the effect of the two is really unclear. So let me just generalize a little bit more the symbol of the decomposition. So I have a functional that depends on n independent random functional, and here I'm just writing the orthogonal decomposition that I had before, but it is generalized. So I have the random input in my functional, here's just an ensemble notation for the symbol of the decomposition. And here it's the definition of the orthogonality, and here's a variance where you sum about all possible, well, all functions in your decomposition, except for the first one, which is a mean. So how can you define this decomposition? Well, here is the definition of this partial variance. So if you consider a set of the direction specified by the vector u, then the variability due to this vector is simply the variance of the expectation of f given this stochastic input minus the sum of all the variance of the functions that contain this set of indices. And you have here the other definition. And from that you can define two important indices, what I call the first order indices, that is the part of the variance that is due to only a certain subset of input, and only this subset. And then I have the total variance, oops, sorry for that, here, so for subset of variable or input in u, u is the index of the input that I want to characterize. So the total variance is not only the variance that is due to this input, but also this input and their interaction with any other input. And I think that in this situation where I have independent input, I have a nice way, a Monte Carlo approach that was proposed by Subaul, to estimate this partial variance and then this sensitivity indices, so first order and second order. The computational effort you have to do is basically compute correlations between your stochastic output when you keep part of your inputs exactly the same, so you generate exactly the same replica for part of the inputs and you redraw the inputs that you want to analyze. So basically for each coefficient you want to compute, you have an additional trajectory to solve, so you double, but the cost is eventually scaling linearly with the number of indices you want to compute. So if you just want to separate or if you have k-channel in your system, you will have to increase as compared to say estimate classically by Monte Carlo moment or an expectation, say now you have to estimate end time or as many input more trajectory, but it's still linear. Okay, but how to apply this to the stochastic simulator? 
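Before turning to that question, here is a minimal sketch of the pick-freeze (Sobol') Monte Carlo estimator just outlined, written for a generic deterministic function of independent inputs; in the simulator setting the "inputs" will be the per-channel driving processes introduced next. The toy function, the input distribution and the sample size are placeholders.

```python
import numpy as np

def pick_freeze_indices(f, sample_inputs, n_inputs, M, u, rng):
    """Sobol pick-freeze estimates of the first-order (closed) and total indices of the
    subset of inputs `u`. sample_inputs(rng) draws one full set of the independent inputs;
    f maps such a set to a scalar output."""
    A = [sample_inputs(rng) for _ in range(M)]
    B = [sample_inputs(rng) for _ in range(M)]
    # Hybrid sample: inputs in u are frozen (taken from A), the others are redrawn (from B)
    C = [[a[i] if i in u else b[i] for i in range(n_inputs)] for a, b in zip(A, B)]
    yA = np.array([f(z) for z in A])
    yB = np.array([f(z) for z in B])
    yC = np.array([f(z) for z in C])
    var = yA.var()
    f0sq = yA.mean() * yB.mean()                    # estimate of E[f]^2
    S_u = (np.mean(yA * yC) - f0sq) / var           # first-order (closed) index of u
    ST_u = 1.0 - (np.mean(yB * yC) - f0sq) / var    # total index of u
    return S_u, ST_u

# Toy check: f = z0 + 2*z1*z2 with standard normal inputs has S_{z0} = ST_{z0} = 1/5.
f = lambda z: z[0] + 2.0 * z[1] * z[2]
sample = lambda rng: rng.normal(size=3)
rng = np.random.default_rng(4)
print(pick_freeze_indices(f, sample, n_inputs=3, M=50000, u={0}, rng=rng))
```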
Here in this Gillespie algorithm it's very unclear what are the stochastic inputs on the channel, but more exactly when I draw a random number, I can hardly say okay, this is a number drawn for certain reaction channel. Actually, if I change the indexation of the channel, I may completely change the dynamic. So how can I decide of things or inputs that are staying the same for some channel and others that are varying for the channel? And this was really smart idea by Alvaro Morais was to say okay, what if you go to the Kurt's representation where you represent here the evolution of your system in terms of independent standard Poisson processes because now each channel in this representation has its own Poisson processes. So you can repeat them or you can resample them and you can decide to keep the same sum of them and we compute others. And then you will be able to compute some correlations. And of course, it doesn't mean that by conditioning on a certain realization of Poisson process for a certain channel, you don't change the reaction time of this channel because everything is linked. Okay, so here is typically you have to compute some conditional expectations. So you may fix the value of some of the Poisson processes associated to a subset of channels and you can resample the other one. And then you can compute the correlation to get the sensitivity in this. So if you already have this representation, Kurt's representation, you already have all you need to answer the question of what channels are important in terms of variables. In fact, this is the key position in some of Poisson processes, variable processes. It's done under which assumptions on the coefficients Ae? I don't think there is any particular... But this is... We can make it for any... Any type of systems. Any type of systems? Yeah. Okay, without any assumption on the coefficients Ae? I mean... No, no, no. Those on the intensities of the germs, in fact. No. Now in this problem, you don't have any restriction on our assumption. Okay. It's also known as the time change representation. Okay, it's Tg, so it's a scale... It's Tg here, so the Poisson here are normal, but the argument here depends on the... I see, it's linked to the integral of Ae. Okay, thank you. Okay, so let's see what the conclusion of such analysis is. So here I'm considering the simplest system, so the death birth process. So I have individuals that are with birth rate B. It's independent of the state because it's spontaneous generation. It's not couples that are making children. No, it's just... So you have a birth that occurs once in a while at a rate given by B, and then people are also dying, but at a rate D that is linear in the size of the population, so it's D times X. The more individual you have, the more frequent is to observe a death, or people dying. Right? Okay, so if you apply the Gillespie or the Kurtz algorithm, you can generate trajectories. There is an invariant measure for T going to infinity, which is actually a Poisson distribution that you have on the right. Okay? And now what we would like to say in this variance that we may have, say, at T equal 8, is it B that is more important? Is it my coefficient D, or is it the birth or the death channel? Actually, I should say. Okay. Do you have ideas? I had no idea. Okay, so here are the results. 
So at very early times, all the variability is due to the birth process, because we start with a zero population, so if there is no individual, no one is going to die, and there will be no variability in the number of dead people, right? Okay. So initially it's all the birth process that carries the, sorry, here, the variability, and as time is increasing, the variability due to the death channel is building up. The thickness of this line here represents the distance between the first-order sensitivity indices and the total-order sensitivity indices. So the thickness of the line represents the interaction between the two channels. Okay? And actually, if you run the computation over a very long period of time, okay, here it's a log scale, you see that eventually the contribution of the death channel is going to die, okay? Because if you don't have variability in the birth, well, nobody remains, okay? So over a long time, you need to have variability in the birth to have variability in the death, okay? And eventually, everything will be due to the birth channel and its interaction with death. Okay? Now, a more complicated system: this is a Schlögl system. It's just four reactions, well, actually two reversible reactions. We have three species, B1, S, B2, but B1 and B2 are considered in large excess and are considered constant. So we just model the evolution of the number of individuals of species S, which we call X. Here are the propensity functions, so you have two-body reactions, three-body reactions, et cetera. And here are some trajectories. For this system, which represents the infection of a cell by a virus, you have basically two asymptotic states. One is that the virus has replicated a lot, okay? Or the virus is not replicating and remains at a very low level. So infection is low or infection is high. And you have here the density, not the histogram, more exactly, at T equal 8, of the solution. Okay? So for this system, when we carry out the analysis we had before, so we want to characterize the effect of the four channels that we have, this is what we obtain: the variance is essentially due to the first and fourth reactions, while the second and third are much less important. We see that we have strong interactions. But the main conclusion is that the first and fourth reactions are the dominant ones in terms of inducing variability. And how can this be explained? It can be explained by the fact that it's essentially these two channels and their variability that play on the selection of keeping a low level of viruses or giving birth to lots of replicas. So selecting the upper branch or the lower branch is essentially dictated by channels one and four, while the two other channels induce some noise, but they are way less likely to induce a complete shift of branch. That's what you have here in this plot. So here, on each of these small plots, you have a draw of N1 and N4, so the processes of channel one and channel four, and we repeat for other realizations of channels two and three. And you see that most of the time you stay on the same branch; it's quite unlikely that you shift. While in the other case, which is completely the opposite, you see that you tend to be less selective in your branch. Okay, that's another system, but I have a taxi to take, and I don't think you would be very much interested in the details of these results, but it's just to stress that you can have a quite fine understanding of how the different channels are combining their stochasticity.
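The channel-freezing experiments behind these plots rely on the random time-change (Kurtz) representation, where each channel consumes its own stream of unit-rate exponential gaps. A minimal sketch in the style of the modified next-reaction method is given below; the birth-death rates, seeds and the "freeze the birth stream, resample the death stream" usage are my own illustrative choices, not the exact settings of the talk.

```python
import numpy as np

def kurtz_trajectory(x0, nu, propensities, t_end, channel_rngs):
    """Random time-change simulation: channel j consumes ITS OWN unit-rate exponential
    stream, so the noise of one channel can be frozen while the others are resampled."""
    K = len(nu)
    x = np.array(x0, dtype=float)
    t = 0.0
    Tj = np.zeros(K)                                   # internal (rescaled) times
    Pj = np.array([rng.exponential() for rng in channel_rngs])  # next internal firing times
    while t < t_end:
        a = propensities(x)
        with np.errstate(divide="ignore", invalid="ignore"):
            dt = np.where(a > 0, (Pj - Tj) / a, np.inf)   # physical waiting time per channel
        j = int(np.argmin(dt))
        if not np.isfinite(dt[j]) or t + dt[j] > t_end:
            break
        t += dt[j]
        Tj += a * dt[j]                                # advance all internal clocks
        Pj[j] += channel_rngs[j].exponential()         # next jump of channel j's Poisson process
        x += nu[j]
    return x

# Birth-death example: freeze the birth stream (fixed seed) and resample the death stream,
# which isolates the variability carried by the death channel.
b, d = 1.0, 0.1
nu = [np.array([+1.0]), np.array([-1.0])]
prop = lambda x: np.array([b, d * x[0]])
same_birth = lambda k: [np.random.default_rng(10), np.random.default_rng(100 + k)]
print([kurtz_trajectory([0.0], nu, prop, 30.0, same_birth(k))[0] for k in range(5)])
```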
It's quite obvious that now, and perhaps I even have this, you can add some parameters in the propensity functions and consider them as an additional, independent source of uncertainty. So this is a case for instance where you add an uncertain parameter in the propensity function. This is a situation where you draw different Poisson processes; this is the picture you had before, but it's always the same parameter. While here, it's the same Poisson processes, but you change the value of the parameter. Okay, and again, the trajectories are more similar. Okay, and now you are able to separate and say, okay, my sensitivity indices have a contribution due to the parameters, that's the blue one here, a contribution due to the channel stochasticity, and one that is due to the mixture. And obviously in that case, it would be quite easy, I think, to learn the parameter, because the uncertainty on the parameter is the dominant source of variability, so we should be able to learn these parameters quite easily. The opposite: if we had here a yellow contribution that was far up, well, I would say that you don't really care about the actual value of the parameters, because they don't give you much variance, and it would be very difficult to learn them. Okay, I think maybe I would prefer to answer questions rather than commenting on these slides. Any questions? Like when you were discussing the sensitivity analysis for the stochastic ordinary differential equation, you have capital Q1 and capital Q2, and you assume that they're independent and you've done the sensitivity analysis in that way. If you just have an idea, if you don't have this independence between Q1 and Q2, how does it change, does it make it easier, the sensitivity analysis? Or to consider them independent. If I consider them dependent. Dependent? Yes. How does it change this sensitivity analysis? Well, first you could still continue to perform a separation between the inherent stochasticity and the parametric one, but it would have to be globally the global contribution of the parameters. Now, if you want to have a finer analysis and distinguish between the contribution of Q1 and the contribution of Q2, then if they are independent, you can apply the Sobol decomposition. If they are not independent, you cannot do that anymore. So recently some techniques were proposed to perform this kind of variance decomposition in the case of dependent variables, and perhaps you could reuse them. But the interpretation of the sensitivity coefficients that you get at the end, or the characterization that you get, is not very well understood at the moment. Typically, you have to account for the correlation between parameters, and you can have sensitivity coefficients that can even become negative, which is really strange, right? So thank you Olivier for the presentation. So to come back to the sensitivity analysis, here you have presented something related to pathwise sensitivity, where you condition on one motion and so on, but you could also perform a sensitivity analysis regarding the PDE underlying the stochastic differential equation, that is, the distribution is given by a linear PDE. So you could perform also the PCE on this PDE. So could you comment a bit about the choice of doing one way compared to the other? So what is your motivation? There are some, well, first I wanted to work pathwise, which is a good motivation. One issue with trying to work directly on the densities or on the master equation is tractability.
First, I mean, the systems that we want eventually to solve are high dimensional, so solving this master equation in high dimensions is very complicated. That's one point. The second point is that the approximation techniques we are using, so polynomial chaos, honestly they are not that good at enforcing positivity. So expanding a density can be problematic. That's another motivation. And basically for us it was making more sense to say, okay, these simulators are existing, stochastic simulators or your favorite tools, let's try to reuse this machinery as much as possible. And as you've seen, this is actually what we are doing. Not with the Galerkin, but, well, the Galerkin projection is my favorite, but you could do it non-intrusively and then you just need to have a solver, which I think is a very strong point for users. I'm not very much aware of people working on the direct solution of the master equation. But honestly, positivity is a big issue. Okay, so I think it's time for you to leave. Thank you. Thank you very much.
Stochastic models are used in many scientific fields, including mechanics, physics, life sciences, chemistry, and queueing and social-network studies. Stochastic modeling is necessary when deterministic models cannot capture features of the dynamics, for instance, to represent effects of unresolved small-scale fluctuations, or when systems are subjected to important inherent noise. Often, stochastic models are not completely known and involve some calibrated parameters that should be considered as uncertain. In this case, it is critical to assess the impact of the uncertain model parameters on the stochastic model predictions. This is usually achieved by performing a sensitivity analysis (SA) which characterizes changes in a model output when the uncertain parameters are varied. In the case of a stochastic model, one classically applies the SA to statistical moments of the prediction, estimating, for instance, the derivatives with respect to the uncertain parameters of the output mean and variance. In this presentation, we introduce new approaches to SA in a stochastic system based on variance decomposition methods (ANOVA, Sobol). Compared to previous methods, our SA methods are global, with respect to both the parameters and stochasticity, and decompose the variance into stochastic, parametric and mixed contributions. We consider first the case of uncertain Stochastic Differential Equations (SDE), that is systems with external noisy forcing and uncertain parameters. A polynomial chaos (PC) analysis with stochastic expansion coefficients is proposed to approximate the SDE solution. We first use a Galerkin formalism to determine the expansion coefficients, leading to a hierarchy of SDEs. Under the mild assumption that the noise and uncertain parameters are independent, the Galerkin formalism naturally separates parametric uncertainty and stochastic forcing dependencies, enabling an orthogonal decomposition of the variance, and consequently the identification of contributions arising from the uncertainty in parameters, the stochastic forcing, and a coupled term. Non-intrusive approaches are subsequently considered for application to more complex systems hardly amenable to Galerkin projection. We also discuss parallel implementations and application to derived quantities of interest, in particular, a novel sampling strategy for non-smooth quantities of interest but a smooth SDE solution. Numerical examples are provided to illustrate the output of the SA and the computational complexity of the method. Second, we consider the case of stochastic simulators governed by a set of reaction channels with stochastic dynamics. Reformulating the system dynamics in terms of independent standardized Poisson processes permits the identification of individual realizations of each reaction channel dynamic and a quantitative characterization of the inherent stochasticity sources. By judiciously exploiting the inherent stochasticity of the system, we can then compute the global sensitivities associated with individual reaction channels, as well as the importance of channel interactions. This approach is subsequently extended to account for the effects of uncertain parameters and we propose dedicated algorithms to perform the Sobol decomposition of the variance into contributions from an arbitrary subset of uncertain parameters and stochastic reaction channels. The algorithms are illustrated in simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. 
The sensitivity analysis output is also contrasted with a local derivative-based sensitivity analysis method.
10.5446/57399 (DOI)
Thank you very much for the introduction and thank you everyone for resisting until the very last. So the last talk will be kind of a very relaxed, laid-back walkthrough, only with a bit of a different perspective in terms of statistics and the application of statistics to the topic of uncertainty quantification, and one particular subtopic which is called reliability analysis from the engineering perspective. So it's kind of a difference. I'm kind of the black sheep here because I represent the Chair of Risk, Safety and Uncertainty Quantification at ETH Zurich, which deals with carrying out research in the field of uncertainty quantification for engineering problems, with application to structural reliability, sensitivity analysis (we just had an excellent talk from Olivier), model calibration, which is related to Bayesian inversion, reliability-based design optimization and so on and so forth. The chair is led by Professor Bruno Sudret, and I apologize, I see that the colors are somewhat weird, so I hope the images will come out nice anyway towards the end because I have nice animations. So we do research on a number of topics and today I will actually be introducing two of them. One of them is metamodels and the other one is structural reliability analysis. Okay, so just to give acknowledgement where it's due, this lecture is largely based on three courses that we teach at ETH Zurich: one is uncertainty quantification in engineering, the other one is structural reliability and risk analysis, and the third one is actually a block course in a joint venture with the University of Zurich which is called uncertainty quantification and data analysis in applied sciences, and we teach the first block; everything is taught by Professor Sudret and myself. So that said, the outline of my talk: in the very first part I will try to kind of set a common jargon, because we come from different backgrounds and so we need to kind of understand each other. Then I will introduce one particular type of tool that we use, which is called Gaussian process modeling, and then we will see how to use these objects within a particular type of application which is called reliability analysis, and then how to use them in a, let's say, more advanced way, and then I will draw some conclusions. Okay, so let's start with the terminology. Okay, so we're all familiar with computational models, and what are computational models in engineering fields? Okay, so they are also called simulators; actually most of the time people talk about simulations or simulators when talking about computational models, and what they are, they're basically a combination of what you're very good at, which is the mathematical description of reality, okay, of something that has to do with some physical process, okay, and then some discretization techniques to basically go from continuous to discrete algebra that you can actually solve, and some algorithm to smartly solve them. Okay, now how do we use them? Well, basically there are a number of ways to use them, from basically trying to calibrate them from real data, so that we can create what are called virtual prototypes, which are basically cheaper than just building a hundred cars and then throwing each one of them against a wall. We can maybe try to calibrate by using some cars, we can try to build some type of finite element model of the car and then make a simulation, because that's actually cheaper than destroying the car. You would be surprised how few cars are destroyed in going from zero to the final design of the car.
We're talking about five or six and we're talking then about millions of cars out there. Then they're used to optimize the system so as an example why you have to reduce cost and within performance constraints and you can use them to assess the robustness, reliability, will this building fall in case of an earthquake, will it not and so on and so forth. However, remarks, what's one of the characteristics? They're expensive so a real engineering model doesn't take less than one hour to run even on dedicated infrastructure. Another very big thing is they're very often proprietary, they're both from someone else. There's no source code, we have no idea about the equations that are sold inside so they're completely black boxes. We can run them on a certain set of parameters, we can get the response of the model, we don't know how we get there or most of the times we don't. But then we have an issue because the real world is uncertain so even when you have your design, your beautiful Tesla design on your computer and you want to throw out in the road and then you order the pieces from the manufacturer and they're all different than one another and all different than what you requested. Sometimes it's even a property of materials and sales. As an example, you're talking about concrete and well depending on the day the concrete was mixed, the properties changed. But then you're also talking about unforgast exposure so you're also talking about the loads that the systems will be subject to so as an example due to well extreme events, floods, earthquakes, hurricanes or anything accidental human action but also terror. So we have to keep all of them in mind basically when dealing with those very highly expensive systems. So how do we deal with this in a quantitative sense? So there is uncertainty, we need to take care of uncertainty. So what I'm showing here is basically it's a framework, it's kind of a mindset so that this will set some semantics that I will use later on so that we always know what we are talking about. So we've talked about physical models and that we all agree basically they are well whatever computational model you're working on could be PDEs, finite element models, finite differences, whatever type of integration technique you want to use. Those models typically depend on a number of parameters and or boundary conditions. But those boundary conditions are subject to uncertainty as I just said that could be due to practical or physical reasons but it could be due to lack of knowledge as we saw in an earlier talk. And for the sake of today I only have one hour to talk about basically the contents of three courses so bear with me if I will tell you. Let's only consider those particular sources of uncertainty for today that can be represented through random variables. Okay? A number of parameters of boundary conditions and they can be put together through some type of random joint probability distribution. And then what's the idea? The randomness in this particular type of models and in particular type of input models, input models, models of uncertainty. When propagated through a physical model which again for the sake of today we will consider deterministic as opposite to what was just presented by Olivier. But otherwise it would need probably a couple more lectures. So the probabilistic input once propagated through the deterministic model results in actually of course a probabilistic output, a stochastic response. 
And with those things we want to do something with this particular output. We want to give some form of characterization, be it the moments, the probability of failure, or of not performing within nominal ranges, the response PDF or whatever. Sometimes you want to do some type of analysis, as an example sensitivity analysis, which was just introduced, or even Bayesian calibration, and through this particular type of information we can actually update our problem and rethink the sources of uncertainty, as an example to work in smaller dimension, which is typically easier than in higher dimension, or to actually have reduced variability ranges once you condition your initial information on data. So I will now focus throughout the lecture on the third step, which is uncertainty propagation. So it's really going from the uncertainty in the inputs that we take as granted. One whole field of UQ is actually figuring out those uncertainties. It's very important, but today let's imagine that we have a joint PDF of the inputs, and the idea and the question is: I have a black box model, how do we get to the uncertainty on the output? And what do I want to know? Well, if I'm lucky, just mean and standard deviation; if I'm less lucky, the response PDF; and sometimes, if I'm really unlucky, I have to go to the probability of failure, and I will focus a little bit on this in the last part of the lecture because it's a very interesting problem. So what's the solution? Monte Carlo simulation. Monte Carlo simulation is very nice because basically the methodology you know very well, so we'll kind of skip it: you just make a big sample from the inputs that you assume known, you get the model responses and you do whatever statistics you like. So this is great, it allows you to do anything that you like, but convergence, that's a big issue. Monte Carlo simulation requires a large number of samples to get meaningful estimates, and as I said, the typical number of samples we're talking about is 10 to the 3 to 10 to the 6, and sometimes much higher, we'll see an example later, and that's just not feasible. Typically in engineering scenarios we have a budget of maybe 100 runs, that's it. So metamodels come in quite handy. So I will be using in kind of an interchangeable way the words metamodels and surrogate models, but I will introduce a little bit of a subtle difference between the two in a later part of the talk. So what is a metamodel? Well, the idea of a metamodel is some kind of inexpensive-to-evaluate, typically analytical, function that approximates accurately, with respect to some accuracy measure, a computational model, okay, and it is built from a small sample of point-wise evaluations, so I'm talking about a non-intrusive or black-box approach to build the model, and this small experimental design, which could be as an example a Monte Carlo sample, but small, typically a few hundred points or a hundred points, is completely evaluated point by point through the black box; we know nothing about how this thing comes out. And a couple of metamodel techniques you probably have heard about: one of them is polynomial chaos expansions, which was just described by Olivier, and another one you may have heard about is Kriging, where instead of basically having a deterministic regression problem you basically represent your model as something stochastic, and more on this in the next couple of slides.
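To make the cost argument concrete, here is what a crude Monte Carlo propagation through a black-box model looks like, and why it becomes expensive for small failure probabilities; the limit-state function and the input model are placeholders.

```python
import numpy as np

# Estimating a failure probability P_f = P[g(X) <= 0] of the order of 1e-3 with about
# 10% relative error needs on the order of 1e5 model runs, since the coefficient of
# variation of the crude Monte Carlo estimator is sqrt((1 - P_f) / (N * P_f)).
rng = np.random.default_rng(0)
g = lambda x: 4.5 - x[:, 0] - x[:, 1]          # placeholder limit-state: "failure" if g <= 0
N = 100_000
X = rng.normal(size=(N, 2))                    # placeholder joint probabilistic input model
pf_hat = np.mean(g(X) <= 0.0)
cov = np.sqrt((1.0 - pf_hat) / (N * pf_hat))   # relative error of the estimate
print(pf_hat, cov)
```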
So that's more or less a setting okay we want to propagate uncertainty doing Monte Carlo would be perfect but it's too expensive so we try to use metamodals okay so what's your approach well we do actually a small Monte Carlo or something like this if you're a smarter we can choose actually the points carefully more in this maybe later then we calibrate the metamodal of choice in such a way that until the which is the metamodal in some sense behaves like the full model and then we substitute the model with its target okay and then we perform Monte Carlo sampling why do we do this because Monte Carlo sampling with the metamodal is inexpensive we are talking about orders of million or millions of model runs per second okay it's very very cheap they're polynomials or something like this per core so what's the idea the idea is we trade the computational cost of Monte Carlo okay which would be n Monte Carlo times n Monte Carlo runs for the trade of the for the cost of the experimental design okay so we ignore the cost of Monte Carlo now because doing it with surrogates is inexpensive and however we have to focus on the costs to calibrate the surrogate so one very well-known tool for uncertainty quantification in general and one very well-known metamodel is what is called Gaussian process modeling and or kriging I will use the two words interchangeably so I apologize for any one of you who know what the Gaussian process and what kriging is but I have one and a half slides to explain you Gaussian processes so please bear with me so the question the idea is what is the Gaussian process okay well given that you have your your own nice probability space with your sigma algebra and your probability measure and you have some random variable or some random set of parameters that belongs to a space of dimension i t m so m will be the dimension i t of our space okay the number of input parameters of your model then a stochastic process is basically a Gaussian stochastic process is such if basically for any finite sets of points that belong or element that belong to r to dm the joint distribution of z of this set of points is a Gaussian okay so it's basically an extension to infinite dimension of a random vector okay which is defined basically by the joint distribution of any random set of points okay apologies to those of you who know what I'm talking about so couple of notes what is important about Gaussian processes is that they are entirely determined by two quantities one of them is basically their mean okay which is a function of the input parameters x okay which is the expectation value of the random process and by its covariance okay we know any joint distribution is a Gaussian variable is a multivariate Gaussian Gaussian variable basically knowing those two types of information tells us everything that we need to know completely characterize the process then the covariance function itself is normally it's actually a positive definite kernel which is in most cases although it's not necessary but it will make our life easier later on usually stationary so it's basically a kernel that only depends on the on some form of distance between the the input points and I will use a different notation later on because it makes stuff again simpler if you rescale these objects by removing the the variance of the process you get what is called the auto correlation function in the literature of Krigin this is very often used auto correlation that rather than auto covariance or they're kind of both of them are 
used. So just to see a Gaussian process in action in one dimension (by the way, they're also called Gaussian random fields, some people may have heard of them under this particular name), consider this particular covariance kernel. Okay, so it's basically a scaling times a multivariate Gaussian, independent, with what are called scale parameters. So it's a parametric kernel, and I forgot to put the explicit dependence on theta here, but obviously there is a dependence on the theta vector, just scaling parameters for each one of the input variables. And consider that you have these objects, consider you have a 1D random field, and look at the effect of having such a covariance kernel. So those are different realizations of the 1D version of this object for different values of theta_i, and I'm sorry this is not really visible, but you see that basically varying the theta changes the autocorrelation length. And how does it translate on random realizations? So we are talking about a random process, so there are trajectories. Those are several random process realizations generated, I think, with the same random seed, maybe wrong, I don't remember honestly, but for different values of theta, and as you see, basically having an autocovariance that decays very rapidly results in a relatively rough process realization, and having a very long covariance length, or a slowly decaying one, results in a very smooth process. Okay, so that's it in a nutshell. Why are those nice? Because by tweaking some of the parameters we can actually get some kind of continuous random variable which has properties of smoothness that can be somehow regulated. Okay, so far so good for Gaussian processes, but what's the use of a Gaussian process? I mean, this object is just a random variable, it's not an emulator, it's not a metamodel, a surrogate, nothing. So here comes the idea of Kriging, okay, also known as Gaussian process modeling and, in the machine learning community, Gaussian process regression; they're all related basically and sometimes they're exactly the same thing. So the idea is the following: take our model y equals M of x, okay, and for the sake of simplicity again y will be just a scalar value, so we are talking about a model with scalar output, okay, consider this is our model. Then we consider that our model is a realization, a particular realization, this is very important, of this fancy random field, which is basically composed of a first part, which is a deterministic trend, okay, a regression trend, and then some scaling parameter times a stationary, zero-mean, unit-variance Gaussian process, where omega basically represents all the information that I mentioned before, including basically the type of covariance that you are using and the parameters of the covariance kernel and so on and so forth. Okay, and of course there are the regression coefficients and there is the variance of the Gaussian process. Now, one thing that I think is very important: our model is one realization of this thing. That means that this particular Gaussian measure that we are introducing by using this machinery has nothing to do with the response of the actual model, okay, this is very important, because it doesn't mean that we can only represent models whose output is actually Gaussian. So let's take a couple of assumptions. So now we have a lot of, let's say, degrees of freedom, and we have to make some assumptions to actually choose meaningful Gaussian processes. So let's see.
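Before the modelling assumptions, a quick sketch reproducing the correlation-length effect described above: realizations of a zero-mean, unit-variance 1D Gaussian process with a squared-exponential (Gaussian) kernel. The kernel parametrization, the theta values and the roughness diagnostic are illustrative choices, not necessarily the ones on the slides.

```python
import numpy as np

def gaussian_kernel(x1, x2, theta):
    """Squared-exponential kernel k(x, x') = exp(-(x - x')^2 / (2 theta^2))."""
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / theta) ** 2)

x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
for theta in (0.02, 0.3):                                   # short vs long correlation length
    C = gaussian_kernel(x, x, theta) + 1e-8 * np.eye(x.size)  # small jitter for the Cholesky
    L = np.linalg.cholesky(C)
    Z = L @ rng.standard_normal((x.size, 3))                # three independent realizations
    print(f"theta={theta}: mean |increment| = {np.abs(np.diff(Z, axis=0)).mean():.3f}")
```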
there are many approaches to how to make a trend the the simplest of them is basically considering that well we know there is no trend basically there is just a well known constant that's fine you're happy with it ordinary basically tells you okay just try to grab the constant from the data and universal query is basically put whatever you like as a trend of your model and then the second assumption that we need to do so this is an assumption so at least the form of the trend we need to plug in depending on our prior information considerations on the nature of the model or just basically random guess and the second thing we have to choose is basically the type of autocorrelation function of the Gaussian process of the the unit variance stationary Gaussian process and typically you have to choose a family I showed you earlier one particular family which was the Gaussian family there are many out there they have fancy properties that are that can be interesting really interesting I will not dwell on this we can discuss about that later but the point is we have to choose one form of this particular parametric variance function okay so then what do we do we start from some data so I said metamodors the idea is you trend the cost of the Monte Carlo for the cost of the training so we have some kind of set of training points which I will call experimental design and their number is n e d okay it's this number which is expected to be much smaller than like the full Monte Carlo simulation you would like to run but you can't afford and the corresponding model responses basically I will always use this fancy notation with a with a parenthesis on top that means I'm talking about points in the experimental design then we assume that the model is some realization of this particular random process that I described before such that the values at the design points are known so it's basically a condition we are conditioning the random process to the particular data and what's the goal remember we want to use a metamodel we want to approximate the model with these cheap approximations so the goal is predict the value of our original model okay or to approximate the value of the original model in a new point x0 that doesn't belong to the experimental design just as a mnemonic I will put zero as a as a I forgot the name basically on the on the low part yeah as a subscript thanks sorry as a subscript instead of a superscript so that is immediately apparent that it's not a point in the experimental design so it's a very it's a very bad notation but very quick when I only have an hour to describe the whole thing so please bear with me if it's not very accurate and these objects must be something that I can use instead of the model to get the same information in a at least statistically statistically coherent way that I would get from the full model okay so we are looking for a predictor so let's think about a bit about what is a Gaussian process so we said a Gaussian process is that thing that is uniquely defined by the joint distribution of any set of basically on on the joint distribution of its value on any set of point x and this set of point could be the experimental design therefore basically if we had an experimental design we know that the joint distribution basically what we know is that each one of the responses of the Gaussian random field where this notation obviously okay I marked it here means the the the Gaussian process on the experimental design point it's a Gaussian variable with it's very easy 
to demonstrate that it has this particular mean and it can be entirely described but it's very compact form so it's a very nice linear linear model given that we have z which is now a standard random variable and the joint distribution of any two points so it's not very clear okay no it's clear this is bold so the joint distribution of any number of points and these I apologize they should be removed this this index should be removed is actually an nid joint Gaussian distribution with a certain mean and a certain covariance okay so this is the response of of the Gaussian process on the experimental design points okay this allows us to define two ingredients basically one which is called the regression matrix which is basically just well the sum it represents this part basically of the creating process it's a trend only and then what is called the correlation matrix which is basically what is derived from this correlation kernel or covariance kernel that we defined earlier okay now the question is okay it's so far so good but that's on the experimental design so the question is now we want to predict the next point which is not in the experimental design so what's the joint distribution well it's the same game as before the joint distribution is actually we want to predict the new point so what will be the joint distribution between the new point and the points in the experimental design well it will be still be a Gaussian but this time it has one more dimension of course it's because we are predicting one point obviously if you have more points to predict these objects will become larger and larger Gaussian and with the mean which is just the mean as before plus the value of the trend in the particular point where zero and covariance matrix which is exactly like before let's look at the correlation part you just add one on the main diagonal and then you get the covariance sorry the cross correlation between the new points and all the other points in the experimental design this basically the same ingredients as before okay so now we have a joint distribution what do we do with this we have the joint distribution of the experimental design plus this new point that we want to predict so what do we do with this well let's just take the mean predictor with the Gaussian random field it's a very logical choice because it's also the mode predictor basically it's also the most likely value and well it's very it's a little bit of algebra but very easy at the end of the day to get a linear predictor like this from the previous equation so basically the mean predictor which is what we will use as the surrogate approximating the model is given by this expression basically there's a term which is basically just a trend plus a correction factor uh that is due to the Gaussian random field and we'll see later what this does this is a very important uh term and then we can also get a variance out of this because remember we are talking about a Gaussian process and okay what we care about is the mean as the predictor so this will be our surrogate model that's good but it's still a stochastic process so it has a variance so for each point we can also have a look we can also calculate explicitly in a closed form the variance for each new point so why is this interesting the predictor has basically one term which is a regression term plus one term which is basically a local a local correction because that depends basically on where you are and on the covariance on the particular points you have and one of the 
properties of those two things is that actually this particular surrogate interpolates experimental design if you now throw inside here um any point which is already in uh in your experimental design then the mean value will be exactly the value of your model so it's interpolated and in addition the variance of the process is zero again for those of you who are familiar with Gaussian processes I'm talking about models without noise again for the sake of time so that's very nice so we are basically we can predict the value on one point it's consistent because if we try to predict the points we already know we get exactly the same value with no with no variance and how do we use this information I'm very sorry I don't know why the images look like this they don't look like this on my screen but probably if I turn it like this you don't see it so it won't have much so due to this particular Gaussianity we know for every point basically the value the mean so this is this fence if you could read the the the the legend you could see that basically in the blue line this is an example of a 1d uh analytical function and uh the yellow dots believe me the area low dots here would be the points in the experimental design and the dashed line would be the predictor so this is this mean predictor that we just defined so as you see it interpolates through the data and it kind of kind of copies the behavior of uh of the function and what you have in the in this gray area is actually the plus minus I think 1.96 sigma confidence level so I'm just taking basically 95 confidence level for each point considering that every point has a Gaussian distribution so it's pretty easy to do and the nice thing is the gradient prediction of course it's an interpolator the more points you add the more this object becomes it's a synthetically consistent so the the whole variance tends to zero everywhere and the values tend to the actual values of model so we're happy so far and so good but we've only been talking about I know everything about my Gaussian process okay but in practice I don't know neither the actual parameters given that I assume a certain covariance function I don't know the parameters I don't know the the scaling factor and I don't know the coefficients okay so the question is in practice what do I do so I know that I need to solve Monte Carlo I can't solve Monte Carlo but I can run a hundred runs what can I do with those 100 runs okay so well I made it I make a choice as I said in the beginning about the particular autocorrelation function I like Gaussian so let's use a Gaussian autocorrelation function or whatever that is and then I try to estimate the parameters from the only data that I have which are the experimental design points okay and how do I do this well the cool thing is that everything here is Gaussian so we can we can get a lot of stuff analytically and what we can do is actually do maximum likelihood estimation to actually get the parameters of the theta parameters of the autocorrelation function and the corresponding beta and sigma squared so just very quickly I don't want to kill you we have a bit yeah you have enough time I guess how do you do this well you just use maximum likelihood estimation again the the likelihood is analytical given all the Gauchanity everywhere and I don't want to to go through the solution but just a couple of remarks basically what is cool about this particular expression is that solving for beta and sigma can be done separately then solving for theta and I will 
in a minute explain why, but I want to give you a caveat — again, I don't think you will be writing it down and deriving it in a couple of minutes — but once you introduce the estimation of beta from real data, so you're not given the beta but you're estimating this from the data, then the variance of the Gaussian process changes slightly. Okay, this is important because somehow the additional uncertainty you have due to the inference of the additional parameters is reflected in your prediction variance, so somehow the predictor becomes more uncertain, okay, due to the fact that you have to estimate additional parameters. Okay, let's say this is not very important, you find it in any textbook about kriging. But what is nice is that actually the log-likelihood, if you see, is quadratic in beta, and that's very nice because it's convex and it has a closed-form solution — very nice, it's basically generalized least squares, super. And once we have beta, again we can actually solve directly for sigma squared as well, and we have again an analytical expression given that the data are noiseless, okay — otherwise this has to be estimated numerically from the data, but beta always admits the generalized least-squares solution. And then, basically, to get the correlation hyperparameters you just have to solve, with some method of your choice — global, local, anything — an optimization problem on the reduced likelihood function, which is just basically the term that remains out of your whole thing, because for every value of theta you can calculate sigma and you can calculate beta, and so, well, it's basically just a matter of numerical optimization. Okay, so let's see: how do you use this? So what do we do with all this machinery? Okay, now we know, given a small sample, how to build an approximation to the full model, okay, using Gaussian processes. And how do we do this? I mean, let's try basically a very, very easy, simple example, y equals x sine x, and our experimental design is basically six random points that are given to us by someone who measured them, because why not. We've not been very lucky, as you can see, because, while some of the points are distributed all around and there is quite some information about your model, there's not so much information about this region, because basically some of the points are clustered up together in one region. So, well, that's life, this is what we have to work with. So let's apply kriging, and well, this is the result of doing a kriging surrogate given this particular data. I'm using a Gaussian kernel — in reality I'm cheating, this would be a Matérn kernel, but because in the previous slides I said Gaussian, I keep Gaussian here — and the optimization is just very simple, gradient-based. And as you can see — pardon — you see the following: you see that at the experimental design points your surrogate, which is the dashed line, perfectly interpolates, okay, and when you are close to the experimental design points, as expected, the kriging variance tends to zero. Why is this important? Okay, well, to some degree the kriging variance gives you information, and here comes the reason why I wrote metamodels everywhere instead of just surrogate models: the surrogate is one way to use this object, but the "meta" is some information which is not in the original model — which is the variance of the kriging model that you use to approximate it — that can actually be used to learn something.
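To make the formulas above a bit more concrete, here is a minimal numerical sketch of the 1-D example just described: a handful of runs of y = x sin(x), a Gaussian autocorrelation, and the mean predictor together with the kriging variance. It is deliberately simplified — the hyperparameters theta and sigma squared are fixed by hand instead of being estimated by maximum likelihood, and a zero constant trend is used (simple kriging) — so all the numbers are only illustrative assumptions, not the ones from the lecture slides.

```python
import numpy as np

# Toy "expensive" model from the 1-D example.
def model(x):
    return x * np.sin(x)

# Experimental design: a handful of model runs.
x_ed = np.array([0.5, 1.0, 3.0, 5.5, 8.0, 9.5])
y_ed = model(x_ed)

# Gaussian (squared-exponential) autocorrelation with hand-picked hyperparameters.
theta, sigma2 = 1.5, 25.0
def corr(a, b):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / theta) ** 2)

R = corr(x_ed, x_ed) + 1e-10 * np.eye(len(x_ed))   # tiny nugget for numerical stability
R_inv = np.linalg.inv(R)

# Simple kriging (zero trend): conditional mean and variance at new points.
def predict(x_new):
    r = corr(x_new, x_ed)                                   # cross-correlations r(x0)
    mean = r @ R_inv @ y_ed                                  # mean predictor = the surrogate
    var = sigma2 * (1.0 - np.sum((r @ R_inv) * r, axis=1))   # kriging variance
    return mean, np.maximum(var, 0.0)

x_new = np.array([2.0, 7.0])
mu, var = predict(x_new)
print("prediction at x=2 and x=7 :", mu)
print("true values               :", model(x_new))
print("95% half-width            :", 1.96 * np.sqrt(var))
print("max interpolation error at design points:",
      np.abs(predict(x_ed)[0] - y_ed).max())
```

At the design points the predictor interpolates the data and the variance collapses to (numerically) zero, which is exactly the property exploited next for active learning.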
And in particular, basically this variance tells you something about how confident the surrogate is about its own prediction. Okay, so it's actually pretty clear: at the points farthest from the experimental design you have the maximum variance; at the points closer to the experimental design, or where you have the most points, your conditioning of the Gaussian process is much stricter, so you have much smaller variability. So the question is: can we use this information, okay, can we do something better? So instead of just randomly — because we've been unlucky and someone gave us this kind of messy experimental design — let's say we have a budget of 10 model runs instead of just six: how do we choose the next points? Okay, we start from these; how do we choose the next ones? Well, this brings us to a keyword that is very, very popular nowadays, which is called active learning, which means, well, learning from my current knowledge status. Okay, so the idea is I want to adaptively enrich the experimental design in some regions of interest and capitalize on whatever information I have. Okay, and there's a very naive approach for this: given the status we had before, I'm producing here on the right basically the kriging variance, which is just the square of this particular gray area, and obviously the most naive way is: okay, I take the place where there is the maximum variance, which is at about 10, and I add it, okay, to the experimental design — hello, okay, nice. So I do it and, well, as you can expect — let's see if the animations work, okay — the algorithm basically adds points always in the regions where there is maximum variance, and it basically targets those regions, and it enriches the experimental design in such a way that our uncertainty, the epistemic uncertainty, is reduced everywhere. Note that this sampling is not necessarily uniform, not at all, because the more points you add, the better you condition the random field — I don't have time to go into this discussion, but we can talk about this later. So let's now switch to the next step. So now we learned how to use kriging to approximate a full model and then to use it to do Monte Carlo simulation. Now let's take a step back: why were we even doing this? So the point is: what are the typical questions in engineering that kind of define the field of uncertainty quantification? Some of them are easy: what's the scattering of a quantity of interest, or basically of my model response? Another one is what was answered in the previous talk, which is: what are the parameters that drive the uncertainty in my model response? That was sensitivity analysis. Another one is: what's the probability of failure of my system, so what's the probability that my system doesn't perform where it should, or how it's supposed to be performing? What is the optimal design that minimizes the costs and still keeps performance to a certain degree? Or how can I reduce the uncertainty in my input given that I have some observations of the output, which are incomplete basically — which is Bayesian inversion? Now, we will be focusing on this particular problem, probability of failure. Why? Because I introduced Monte Carlo, and Monte Carlo is very nice and you've seen a lot of things about its convergence and you're quite familiar with it, and probability of failure can result in one of the nastiest types of Monte Carlo around. Okay, so just very quickly: what is the limit state function, and what is structural reliability, or reliability
analysis or rare event estimation which is a close relative of set excursions basically this is various names depending on the community you're talking to basically but it's always the same thing somehow so the point the question is the following if we define some type of failure criterion okay so which means my model performs fine as long as its response is below a certain threshold okay whatever the response whatever the model and so on but typical examples are imagine you have a building so you imagine you have a bridge and basically if there is a lot of traffic the bridge kind of bends a little bit but it's fine as long as it doesn't bend five meters if it bends five meters the thing breaks down so you can basically based on your knowledge you can define some type of threshold and basically your question is what's the probability given a certain model of the traffic that the displacement will be above the maximum admissible displacement or you can have a maximum temperature in a transfer problem in an engine temperature must must not go above certain temperatures correct propagation when you talk about nuclear safety is extremely important and so on and so forth so how do we do it we do it with a with a little trick which is basically we define what is called the limit state function or performance function that probably to your eyes as mathematicians doesn't have any difference with what I defined in the beginning as a model and it's right but for engineering purposes there is a big difference because the model relates to reality this object is a function that is implicitly defined by its sign it's a function that represents some condition of the system so it could be as an example a combination of responses of different models as an example if you have really a bridge could be displacement in one point it could be maximum stress on the on the cables in another point so you can actually define it as kind of a sort of a combination of of different models of different physical models and why is this interesting it's interesting because it's defined basically by its sign so this function must be smaller than zero if the system is failing smaller equal to zero it must be bigger than zero if you're fine if the system is performing in a nominal range and well kind of the boundary of the two which belongs to the failure region is called limit state surface in this particular case is there's a very very simple limit state function which should be just basically the admissible threshold the admissible threshold minus the model response so if your model response is is maximum temperature and you have so it's the temperature maximum temperature in your in your system and then you have a threshold of 100 degrees and your model predicts 100 degrees your objective function your limit state function is basically well 100 degrees minus the actual temperature simulated given the set of parameters you have and it can it's usually represented in these fancy plots where you have kind of a failure domain in one region a safe domain in the other and the limit state surface in the middle okay so what's the probability of failure well the probability of failure is is something very simple it's a probability that given a certain input random vector x okay what's the probability that's this particular given this particular set of uncertainty or of uncertain input parameters the response of the model will be in the failure domain so the probability of failure is just defined basically as the integral of the of 
the input joint distribution in the domain of failure okay it's nice but we have an issue it's a multi-dimensional inputs and typical the dimension is order of 10 to 100 for an engineering system and the domain of integration is actually implicit that's the big issue because it's defined through the limit state function and what is more important is that failures are generally rare events a very common failure as a probability of 10 to the minus two but a realistic when you talk about nuclear safety or submarine safety we have numbers in the order of 10 to the minus six 10 to the minus seven so it could be very small however we'll see in a minute why this is important we can reformulate the problem by using Monte Carlo simulation we know Monte Carlo simulation that's easy so we can actually justify an indicator function which basically is one if you are failing and zero otherwise then the probability of failure is very simply the expectation value of this operator over the input domain that's nice because now we can use the entire input domain which we know and then we just in the black box approach we just need to know by value this particular function so a very quick and dirty way to do this is to use Monte Carlo okay just take the expectation value of this particular indicator function okay this is good but so well I will probably skip these animations so this is how you do it you just sample sample sample some points will be failing some points will not be failing and the ratio of the number of points failing to the total dimension of your set is the probability of failure very easy what's the issue the issue is the following this estimator is a sum of Bernoulli variables okay so it's basically its distribution is a binomial distribution which is nice it's unbiased because it converges to the actual value and it has convergence because the variance goes like one over n but the issue is this part here so one over n times pf or one minus pf why is this an issue because if you look at the coefficient of variation so variance standard deviation divided by mean well for rare events if pf is quite small it tends to this particular value one over square root of n which we all know and love times square root of pf so what's the issue now well consider that now we want to get an estimate of the probability of failure within five percent okay and we have a probability of failure of 10 to the minus k the question is how many Monte Carlo samples do we need to get these estimates right and well here is the issue we need four times 10 to the k plus two which means basically even for a relatively failure prone system okay one percent probability of failure we need an order of 10 to the 4 runs but we can only afford 100 that's where we have issues okay so just one word more about reliability analysis there is an entire course covering this it's it's been studied for 40 years there is a lot of research in how to deal this problem in a number of different ways what I want to present here okay so there are various families of methods the idea of what I'm presenting here is just a very small subset where I am using plain Monte Carlo simulation and I want to propose you a method based on metamodals plus active learning as a way as a complementary way which is complementary to all those uh pardon fancy other methods that are available also the method that you've studied throughout this school they can be put together to actually accelerate Monte Carlo okay so I just want to make sure that I'm showing kind 
of a very naive approach to the problem but there's a lot of literature to do something a bit more advanced so let's say let's now use kriging okay step one we just use it as I said in the beginning we create a kriging surrogate of some model of interest in this particular case the limit state function uh yeah the limit state function and we substitute it and then we do Monte Carlo okay that's the very simplest way so we take we calibrate kriging on an experimental design apologies this is the wrong symbol it would be capital NED and we use the mean predictor as a substitute for the model so we substitute basically to the domain of integration instead of g the mean of the Gaussian process and then we calculate the probability of failure through Monte Carlo exactly as we did before and we can even set confidence levels due to the kriging variance basically so we can basically define a conservative kriging model where remember we are looking for values for which g is smaller than zero okay so if we want value for which g is smaller than zero then zero we want to be conservative we take actually the mean predictor minus the variance so it's more likely that you are in the failure region or the other way around if you want to get a non-conservative estimate so we can kind of get confidence bounds approximate confidence bounds on the on the estimate our our Monte Carlo estimate and then we can do an estimation pure Monte Carlo you remember it takes one second to make a million runs so we can do as many as we like and we solve the problem three times one time with the the conservative estimate one time with the optimistic estimate and one time with just a substitution and you can actually get three different probabilities failure that are ordered like this and basically you can say how confidence we are with a particular estimate okay very simply let's see an example what is called the the hat function it's a head it's called head function because there's the safe region down here the failure region up here and then the the real limit state surface which is the boundary between the two resembles weekly a Mexican hat with apologies to any Mexican in the audience so we run it a hundred times as I said so we just take a sampling of the input we run it and we have a reference solution which is 1.07 by 10 to the minus 4 and what do we get well look at the center first so probability of failure it's 4 by 10 to the minus 4 not too bad it's the same order of magnitude and we spent a hundred instead of what we would need it which is in the order of a million but look at the confidence bounds they are basically four orders of magnitude we have an issue okay so how do we improve on this of course spoiler alert what's what do we need okay so as I said we are looking for surrogates that are accurate in surrogate in the model with respect to some to some appropriate accuracy measure and now I come back to this statement so now what's important when we look at the Monte Carlo estimates of PF what's important is that the indicator function is correct because that's what you use to to calculate the Monte Carlo estimate therefore what is important is actually the sign okay what is important that our surrogate has the same sign as g we don't care if it looks weird as long as it has the same sign okay so can we actually use this particular information so in other words that means that it must classify properly the the samples so the question can we do something and that's of course yes it's the same thing as we did 
before: active learning. But to do active learning we need to define an enrichment criterion. What we did before was, well, we want to minimize the variance everywhere — that's a good enrichment criterion for, let's say, getting an overall model which behaves nicely everywhere. But in this case we want to have an enrichment criterion that somehow is related to the sign, okay, that tells us where the model needs to be better constrained such that we don't get the wrong sign. And so what we do is: we choose some enrichment criterion, we update the kriging surrogate, we compute the probability of failure and we get the bounds, and we check if the bounds are too wide or not too wide, and we decide whether we are satisfied with the results or not. Okay, so again I don't have time to go through all of these; there is a lot of literature, it's all recent literature — we are talking about research from the early 2000s, actually 2005-2010, with basically the first real papers coming out around 2011 — and a number of enrichment strategies have been proposed. But for the sake of time I will be choosing the simplest — you have a question? oh no, sorry — the simplest and most intuitive one, which is the learning function called learning function U; it's from a paper by Echard, actually from Echard's PhD. And well, it's actually a very, very simple function. Remember what's our goal: figuring out whether we have a good estimate of the sign of the function or not. And how do we do this? Well, let's now take basically this particular ratio — remember we can only use the information that we already have at this current stage, okay, so we can only use the Gaussian process; we use the Gaussian process, and the information we have from the Gaussian process is the mean and the standard deviation. So let's have a look at this quantity: it's the absolute value of the mean divided by the standard deviation — in other words, it's the distance, in terms of standard deviations, from zero. Remember, what we are looking at is the sign of the function. So basically this object has this nice property — remember that for each point the Gaussian predictor has a Gaussian distribution — so basically this object tends to be large when you are very distant, in terms of standard deviations, from zero, which means that the probability that you get the wrong sign is very low, because you are at N sigma — or actually U sigma — from zero, either from above or from below. On the other hand, this object tends to be very small when the mean is small, so you are close to zero, and the variance is large — so you are close to zero and the variance is actually quite large, so you're really uncertain about the sign of your underlying function. So basically, choosing points where this is minimum gives the most information that we need in terms of improving our surrogate: we want to make sure that the surrogate everywhere has a high U, basically that it's always very well defined. Okay, and by the way, due to Gaussianity, the probability of misclassification — so the probability of having either a false positive or a false negative — is really analytically given by the CDF of minus U(x); that's just by definition. Okay, I won't talk about this, but just very quickly: we have basically to choose the best points — best means optimization — so how do we do it? Again, the U function is just a ratio between those two quantities; again you can get one million at a time, you don't really need to do anything fancy, you can do
brute force Monte Carlo in most cases obviously during big dimension or you have complex problems it's different but for the sake of today we take the same Monte Carlo points that we want to emulate uh and we just get the point which is the minimum okay let's see in one dimension so uh this is the same function as before I just translated a little bit up so that it doesn't have 50 probability of being below zero and we are interesting basically in uh estimating the probability that the model is below zero basically it's this area and this area divided by 15 okay so this area meaning this segment minus this segment divided by 15 which is the domain okay so what's the first point that we look at as if you remember earlier by just using active learning based on variance the first point what would be somewhere probably either like here or like here because that's the where the variance is the maximum experimental design is slightly different than before but by using this particular u function the most interesting point is actually this point here which is very close to zero and it has very high variability it means that very likely the sign of my predictor here is wrong okay and indeed you add that point to the experimental design and because it's an interpolant basically the whole uh I'm really sorry that it's not very visible the color uh I did not expect this the variance here is very reduced so the next point which is interest you see the curriculum predictor goes to zero around here there is a certain amount of variance and that's the next point and you do it a few times and well very quickly you immediately explore the areas where there is uncertainty and then basically at some points uh you stop I I'm not plotting here the the convergence criterion I will show you in the next slide which comes by now so I want to really conclude uh with a nice example which is again just a toy example because well we need to have a nice solution and to show in the lecture but still it's a nice example because it shows you um what is known as a serious system so imagine you have a you have a system with many components and if any of the components fail the system fails and therefore basically each one of the components may have its own behavior depending on the boundary conditions and different and separate failure regions and in this case it's designed it's it's constructed by basically a set of simple polynomial functions um and what I want to show you is basically those four regions here one two three and four so the input distributions are Gaussians and those regions are actual failure regions okay and what we do is for each iteration we add one point at a time I don't have time to discuss about multiple point enrichment but we have one point at a time and we let the algorithm run until the bounds on pf okay so remember the upper bounds by taking the probability of failure using Krigian plus uh the variance and the other one is Krigian minus the variance actually the other way around gives us nested probabilities and when the nested probabilities tend basically to to collapse we are converging so I'm representing with a nice animation uh last animation of today where you have basically the initial starting experimental design which are I think 12 points for unknown reason on the top right you have the learning function basically so this u function the probability of misclassification here you have the current estimate of the probability of failure so you will see some sort of convergence and down 
right is the estimated boundary between uh the failure region and the safe region and basically it's conservative and it's not conservative estimate so you start and the first point that becomes interesting is basically uh somewhere if you look at the u function basically given this particular experimental design the current sorrow gate you see that there are points which are very interesting either in this region or in this region okay now you add the points and what happens is now you added the point in this region and now it's again interesting appear and appear and what you what happens is that now you start adding point and you see that the algorithm starts enriching around the region that it knows then at some point it becomes knowledgeable enough it has become knowledgeable enough of this region it has jumped to the next region because now here adding point doesn't doesn't add anything anymore the variance is very small everywhere so the other points become interesting and then you start adding them here after a while you will start adding points here and points here and then here you see unfortunately you don't see the confidence balance but you see the the probability of failure starting to move and starting to converge and down here you also see the actual estimated in green it's the uh conservative estimate so it's higher probability of failure ready is the non-conservative one I know pardon these are just uh yeah conservative and non-conservative ones no actually no this is all the just zero it's failure domain pardon and safe domain and actually you let it run and of course it discovers the other region then it will discover the other region and everyone leaves happily ever after with a total cost in terms of model runs so we said this probability of failure is is again order of 10 to the minus 4 and a total cost in terms of model run of basically 70 80 runs okay and we calculated probability the probability of failure of 10 to the minus 4 which is very very low and we got it with the with the extreme you remember the confidence bounds which are about uh toward four orders of magnitude why don't you allow me to click okay you remember in the previous example just by substitution we had uh four orders of magnitude in the confidence bound here we are basically plus minus five percent that's the thing so with this I am done and instead of reading you the summary uh which is basically what we just said I just conclude with a remark uh what I've presented you is basically very basic of many different techniques but the idea is was to walk you through a mindset on how to solve these problems you can find this funny or interesting there is a lot lot lot of literature about how to combine this fancy technique of active learning for a lot of different problems from uh from structural reliability by improving on the Monte Carlo estimates so using subset sampling and all sort of fancy hybrid sampling techniques to actually using different surrogate models so low rank tensor approximations pc krigging which is which have nicer exploration properties so so this site I hope I didn't uh make it too boring and it's undone okay thank you very much for this interesting talk is there any questions uh thank you for such an interesting talk just two questions I have first uh how does uh like in the last examples that you showed how would the computational time get affected by increasing the dimensionality of the problem like does it blow up or how so uh these are good questions so the problem is 
is to some degree the dimensionality of the problem it depends on which ranges you're talking about so the predictor it's the surrogate model let's say doesn't change uh the cost increases slowly with the dimensionality of the problem what changes the cost is the number of points in the experimental design so the more points you have the slower it becomes but still remember that we are talking about models which are very expensive so each iteration takes about a second to run in this particular setting and we we tested up to 20 30 dimensions actually we went up probably until a hundred dimensions and each iteration with the surrogate excluding the time to run the model takes at most a few seconds so that's basically we never reached yet the point where the computational costs to calculate the surrogate increase the problem is that you have to be careful with which surrogate so and you really have to be careful because the curse of dimensionality affects everything and in particular it can affect surrogate models if you're not careful in designing sparse basically surrogates that take care of of the risk you have to go to go to get to the course of dimensionality as an example a piece expansion that was shown earlier by Olivier the the number of coefficients increases like binomial coefficients it's huge the more points the more dimensions you have you get exponentially actually factorial growth in the number of coefficients therefore in the costs for calibration and and so on and so forth so it's very difficult to to reply to you in terms of what's the difference in time needed it also depends on the topology of the limit state function so if you have only one failure region that's one thing if you have disconnected this joint failure regions here and there it can become more complicated it also depends on the methods you're choosing here we use in Monte Carlo it's very simple very basic but I have to run a million run a million surrogate modeling evaluations per iteration takes a second or two but you could use much faster ones like subs assimilation where you only need to run thousands as an example so it's it's it's a layered question and I can't give you a like direct answer generally direct answer generally speaking of course it performs worse and worse the higher the dimension but it's also the problem that becomes tougher and tougher so thank you I have two questions the first one is what happened if we insert uncertainty even in in the barrier so if we let the barrier stressed so for example if we this is very typical the threshold is very typically a so the point is your your limit state function becomes the model minus the threshold and the threshold is one random variable that's very typical okay so it's a typical set this problem this kind of problem yeah it's just a matter of defining g the limit state function in an inappropriate way this is one of the things I was saying the beginning m I use two different symbols even though in the end of the day they're both functions of of the inputs because m is very typical in engineering scenarios refers to a physical model a physical quantity that has an interpretation while g is a function that can put together different models including stochastic models of the resistance of your system as an example is the threshold is stochastic and so on and so forth so the easiest is you put the threshold to zero and you just put m minus the threshold and that's your g okay with the threshold is a threshold of random parameters as well so you 
can always do this this particular game or the threshold is a random parameter by itself okay and the second one is what happens if we insert a different kind of dependency between so you have state that x and y are a Gaussian vector no absolutely not no no wait what do you call what do you call x and what do you call y you mean x and y is m of x is equal to m of x why m of x I don't know nothing about why I don't know anything about the distribution this is what I was saying with this big warning sign okay every Gaussian measure does not mean that y is Gaussian okay so that's okay that's what okay I knew the question actually I have one question your system is pretty static but what if your data is obtained from a time dependent system does your method is straightforward so no there is so there is a whole different layer of different problems you have to define the problem of reliability and of time dependent reliability analysis first you have to define the problem given that you define the problem then depending on how you define this problem then you can either apply directly as it is or depending on the definition you have to change so in particular if you have models with multiple outputs okay and your limit state function is not as in this case as an example your model would have four different outputs okay because the model of your system has different subsystems and you have one output per subsystem you can also think about it as a time series okay if your problem is I have a time series and my system phase if any of the points fails then you can almost straightforward define a limit state function which is still a scalar based on the output of your function if your problem is more subtle then it's the problem that needs to be defined it's not a matter of technique it's a matter of defining the problem so again sorry I can't give you yes or no but almost let's say thank you probably so the can you go back to the slide just before the video just before the video can you go back to that slide they had one-dimensional example of what of active learning of one-dimensional example of like just creating just just the surrogate apologies I'll get there yes like one yeah yeah so my question is if say you initially had an idea that the response function is convex no no I'm saying that if say you already have a idea that the response function is going to be convex but at one of the design points you got an error in your response which led to now since you are doing the way there's no variance at the point at the design points so now you'll always be stuck at non-convex I'm not I'm not sure really I understand your question you're talking about this particular region of the no not this just so my question is say you took a number of design points yes and you already have an idea that your response function is going to be convex say you okay so you it's not this case it's like I have prior knowledge yes that this is gonna happen yes okay so you already have an idea however at one of the design points you got some error in your response variable now since there is no variance at any of your design points because you're pretty much interpolating so and that error led to non-convexivity in the interpolation but error you mean an error like a deviation from like your random noise on the simulation instead what you're talking about is that the error yeah I mean your response variable is such that whatever response you got it's in a manner that it led to non-convexivity because of interpolation 
yeah now is there a way while you are doing your active learning to take care of that fact so yeah so just just what I showed here is a very basic way of doing Kriging calibration okay in reality there is a much more comprehensive set the actual problem of calibrating Kriging is it can be cast as a Bayesian problem okay it's a Bayesian inverse problem you need to find the Thetas it's basically mapping Bayesian mapping and by mapping you can put a lot of additional information including as an example information about monotonicity you can put information about well random noise in the data information about as an example convexity or non-convexity information about anything that you may have derivatives and all these and you can build an appropriate setting in a Bayesian framework indeed sometimes the calibration or the the the Gaussian the Kriging so the calibrated Gaussian process that you use sometimes it's called the posterior process because you can you can actually derive everything as a in a Bayesian way so yes it's possible it's possible from these equations no not directly those equations make a lot of assumptions which are very basic it's it's the basic thing I could do but yes it's possible in theory it's absolutely impossible okay is there any more questions let's thank the speaker again
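As a recap of the reliability part of the lecture, the following self-contained sketch puts the pieces together on a made-up 2-D limit state: a small kriging surrogate with fixed hyperparameters, Monte Carlo estimates of the probability of failure with conservative and non-conservative bounds, and enrichment of the experimental design by minimising the learning function U. The limit state g, the hyperparameters theta and sigma2, the budget and the stopping threshold are all illustrative assumptions, not the settings used in the talk, and the calibration step (maximum likelihood) is deliberately skipped — so this is a sketch of the mindset rather than the exact algorithm from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical limit-state function g(x): failure when g <= 0.
def g(x):
    return 6.0 - np.abs(x[:, 0]) - 0.5 * x[:, 1] ** 2

# Standard-normal inputs and a large Monte Carlo population to classify.
X_mc = rng.standard_normal((100_000, 2))

# --- minimal kriging machinery: Gaussian kernel, constant trend, fixed hyperparameters ---
theta, sigma2 = 1.0, 9.0
def corr(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) / theta) ** 2
    return np.exp(-0.5 * d2.sum(axis=2))

def fit(X, y):
    R_inv = np.linalg.inv(corr(X, X) + 1e-8 * np.eye(len(X)))
    ones = np.ones(len(X))
    beta = (ones @ R_inv @ y) / (ones @ R_inv @ ones)   # "ordinary kriging" constant trend
    return X, y, R_inv, beta

def predict(surr, Xn):
    X, y, R_inv, beta = surr
    r = corr(Xn, X)
    mean = beta + r @ R_inv @ (y - beta)
    # simple-kriging variance (ignores the small extra term from estimating beta)
    var = np.maximum(sigma2 * (1.0 - np.sum((r @ R_inv) * r, axis=1)), 1e-12)
    return mean, np.sqrt(var)

# --- initial experimental design: a few "expensive" runs of g ---
X_ed = rng.standard_normal((12, 2))
y_ed = g(X_ed)

for it in range(40):                               # budget of 40 added runs
    surr = fit(X_ed, y_ed)
    mu, sd = predict(surr, X_mc)
    pf      = np.mean(mu <= 0)                     # plain substitution estimate
    pf_cons = np.mean(mu - 1.96 * sd <= 0)         # conservative bound
    pf_opt  = np.mean(mu + 1.96 * sd <= 0)         # non-conservative bound
    U = np.abs(mu) / sd                            # learning function U
    if U.min() > 2.0:                              # naive stopping rule on the sign confidence
        break
    x_new = X_mc[np.argmin(U)]                     # enrich where the sign is most uncertain
    X_ed = np.vstack([X_ed, x_new])
    y_ed = np.append(y_ed, g(x_new[None, :]))

print(f"runs of g: {len(y_ed)}, Pf ~ {pf:.2e} in [{pf_opt:.2e}, {pf_cons:.2e}]")
print("reference Pf by direct Monte Carlo:", np.mean(g(X_mc) <= 0))
```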
Uncertainty quantification (UQ) in the context of engineering applications aims at quantifying the effects of uncertainty in the input parameters of complex models on their output responses. Due to the increased availability of computational power and advanced modelling techniques, current simulation tools can provide unprecedented insight into the behaviour of complex systems. However, the associated computational costs have also increased significantly, often hindering the applicability of standard UQ techniques based on Monte-Carlo sampling. To overcome this limitation, metamodels (also referred to as surrogate models) have become a staple tool in the Engineering UQ community. This lecture will introduce a general framework for dealing with uncertainty in the presence of expensive computational models, in particular for reliability analysis (also known as rare event estimation). Reliability analysis focuses on the tail behaviour of a stochastic model response, so as to compute the probability of exceedance of a given performance measure that would result in a critical failure of the system under study. Classical approximation-based techniques, as well as their modern metamodel-based counterparts, will be introduced.
10.5446/57403 (DOI)
Okay, well, thank you very much for being here this early in the morning and it's a pleasure to be here again. So what I'm going to talk about is project evaluation under uncertainty. This is a subject that has been around for a while, mostly within the business community, but for the last few years actually there has been a lot of interest in the mathematical aspects of such topics. So, brief outline of my talk: I'm going to give some background, I'm going to discuss what we call the Hedged Monte Carlo algorithm, some examples, some conclusions, and then what I'd like to do is perhaps take the opportunity of being here to discuss some work with Emmanuel Gobet that is called the Non-Intrusive Stratified Resampling Method. So this is a method that, together with him and with his student, Gang Liu, we have developed, and it turned out to be pretty much motivated by this subject that we started in these studies of the Hedged Monte Carlo method. So this started with a project we had a few years ago with the Brazilian oil company called Petrobras. And so basically they asked us about the possibility of evaluating projects using real-option techniques from data that was observed in the market and that passed through some kind of computational treatment that they had, so that they could project the costs and their future gains. So the idea is that they have some big, big system that does some kind of optimization according to the information they have, and they do some planning on that and then run simulations based on these possible scenarios, and then they get the possible gains. So I'm going to be more precise on that a little bit later. So, but back to the main question of the subject. The question is, in real options, how to evaluate projects and their optionalities under uncertainty in a way consistent with market fluctuations. And what they mean by market fluctuations is: many times you are in a business, in a company, and you are trying to, on one side, decide about some investments of your company. On the other side, you have a trading desk that has to make investments, has to perhaps sell or buy calls, puts and forwards and contracts like that. And oftentimes such investments are so big that they have to be taken into account when you evaluate your projects. So you have some kind of hedging and you have also the general aspects of your investment. So, for example, this is building a real estate development and how you kind of hedge that with instruments from the market. So this is more or less the general question. So let me give you a brief outline of this field that's called real options. This is paradise for people who study BSDEs, stochastic control, dynamic programming. That means you, Stefan. Okay. So in real options, we are interested in assigning monetary values to strategic decisions such as create a firm, invest in a new project, start a real estate development, finance some kind of research and development project, temporarily suspend operations — and some people go even further: starting a PhD program, getting married or something like that. But I didn't put those among my possible serious applications. So usually there are complex claims, barrier clauses, an exotic character of such optionalities. There are also cash flows in decision trees, optimal exercise times, and a mix of historical and risk-neutral measures. So this is the kind of main issue here actually that was behind our discussions.
So how to, in a simple way, balance this historical information, historical data, with the risk-neutral measure? A little bit of background on the field. Well, everybody knows the original paper of Black-Scholes. In fact, in the Black-Scholes paper there's a mention of warrants; if you look carefully, there's kind of a little bit of thinking in this real option direction. There's a famous paper by Myers from 77. There was a Brazilian guy at Berkeley who during his PhD thesis worked on this value of natural resources. McDonald and Siegel wrote a very influential paper in the field, Blumen. So there are lots of things actually. This is a very, very broad field and there is a famous book by Pindyck. So, in its most basic and, let's say, simple form — and being simple, it's open to much criticism — the idea that people had in this real option technology or methodology is to consider that there is a spanning asset. So what's a spanning asset? A spanning asset is basically something that's highly correlated to your project. It could be even the actual value of the project. And this is something that's going to undergo some kind of stochastic process, and somehow the project value would be computed by taking some sup — actually, this is the optionality of entering the project, I should say. So V is the value of the spanning asset. You would enter the project with an investment I, and we have a discounting factor, and you decide whether or not to enter the project: you enter if the immediate value of entering the project is greater than or equal to the optionality of waiting, of keeping the option and not exercising it. So this is a typical American option type of thing, and you are then computing this sup over stopping times, given that at time t the value of your project is V, and you can put all kinds of models for your V. So then, in this context that I'm writing here, what you have is that P, as a function of time and of the value at time t, satisfies a Black-Scholes model with free boundary conditions. But of course this is extremely, extremely simple. It's extremely, perhaps, unrealistic even. On the other hand, people get a good mileage from this. They really tend to use it a lot in the industry. So what we want to do is actually go further, go beyond that, and try to develop technologies or methodologies that would not necessarily go through the good old Black-Scholes model and perhaps have much more complex payoffs and complex optionalities. But just to give you an idea, since this is supposed to be kind of a general seminar, the connection: there is a kind of dictionary between the world of, let's say, financial investments and the world of real options. And the dictionary is basically that the underlying price is substituted by the project's present value. The variance of the stock is usually taken to be the variance of the return value. The exercise price is usually the development cost. The expiration date is the time limit for the investment. The risk-free rate is usually the risk-free rate for the investment, which may not be the risk-free rate of the market. The dividend rate is the risk-adjusted return rate of the project. So those are kind of equivalents that became very common in the field. So there is a lot of literature on that and there are congresses. There are lots of people that are interested in this. And when you discuss with, for example, many companies such as oil companies and with medical companies that do research and development, they actually have a very good grasp of this terminology here.
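Under exactly the textbook assumptions just listed — a spanning asset V following a geometric Brownian motion, risk-neutral valuation, a single investment cost I and a dividend-like rate q playing the role of the risk-adjusted return shortfall — the option to enter the project can be priced like an American call. The sketch below does this with a plain binomial tree; all the numbers (V0, I, r, q, volatility, horizon) are invented for illustration and are not taken from the talk.

```python
import numpy as np

# Illustrative (made-up) inputs: spanning asset V0, investment cost I, risk-free rate r,
# dividend-like rate q, volatility, time limit T (years), number of tree steps n.
V0, I, r, q, sigma, T, n = 100.0, 110.0, 0.04, 0.06, 0.30, 5.0, 500

dt = T / n
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
p = (np.exp((r - q) * dt) - d) / (u - d)          # risk-neutral up probability
disc = np.exp(-r * dt)

# Terminal values of the spanning asset and of the option to invest.
j = np.arange(n + 1)
V = V0 * u ** j * d ** (n - j)
P = np.maximum(V - I, 0.0)

# Backward induction with early exercise: enter the project now, or keep the option alive.
for step in range(n, 0, -1):
    jj = np.arange(step)
    V = V0 * u ** jj * d ** (step - 1 - jj)
    P = np.maximum(np.maximum(V - I, 0.0),                    # exercise: enter the project
                   disc * (p * P[1:] + (1 - p) * P[:-1]))     # wait: continuation value

print("value of the option to invest :", P[0])
print("NPV of investing immediately  :", max(V0 - I, 0.0))
```

The gap between the option value and the immediate NPV max(V0 - I, 0) is the value of waiting, i.e. of the free boundary just mentioned.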
And they want things in this terminology. Okay. Well, so far so good, but there are things that over-simplify here. Usually, for many investments these companies think of an infinite time horizon, which is not really the case. This perfect correlation with the so-called spanning asset, the complete market, the perfect hedging — it's totally unrealistic. And it doesn't take into account competition. So there's a very nice, very, let's say, provoking, thought-provoking paper by Walter Schachermayer and Hubalek, "The limitations of no-arbitrage arguments for real options", and they really touch deeply on, let's say, the problem. They really pin down the fact that when you don't have a complete market, the computation that we do with all this beautiful technology of financial options may fail badly. Well, still, you want to do something. You want to get a number. You want to get decisions. And so we are kind of motivated, in fact, by looking at this paper, to try to at least give some kind of reasonable answer. So back to the motivation that I was talking about. In this problem we had with Petrobras, we basically have a set of traded assets, X^i, i from one to N — perhaps the oil price or the gasoline price, the, well, you name it. And there are non-traded assets, things like production curves and things like that, that are also going to go into some big machinery, which is what we call an oracle for the cash flow generation. And so basically what they would tell us is the following: if we have such and such curves of inputs, then we get that. And then they would give us this raw data and, you know, now you guys take care of it — you are mathematicians, you should be able to do it. Well, not really. So, but the problem then is how to make sense of this. And what I'm going to discuss now is a methodology. But note that to discuss this methodology, what we have to do is actually go back and understand really what's causing, let's say, the possible gains and how these things really work. So look the beast in its teeth. So cash flows and project values are usually highly dependent, at least in this energy industry, on commodity prices. So these things are very correlated. As I said, it's possible, and actually very likely, that the company is going to do financial hedging, especially when it has to do with several currencies. Often the profit and the cash flows come from margins — so, for example, from buying Brent and selling different products; so there's a margin between Brent and the gasoline price and so on and so forth, or kerosene and whatever. Usually, for evaluating these projects, this oracle that I just mentioned, which is this big system, is adjusted to run with and without a project. So the typical question they would ask us is: is it worth or not to start a refinery? Is it worth or not to expand a refinery? Okay. And what they would produce is then a set of simulations with the actual values, with the refinery and without the refinery. And the difference of such cash flows then would be basically the output. So they would give us perhaps the difference only, or perhaps both the with-refinery and without-refinery values. The evaluations of the cash flows that come from this oracle are usually very time consuming; they usually have to put a big computer working for several days to produce such simulations. And the reason for that is that usually the cash flows are based on optimizing the full network of the company. So for example, if you start a refinery in one place, then you need to decide how you're going to carry things out of that refinery, and that on the other hand brings in a bunch of different optimization problems. And so these things are time consuming.
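The with/without bookkeeping described here is simple once the oracle's simulations are available. In the sketch below the oracle output is just mocked up with random numbers (the real one is a large, proprietary and expensive optimisation system), and all figures — number of scenarios, years, discount rate, cash-flow levels — are made up; the point is only to show how the incremental discounted cash flow distribution of a candidate project would be assembled scenario by scenario.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scenarios, n_years, rate = 200, 10, 0.08        # made-up numbers for illustration

# Mock output of the planning "oracle": yearly cash flows for the whole network,
# simulated once without and once with the candidate project, scenario by scenario.
cf_without = rng.normal(100.0, 20.0, size=(n_scenarios, n_years))
cf_with    = cf_without + rng.normal(8.0, 15.0, size=(n_scenarios, n_years))

discount = (1.0 + rate) ** -np.arange(1, n_years + 1)
incremental_npv = (cf_with - cf_without) @ discount   # one discounted figure per scenario

print("mean incremental NPV  :", incremental_npv.mean())
print("5%-95% scenario range :", np.quantile(incremental_npv, [0.05, 0.95]))
```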
So for example, if you start a refinery in one place, then you need to decide how you are going to carry things out of that refinery, and that in turn brings in a bunch of different optimization problems. So these things are time consuming, and we have to use as little information as possible. And of course, in any investment there are windows of opportunity: sometimes you miss the opportunity and that's it. As I said, there are complex optionalities. So all those issues and motivations are the ones driving our questions and our approach here. What I am going to discuss now is a methodology, which is just a proposal, something we proposed as a way of evaluating such optionalities in a simple but not too simple way. It appeared in the special volume on commodities, energy and environmental finance at the Fields Institute, back in 2015. Looks like yesterday, but time flies. The challenge, as I said: most of the simulations come from the historical measure; we have managerial views that need to be incorporated; there is market incompleteness, there are unhedgeable risks, multiple assets. So the problem is not really simple. Many people have worked on this: there are classical methods, there are Monte Carlo based approaches, and my friends Jaimungal and Lawryshyn came up with an interesting method as well. So there are different approaches, but I do not really have time to go into that; the references are in the paper. So here is the approach. What we do is the following, and I am going to put some notation on the board, provided I can find the chalk. We have these driving assets, which we call X_t, and I keep them on the board because I need them for reference. For each t this in principle belongs to some R^n; it might be Brent, gasoline, kerosene, and so on. We have a hedging portfolio, which is phi_t; these assets can in principle be bought or sold, so phi_t also takes values in R^n. And we have this object we call V_t, which is the price of the option, of the derivative. So this is the hedge, and this is, let's say, the basic asset. Please feel free to interrupt me; I am missing my teaching duties. Yes? There is a question from the audience: how close or different is this from the paper by Schachermayer and Hubalek mentioned before? Yeah. So in that paper, what they do is basically a very simple example of evaluating a real option in the situation of an incomplete market. Now, in an incomplete market you have a whole interval of possible pricing measures, of risk premia, and they show that the value of this real option, the variation of this price, might be very, very big. So they basically do an example and they explain why the example is so striking and so serious. They do not really discuss any methodology; they just take the general method of the standard textbooks, like one of those references I mentioned, and say: okay, suppose now the market happens to be incomplete, then things are tough. Okay? So now let me explain the idea which motivated this paper of ours and this approach, which we actually implemented. The idea is the following. If I am at time t... okay, let me find my chalk again.
So if I happen to be at time t and I go to time t plus delta t, in a discrete way, I can look at the variation of my project price, which is given by V at t plus delta t, discounted at the risk-free rate, minus V at t. I can also do some hedging, and the hedging term is basically the hedge amount times the variation of the price: when you go from t to t plus delta t, I am assuming the asset varies like that, and of course the hedge is something you keep until just before t plus delta t. You then discount all of that, and for this whole quantity I can take an average. I am sure you have seen this in many shapes and forms, and perhaps this is an overly complicated way of putting it, but basically what I am saying is that the variance of this quantity from t to t plus delta t is really what you want to minimize. Now, instead of taking this average, which is denoted here by these brackets, you might consider some risk measure; you can do more complicated things, but the risk is usually taken as a function of the variance. You can also look at a one-sided variance, and all the variations on the theme; but for simplicity let us consider the variance. If you look at it that way, what you basically have is a backward recursion where you come back from t plus delta t to t and you want to minimize your risk from one date to the next. And what is the control here? In this case, the control is nothing more than the hedging portfolio vector, and you also want to compute the value V. So you minimize over V and over phi, given that you already know what happens in the future. Standard dynamic programming. So then you can actually produce an algorithm: you come backwards, you initialize the project value at time capital T with your payoff given the investment (this investment may also be random if you wish), and what I am going to describe is basically the Longstaff-Schwartz approach. Actually, I should stop here and give some credit. We had this idea by looking at a very interesting paper by Potters, Bouchaud and Sestovic called the hedged Monte Carlo approach. Now, what is the difference between the hedged Monte Carlo approach and the usual approach to real options? In the hedged Monte Carlo approach, we do not use the risk-neutral measure. We use the historical measure; we do the computations with the historical measure, and it is by introducing this phi that you effectively get back to a risk-neutral-type price. The phi here is again the hedging portfolio. If we were in the risk-neutral setting, the red part would not be here, and you would just do a general backward regression, basically a Monte Carlo method in the spirit of Longstaff-Schwartz. Here what we are doing is introducing that control and minimizing both over V and over phi. I am going to describe the algorithm now. Usually at this moment I connect this with the Foellmer-Schweizer approach. It is very well understood and, for those of you who know it (I also have the slides at the end), what we are getting here is basically the minimal martingale measure, nothing more, nothing less, when we work with the standard option setup; and there are lots of results about that.
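Since the recursion was only described verbally, here is a minimal written version of the one-step criterion as I read it from the talk. The exact sign and discounting conventions are an assumption (they follow the Potters-Bouchaud-Sestovic style), and the expectation is under the historical measure.

```latex
% One-step local risk, under the historical measure P, over [t, t+\Delta t]:
\[
  \mathcal{R}_t \;=\;
  \mathbb{E}^{\mathbb{P}}\!\left[
     \Bigl(
        e^{-r\Delta t}\, V_{t+\Delta t}(X_{t+\Delta t})
        \;-\; V_{t}(X_{t})
        \;-\; \phi_{t}(X_{t})\!\cdot\!\bigl( e^{-r\Delta t} X_{t+\Delta t} - X_{t} \bigr)
     \Bigr)^{2}
  \right] ,
\]
% and, going backwards in time, one solves at each date
\[
  \bigl( V_t , \phi_t \bigr) \;=\; \operatorname*{arg\,min}_{\,v(\cdot),\,\varphi(\cdot)} \;\mathcal{R}_t ,
\]
% with V_T equal to the terminal payoff.  With a variance-type criterion this is the
% local risk minimization that leads to the Foellmer--Schweizer decomposition and the
% minimal martingale measure mentioned in the talk.
```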
So I can discuss that at the end, but I decided to skip that part now and just give you the algorithm, which is actually the construction of the Foellmer-Schweizer decomposition, but in a concrete way, using a basis. So I am kind of mixing Foellmer-Schweizer with Longstaff-Schwartz. Going backwards: initialize the project at the final time and get the payoff, which in this example is just a standard call, because you decide whether to invest an amount K and you get out of it V_T, and you do that only if V_T is greater than K. So that is the usual initialization, and then you define an expansion of your project value and of the hedging portfolio, which, call it my perversity, I now denote by the variable xi rather than phi; xi here is the hedging portfolio. And now we solve a quadratic minimization problem. Rho here is the discount factor; you expand these things in the right basis, you do the computation, you press the quadratic solver of your MATLAB, or your favorite Python perhaps, and you get the argmin. Note that I am minimizing here over the variable xi, expanded on a basis, and over gamma, which are the coefficients of the basis expansion of the value V. Then we check whether it is worthwhile or not to exercise, if you have some clause allowing early exercise; if you don't, of course you skip that step. You repeat this backwards all the way to get the project price, and as a byproduct you also get the exercise region. Of course, on top of that you can have many, many variations. Alessandro, ah, Alessandro, you are there. So Alessandro, you see that you can also do your weighting here. We just did not know about your work at that point, sorry. We have been around for too long. Anyway, at this point I would actually like to see variations on that; I learned about their work only recently, but it would be nice. I must confess, the first time I read the hedged Monte Carlo paper I did not believe it. I sat down and said: okay, either this is totally false, or it is very simple and it is going to give me good results. So I implemented it in MATLAB, and what I am showing you here is just the textbook example that convinced me, and it is really a good example for your classes. Take for example a standard Margrabe option, where you switch between two different assets: you get one asset in exchange for the other, and you have perhaps a strike. For those you have a very simple closed formula, you can basically reduce it to Black-Scholes, and you can also compute it by the hedged Monte Carlo method. This is my end-of-afternoon implementation of that. So in green here you have the actual hedged Monte Carlo result, and what I label BS, for Black-Scholes, is the Margrabe option computed according to the Black-Scholes-type formula. You see the agreement is really fantastic, and I get the hedge almost perfect, except of course on the edges here; and if it were perfect, don't believe it, that would be too good to be true, you know that. So anyway, this convinced me that this is actually a nice method, worth really trying, and we tried it on lots of things.
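To make the backward regression concrete, here is a minimal, self-contained sketch of the hedged Monte Carlo recursion just described, written in Python. Everything in it is illustrative: the single driving asset, the geometric Brownian simulation under the historical measure, the polynomial basis, the payoff and the early-exercise clause are assumptions chosen for the example, not the Petrobras oracle or the authors' code.

```python
# Minimal sketch of the hedged Monte Carlo backward recursion (Potters-Bouchaud-Sestovic
# style, in a Longstaff-Schwartz flavour).  Historical-measure simulations, basis
# regression for both the value V and the hedge phi, optional early exercise.
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative market / project parameters (assumptions) ---
S0, mu, sigma = 100.0, 0.10, 0.30        # historical drift mu, NOT the risk-free rate
r, T, n_steps, n_paths = 0.05, 1.0, 50, 20_000
K = 100.0                                 # investment cost
dt = T / n_steps
disc = np.exp(-r * dt)

payoff = lambda s: np.maximum(s - K, 0.0)                  # option to invest: (V - K)^+
basis  = lambda s: np.vander(s / S0, 4, increasing=True)   # 1, x, x^2, x^3

# --- simulate driving-asset paths under the historical measure ---
z = rng.standard_normal((n_paths, n_steps))
X = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
X = np.hstack([np.full((n_paths, 1), S0), X])              # shape (n_paths, n_steps + 1)

# --- backward hedged regression ---
V = payoff(X[:, -1])                                       # value at maturity
for i in range(n_steps - 1, 0, -1):
    x_now, x_next = X[:, i], X[:, i + 1]
    B = basis(x_now)
    # columns for V_t(x) = B @ gamma  and for  phi_t(x) = B @ xi (multiplied by the
    # discounted asset increment), so one joint quadratic minimisation in (gamma, xi)
    A = np.hstack([B, B * (disc * x_next - x_now)[:, None]])
    y = disc * V
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    gamma = coef[:B.shape[1]]
    V_cont = B @ gamma                                     # continuation value per path
    V = np.maximum(V_cont, payoff(x_now))                  # early-exercise clause

# --- last step: at t = 0 all paths start at S0, so V_0 and phi_0 are scalars ---
A0 = np.column_stack([np.ones(n_paths), disc * X[:, 1] - S0])
(v0, phi0), *_ = np.linalg.lstsq(A0, disc * V, rcond=None)
print(f"hedged-MC project value V0 ~ {v0:.3f}, initial hedge phi0 ~ {phi0:.3f}")
```

Commenting out the early-exercise line turns this into a European-style hedged Monte Carlo, which is the variant one would compare against a closed formula in the two-asset Margrabe sanity check just described; with the hedge term included, the price should be essentially insensitive to the historical drift mu.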
In fact, what is also very nice: suppose that instead of having only two assets you put, say, ten other assets there, and you assume the payoff depends on only two of them, but you actually do the computation with all ten; the results are not very different. That, I think, is what I am showing here: you get this kind of blurred picture because you are coming from ten dimensions and projecting onto two. So these are, as I said, the Margrabe option results. Okay, now some real practical examples that came from data we received. The company wants to compute an optionality that would last, say, eleven years. The project value depends on twelve different underlyings, so we are really in high dimension here, not in a low-dimensional situation. The option is exercisable every year for the first five years, so there is a window of opportunity, and the company also has a trading desk that could be used for hedging and for investments in the financial markets: they might shut operations and just play the stock market. And the optionality is evaluated using several different sets of hedging assets; we tried different combinations. These are real data, provided to us; we actually do not see anything of the machine that produces them, and we used very few paths, something like two thousand, which looks crazy but actually gives something. So in blue here is this optionality for the company, as a function of the Brent price. This is a kind of put, in a certain sense, a kind of protection: the refinery here acts a bit like placing a put on the market. The real option value is in blue, and V minus K, the immediate exercise value, is what you see in red. In this case the result is that it is better to wait for the refinery to be in place. So that is the kind of example and solution we get. More examples: a 15-year investment, now expressed in monetary units, with an 8% risk-free rate. The cash flow distribution we estimate from the data they gave us: this is the mean of the cash flow, this is the 95th percentile, this is the 5th percentile. The project value distribution looks like that, already discounted to present value, so you see that the project value decreases with time, which makes sense because there is a finite final time: with fewer years of project left you are of course getting less cash flow. What we show here is the immediate value distribution; the minimum value for exercise is shown here, and this would be a kind of trigger for you to invest, if you hit that value. And this is the distribution of project values: as I said, the lower line corresponds to the 5% quantile, the top one to the 95%, so in 90% of the cases you would be in between, and you can compute everything. I have another example, which I think I am going to skip, because I need to know how much time I have. Until when can I speak, until 12:30? Half an hour? Twenty minutes, okay.
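Going back to the Margrabe sanity check described a moment ago: for the zero-strike exchange option there is indeed a simple closed formula one can compare the hedged Monte Carlo output against. Here is a small sketch of that benchmark; the lognormal dynamics, the zero strike and the parameters in the usage line are illustrative assumptions (with a non-zero strike this closed formula no longer applies directly).

```python
# Margrabe formula for the option to exchange asset 2 for asset 1 at maturity T,
# i.e. payoff (S1_T - S2_T)^+, both assets lognormal with correlation rho.
# This is the kind of closed-form benchmark used to validate the hedged Monte Carlo run.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def margrabe(s1, s2, sigma1, sigma2, rho, T, q1=0.0, q2=0.0):
    """Price of the exchange option max(S1_T - S2_T, 0); q1, q2 are dividend yields."""
    sig = sqrt(sigma1**2 + sigma2**2 - 2.0 * rho * sigma1 * sigma2)
    d1 = (log(s1 / s2) + (q2 - q1 + 0.5 * sig**2) * T) / (sig * sqrt(T))
    d2 = d1 - sig * sqrt(T)
    return s1 * exp(-q1 * T) * norm_cdf(d1) - s2 * exp(-q2 * T) * norm_cdf(d2)

print(margrabe(100.0, 95.0, 0.30, 0.20, 0.5, 1.0))   # example parameters
```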
Okay, so those examples were based on real data, but we decided: since we do not have control over this real data, let us run lots of examples with the codes we prepared. With those codes we can run some thought experiments. One would be a project that depends on, let's say, the Google stock price and a gas commodity price; we construct a cash flow, which we arbitrarily chose as some step function, and we produce a huge number of simulations just to see how it goes, and we see that the results are not so bad. This, for example, is the time series we used for the gas, and this is the Google asset price, and we basically played an option between these two assets. So these are the log returns, we do the asset simulations, and we get the cash flow simulations of this fictitious oracle, because our step function is now playing the role of the oracle, plus a description of the statistics under different scenarios and of the intrinsic value statistics. What I show here is the mean of the intrinsic value, which you see is rather stable; the minimum value for exercise is given by this yellow curve, and the optionality is shown here. So in this case, on average, you do not exercise; but of course, if you are in a situation where you are up here, you should have exercised. So, partial conclusions for this first part of my talk. We proposed and implemented a methodology that incorporates these managerial views through the oracle. It allows us to bypass the problem of using risk-neutral simulations: we are not using the risk-neutral measure at all, we do everything in the historical measure; and in the limit you can actually do everything historically, because if there is no hedging you are back to some kind of optimization problem. You can also take into account competition, games; in fact this is the hope, that in our project here we can play a little bit with that, it is one of my expectations. We also implemented deferment options and expansion options, so it actually works. The methodology is somehow model-free, in the sense that it does not really depend on which model you put underneath (of course you need some Markovian structure behind your process), and in principle you can do it even for more general risk measures; in fact we have implemented that, though not in this paper. Okay, so now further developments. In the thesis of my former student Felipe Macias, as I said, we used a lot of risk measures instead of the variance, and we checked that these actually give good results. There are lots of theoretical aspects that we have not touched and that would be very nice topics, for example proving convergence, things like that; so there are lots of questions there to be studied. The issue of calibration, and of how the scenarios are generated, is also very important. The search for an optimal basis is pretty much open, as is the high-dimensional case. One issue that actually led to a very interesting discussion with Emmanuel Gobet, and then to our first work together, is how to handle this scarcity of data. You may remember that I said we usually have very few simulations; so how do we handle this problem of a small number of simulations in such high dimension? What we did is develop what we call non-intrusive Monte Carlo methods. And what is that? Basically, we take the samples from the market and we
make some assumptions on the model, assumptions that we believe are reasonable, and based on those assumptions we do a resampling, with a stratified technology that was already developed by Gobet and Turkedjiev, and it actually gives very nice results. So this is what I am going to describe now. Is there a question? Okay, good, very good question; I am going to repeat it because it is important. The question is basically the following: suppose that instead of the variance I use other risk measures. Well, it depends on the risk measure. If you get something with tremendous computational complexity, you are stuck; but if you take, for example, average value-at-risk, value-at-risk, a one-sided variance and things like that, then in such cases you can still use the minimization, it is basically a quadratic problem, and provided you stay within the quadratic programming framework, everything I said actually works for those risk measures. If you give me some completely exotic risk measure, then probably I cannot do anything; but within the quadratic framework, and within the reasonable risk measures people work with, there are tricks. It is not totally obvious to implement: if you actually do it numerically you see there are issues, and you have to make sure you do some kind of regularization and so on, so it is not something where you just press solve and it goes through. But yes, your question is very good. Any other questions? Okay, so, the general dynamic programming problem. This is now the work with Emmanuel Gobet and Gang Liu. We are in a very general framework where we have some general dynamic programming problem; we are again going backwards and taking expected values of some function g. The Y_i is this expected value (I changed the notation a little bit; my X is still my asset prices), and we want to estimate this function y(x), where X is a Markov process with certain properties. I am not going to give the details, please refer to the paper. However, this is related to nonlinear PDEs, as we all know through the work of Pardoux and those people; it is also related to optimal stopping problems and BSDEs. So this is a very general problem with important applications, and we are interested in situations where some of the parameters of the model are not completely known: we have observations of the trajectories, but we do not know the parameters completely. In principle, of course, what you can do is take your statistical toolbox, do some parameter estimation, run millions and trillions of simulations and do the usual Monte Carlo things; but we are trying to bypass this estimation part. This is of course connected to applications in inverse problems, finance, biomathematics and so on. Okay, so examples: optimal stopping. We are trying to find the essential sup over stopping times of an expected value; you do the usual thing, and it is connected to the BSDE that I just put here on the board, which I am not going to detail. It is connected also to this PDE, assuming that your underlying process is Brownian motion; if not, of course, you have to put here the standard sigma sigma-transpose terms. So you reduce this problem to computing expectations again; so again we have this setup. The usual approach is: you produce lots of simulations of the variable X, you come backwards and you compute this expected value, using, as I said, a large number of
simulations of the process X, call them X^i. You then compute this estimator, and you do this regression on some statistical dictionary. Now, as I said, our problem is: what if the models for X are not totally known and we only have a small set of historical data? And the answer, a partial answer of course, because a general answer is impossible, is path reconstruction: resampling and stratification. Just a quick description of the stratified resampling. Plamen Turkedjiev gave a very nice talk a few weeks ago about this; I think it is recorded, so I refer you to that talk. But basically you divide your space into strata and you introduce probability measures so that you can sample on these different strata; he explains why the Laplace, the Pareto and certain other distributions are very useful for that, and uses them as the sampling measures. So now you work on these different strata, and that cuts down the complexity of the problem severely. And now we use the observations that we had, and we make our assumptions, which are quite specific here: we can somehow extract the noise from the data, and we can do such noise extraction by means of certain functions that we know (I will give you examples in a few seconds), and then we resample to simulate these trajectories using the extracted noise. Note that to apply this technology you need less information than the full detail of the underlying model; we are just using a mix of information about the model and the data that you have. Okay, examples. Let me give you a concrete example to fix the ideas, and of course we start with the simplest one. Suppose you have a usual Gaussian process: x_0 plus the integral from 0 to t of mu_s ds, plus the integral from 0 to t of sigma_s dW_s. We then consider the increments (I am using a slight abuse of notation here) from time i to time i plus 1; you look at that, and the function theta that I was talking about, the one that reproduces your path from the noise, is just this function here. From that function I can actually regenerate my path: if I had lots of u's, I could regenerate it. Now, if I know that my u comes from Brownian motion, I have lots of symmetries and I can resample my path space from the data. So we can actually resample it in different parts of the space using these strata, and that is a way of doing it. Well, if you know how to handle arithmetic Brownian motion, you should know how to handle geometric Brownian motion, and that is what we show here. Same story: instead of working with the differences x_{i+1} minus x_i, we take log differences. And to a certain extent we can also handle Ornstein-Uhlenbeck; in the case of a matrix Ornstein-Uhlenbeck we actually have to know the matrix A, but that is all we need to know, and that is all we need in order to do this resampling. So, the basic idea. I like this pictorial description, especially because it was very hard to draw with my available software. Suppose you have some actual observed samples; here is, for example, one process, and another process here, that you observe, so this is real data. What we are doing now is using the data, through the function theta that we assume we have within our model, to resample other parts of the space, and with that we get more information about the full space and therefore we get stable results. By the way, these are all proved results; there are true error estimates, under assumptions, and so on; I am just giving you the idea. In the process, of course, you use
ordinary least squares, but in principle you can use other methodologies. So we basically project on a basis given by a certain dictionary: we have our samples, we have our function, and we apply ordinary least squares. So basically this is the resampling method. We go backwards; we suppose, as I said, that we have an estimation going from y_{i+1} to y_i; in each stratum we sample according to our law nu_k, where k is the index of the stratum; we construct the sample paths, we do the ordinary least squares, we may have to do some thresholding for stability and for technical reasons, and we get a good estimate for the problem. So the final algorithm is: take the sample of i.i.d. starting points, reconstruct the learning paths, compute the ordinary least squares, do the thresholding, and get an estimation of our value y_i, which in turn is the solution of your dynamic programming problem. The cost is N squared times M times K; we cannot bypass that, but our issue here is not really cost. Our issue is that we get very good estimates, and I will show you the error at the end: we can control the error in a very good way. This can also be parallelized. Just to give you a flavor, here is a numerical example from an equation that does not come from math finance (well, you can put it in math finance if you wish), it comes from mathematical biology: the FKPP equation, the Fisher-Kolmogorov-Petrovsky-Piskunov equation. In this equation, which I am treating backwards, you have the nonlinear term, and this is of course related to branching processes and all that. It turns out that this equation has explicit, well-known solutions, which we can use to check the validity of our computations, and we can actually solve it by this method. So here on the left, what I am showing you is a total of 400 simulations that were used as root paths; from there we use the different strata, and we manage to get the solution on the left, compared to the actual solution on the right that is shown there. And just to address the issue of the error: the quadratic error on each y_i splits into an approximation error, plus a statistical error, plus an independence error, and all these errors we can bound in very explicit ways; I have more details if you wish. So we basically designed this non-intrusive resampling method, we combine it with Monte Carlo schemes, we can solve certain interesting equations, and we illustrate this on several examples. Here is the list of collaborators: Edgardo Brigatti, Max Oliveira de Souza, Felipe Macias, Emmanuel Gobet and Gang Liu, whom I am sure most of you know. And I thank you very much for your attention. Any more questions? Thank you.
There are assumptions on theta, things like Lipschitz continuity, I mean technical assumptions; but the key assumption we have on theta is that we know theta_{i,j}. Note that knowing theta_{i,j} in this case means knowing the rule that takes x and adds the increments; that is all we need to know. I am not assuming I know the mu, I am not assuming I know the sigma, because all I need are the increments here. Then here again, in this case, and actually your question is very good, in this case unfortunately we already need to know the A; without the A we cannot do much. And of course, if you go to more complicated models you might need to know more parameters, but at least we reduced what is needed: for example, we do not need to know the correlation matrix, which is already a good help; but the mean reversion rate, unfortunately, in this case we do need to know. And if you go to more complicated models, we can actually add jumps here too, so that is another possibility. So there are things we can do and things we cannot do, but your question is very well taken, it is actually a question many people ask when we show the method, and yes, there are limitations. Thank you. Are there any more questions? Okay, let us thank our speaker again. Thank you.
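As a companion to the non-intrusive resampling procedure described in the last part of the talk, here is a minimal toy sketch of the simplest case discussed: an arithmetic-Brownian-type model, where the reconstruction map theta just adds increments to a starting point. Everything concrete in it (the stratification of the starting points, the bootstrap choice for resampling the extracted noise, the parameters) is an illustrative assumption, not the scheme of the Gobet-Liu paper.

```python
# Toy sketch of the "non-intrusive" resampling idea for an arithmetic-Brownian-type model:
# extract noise increments u_i = x_{i+1} - x_i from a few observed paths, then rebuild
# synthetic learning paths in chosen strata of starting points via
#   theta(x0, u) = x0 + cumulative sum of resampled increments.
import numpy as np

rng = np.random.default_rng(1)

# --- a small set of "observed" historical paths (here simulated, as a stand-in) ---
n_obs, n_steps, dt = 20, 100, 1.0 / 100
mu_true, sigma_true = 0.3, 0.5            # unknown to the method; only used to fake data
obs = np.cumsum(mu_true * dt + sigma_true * np.sqrt(dt) * rng.standard_normal((n_obs, n_steps)),
                axis=1)
obs = np.hstack([np.zeros((n_obs, 1)), obs])

# --- step 1: extract the noise through theta^{-1}: increments of the observed paths ---
increments = np.diff(obs, axis=1).ravel()  # pooled, exploiting the i.i.d.-increment symmetry

# --- step 2: stratify the starting points and rebuild paths with resampled noise ---
strata = np.linspace(-2.0, 2.0, 9)         # stratum edges for x0 (an arbitrary choice)
paths_per_stratum = 50

def rebuild_paths(x0_samples: np.ndarray) -> np.ndarray:
    """theta(x0, u): add bootstrap-resampled historical increments to each start point."""
    u = rng.choice(increments, size=(len(x0_samples), n_steps), replace=True)
    return x0_samples[:, None] + np.concatenate(
        [np.zeros((len(x0_samples), 1)), np.cumsum(u, axis=1)], axis=1)

synthetic = []
for lo, hi in zip(strata[:-1], strata[1:]):
    x0 = rng.uniform(lo, hi, size=paths_per_stratum)   # sampling law nu_k on stratum k
    synthetic.append(rebuild_paths(x0))
synthetic = np.vstack(synthetic)
print("synthetic learning paths:", synthetic.shape)    # input for the backward OLS step
```

The backward ordinary-least-squares regression with thresholding would then be run on these synthetic paths, stratum by stratum, as in the algorithm summarized in the talk.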
Industrial strategic decisions have evolved tremendously in the last decades towards a higher degree of quantitative analysis. Such decisions require taking into account a large number of uncertain variables and volatile scenarios, much like financial market investments. Furthermore, they can be evaluated by comparing to portfolios of investments in financial assets such as in stocks, derivatives and commodity futures. This revolution led to the development of a new field of managerial science known as Real Options. The use of Real Option techniques incorporates also the value of flexibility and gives a broader view of many business decisions that brings in techniques from quantitative finance and risk management. Such techniques are now part of the decision making process of many corporations and require a substantial amount of mathematical background. Yet, there has been substantial debate concerning the use of risk neutral pricing and hedging arguments to the context of project evaluation. We discuss some alternatives to risk neutral pricing that could be suitable to evaluation of projects in a realistic context with special attention to projects dependent on commodities and non-hedgeable uncertainties. More precisely, we make use of a variant of the hedged Monte-Carlo method of Potters, Bouchaud and Sestovic to tackle strategic decisions. Furthermore, we extend this to different investor risk profiles. This is joint work with Edgardo Brigatti, Felipe Macias, and Max O. de Souza. If time allows we shall also discuss the situation when the historical data for the project evaluation is very limited and we can make use of certain symmetries of the problem to perform (with good estimates) a nonintrusive stratified resampling of the data. This is joint work with E. Gobet and G. Liu.
10.5446/57406 (DOI)
Thanks. I would like to thank Andrea, Cyril, Jean-François and the other organizers for the invitation, and for accepting to move my talk in the schedule, because I was supposed to speak tomorrow; which is also why I apologize to you, since you will have a second long talk today. So my talk will be somewhat different from many of the other ones we have had here at CEMRACS, which have been focusing on probabilistic numerical methods for high-dimensional problems or for their mean-field limits. I will rather focus on the problem of finding accurate approximate solutions for low-dimensional nonlinear problems, and precisely the nonlinear PDEs that appear in optimal stopping. So it will be somewhat orthogonal to the direction that many talks have taken until now. My motivation comes from mathematical finance, so this first part will perhaps be obvious for some of you, but hopefully not for others. The motivation is the following. In mathematical finance we have a stochastic process that models a financial asset, a stock, let's say the stock of Google or Société Générale, for example, where my co-author works. So we say that this is a price, expressed in euros, of the Société Générale stock, and we model it with an SDE. W_t is a Brownian motion in dimension one. And by the way, what do I mean by low dimension? A colleague of mine once said that high dimension starts from d equal to 2, so by contrast low dimension will be d equal to 1. So: a Brownian motion in dimension one, and sigma is a function of time and space taking values in some bounded interval, between a lower bound sigma-underbar and an upper bound sigma-bar, plus some standard Lipschitz conditions in space, uniformly in time. So this is our model. What do we have on the market? On the market we have some financial products that are called European options. What is that? Well, we fix a future time horizon (you can think of T equal to one year, or one month) and we fix a certain positive constant K. This is a price, so it is expressed in euros. And we can buy a contract on the market that gives us an amount of money in the future: the maximum of K minus the future value of S, and zero. For those who have never seen this, you can think of it as an insurance product: you are protecting yourself against a drop in the value of S. You think that S in the future might be very low, we go, say, from one hundred to fifty, and if you have bought this contract you will get some money; otherwise, if S stays high, you will get zero. So you are protecting yourself against drops in the value of the asset. This we can rewrite in a simple way as the positive part of K minus S_T. Good. So the theory for pricing and managing this kind of product tells us the following: I am buying the right to receive some money in the future, at time capital T, so I have to pay something now in order to have this right. The price of this contract now, at time zero, is given by an expectation: the expectation of exponential of minus r T times the positive part of K minus X_T, where X is another process that satisfies this SDE, and where small r is a positive constant that I am introducing, representing the interest rate you get if you invest money in a bank account. So we have to compute this; sorry, and we also have an initial condition here, which is the initial value S_0 that we observe over there.
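As a small illustration of the pricing expectation just written, before moving to the American case, here is a minimal Monte Carlo sketch for the European put under a local-volatility diffusion. The particular sigma function, the multiplicative form of the diffusion coefficient, the Euler discretization and the parameters are assumptions chosen for the example.

```python
# Minimal Monte Carlo sketch for the European put price  E[ e^{-rT} (K - X_T)^+ ],
# where dX_t = r X_t dt + sigma(t, X_t) X_t dW_t is discretized with an Euler scheme.
# The local-volatility function below is purely illustrative (bounded between 0.1 and 0.4).
import numpy as np

rng = np.random.default_rng(0)

def sigma(t: float, x: np.ndarray) -> np.ndarray:
    return np.clip(0.2 + 0.05 * np.sin(t) + 0.1 * np.exp(-x / 100.0), 0.1, 0.4)

def euro_put_mc(x0=100.0, K=100.0, r=0.02, T=1.0, n_steps=200, n_paths=200_000):
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for i in range(n_steps):
        t = i * dt
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + r * x * dt + sigma(t, x) * x * dw        # Euler step under the pricing measure
    payoff = np.maximum(K - x, 0.0)
    price = np.exp(-r * T) * payoff.mean()
    stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

print(euro_put_mc())   # (price estimate, Monte Carlo standard error)
```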
So the price is given by an expectation, and the numerical problem we have is to compute an expectation of a functional of a stochastic process that solves an SDE, for which you can use your favourite numerical method. You can use Monte Carlo; in some situations you can maybe compute it explicitly, since it is an integral against the law of X; or you can solve the linear PDE satisfied by this expectation. So you can choose your numerical method. And maybe at this level I would like to point out the following (I am not managing these blackboards very well). Call this function P of T and K; there are two parameters, T and K; P, you can think, stands for price, and actually this is a put option, so it is P for put; and it will depend on sigma as well, on the sigma that is inside my process. I would like to point out something that is known: this function solves a certain PDE, a parabolic PDE, with an initial condition P at 0 and K equal to what you obtain when you inject T equal to zero there. So we have a forward PDE, which is nothing but the Fokker-Planck forward PDE for this process, written in terms of the function P. There are many ways of proving it; you can think of it as the Fokker-Planck equation rewritten in terms of P. Good, so you can tackle the problem from whichever side you want, and everything here is linear: linear expectations, linear PDEs. Okay, then we come to another type of product, the American options, which I am going to focus on during the talk. We still fix a future time and some positive price K, and this American option is a financial contract that still gives you some money in the future, but you do not necessarily look at the difference K minus S at the final time: you can look at it at any time tau smaller than capital T. At any time tau you can stop the game and say: now I want to get this amount of money. Which means that you exercise your option at tau. So this is what you get if you exercise your right at a time tau smaller than capital T. Again, the theory for pricing, which I am not going to recall in detail, tells us that the price I have to pay today in order to have this right in the future is not an expectation but a supremum of expectations. It is, in the end, the value function of an optimal stopping problem: the supremum of these expectations over what? Over all tau belonging to this set, capital Tau, denoted in the standard way, which is the set of stopping times of the augmented filtration of the Brownian motion W, with values in the interval from zero to capital T. So we have this. Yes? A question from the audience: am I perhaps using a different sigma from one place to the other, since I changed from the process S in the motivation to the process X here; is it really the same function, or am I just changing variables?
The variable is just a variable; I am changing variables, yes, but you agree that the price of this option is a number? That I can write in this way? Yes, that is possible, comes the reply, but changing the process is a different matter. Okay, so we can discuss that later. So you agree with this identity and with this sigma? I agree with the formula. Okay. But the sigma in the X is the same as the sigma in the S; yes, it is the same, this is what I mean, it is really the same. But the sigma on the P is just a symbol, a label on P of T and K? Yes, yes, it is a symbol. So this is an expectation that depends on a lot of things: on T, on K, on r, which I do not write, and it will also depend on the function sigma that I put there. Okay. So, Jorge, there is something I missed in what you said, I believe, but okay, if it is fine to go on like that, I would be happy to. So, yes, now we have this problem to solve. Let me make a small remark, en passant, which I will come back to afterwards. Take, in that problem, tau equal to capital T. What do we get? We get that this value, this price, which I will now denote A of T and K, A for American, with the function sigma still there, is larger than this expectation computed at capital T; so it is larger than what I called P, the price of the European option. In other words it is P plus something, plus a certain amount that is positive. This amount is what we call the early exercise premium in mathematical finance: early exercise premium because it is what you have to pay in addition to the price of the European option, the extra price you pay for the additional right of exercising your option at any stopping time before capital T. I will come back to this formulation later in the talk. So now, the problem we have, if we want to apply the model to market data, is to solve this problem over there numerically, which is more complicated than just computing an expectation. This is the moment when we start wondering about efficient numerical valuation methods for optimal stopping problems. So, hopefully quickly, I would like to recall the standard classes of methods to solve this optimal stopping problem. From now on I will denote by u of t and x, for t before capital T and x a positive value, the value function of this optimal stopping problem: the price of the American option if I enter the game at time small t. It is the supremum of these expectations, where the supremum is now taken over all stopping times tau with values in the interval from small t to capital T; so we use this notation, which should by now be clear: the supremum over all stopping times in the filtration of W that take values in the interval from small t to capital T. And we have here the standard notation again: X_s^{t,x} is the solution of the SDE starting at small x at time t. So this value function satisfies what we call a backward dynamic programming principle, which is suitable for a time discretization such that we can actually simulate the random variables. So there is a backward dynamic programming principle.
Which, if we discretize time, goes as follows: take n dates between 0 and capital T, regularly spaced, t_k equal to k times T over n, so I am taking a fixed time grid with step T over n. I can consider, and I know how to simulate, the Euler scheme for the process X. So what I can construct is a discrete-time process U, starting from the end, and iterating backward from time t_{i+1} to time t_i with this principle: the value of my option at time t_i is the maximum between what I gain if I stop at t_i and the conditional expectation of what I get if I wait one more step (and if I wait one more step, my value is U at t_{i+1}), a conditional expectation with respect to my process at time t_i. I am possibly forgetting the exponential discount factor here, sorry. I iterate from time n down to time 0, and I use the value U_0 that I obtain as an approximation of the value function at time 0. This now opens the way to Monte Carlo methods, right? Because the problem I am left with is to estimate this conditional expectation, to which I can apply my favourite regression method: using i.i.d. simulations of the Euler scheme, I choose a basis of functions on which to project, in order to replace this conditional expectation with a finite-dimensional projection. We have heard about this in several presentations during this CEMRACS, for example from Plamen Turkedjiev. So this opens the way to Monte Carlo methods, which are potentially applicable in higher dimension, meaning that you have several processes, X^1, X^2, up to X^d. But as I mentioned at the beginning, what I am interested in now is the analytical counterpart of this problem in low dimension, and this is a free boundary PDE, which is written in the following way. Let me notice, first of all, that this value function over there is always larger than the positive part of K minus x: just take tau equal to small t. So it is always greater than or equal, and it can be strictly greater. It therefore makes sense to define this object, small x of t, a function of time, which is the infimum of the points x at which the value function is strictly larger than this fixed payoff function. If we do this, the picture looks like that: this is time, this is capital T, the strike is here, and this function x of t has this behaviour. So this is small x of t, and it divides the time-space domain into two regions, one in which my value function is strictly larger than the payoff, and one in which it is equal. We can call this one C and the other one D. For the notation: capital X is the process, small x of t is the boundary. If you have complaints about this notation, I would be happy to forward them to my co-author, who chose it. So if we do this, then the statement is that u and this function of time, x, are the unique solution of the following problem. In this region, what this identity is telling us is that it is optimal to stop immediately, right?
If we look at this optimal stopping problem, u of t and x equal to K minus x means that tau equal to small t is an optimal stopping time, right? So here it is optimal to stop immediately, and here it is not; here it is optimal to wait a little before exercising. So it makes sense that in this region the value function satisfies the standard parabolic valuation PDE, the region we call C, while on the complement we have this identity. Oh, the other way around, sorry. Correct. And this one we can call D bar, the closure of D. The value function needs to be larger than the fixed payoff, and then we have the terminal condition, and maybe a boundary condition at infinity: this function has to tend to zero at infinity. So I am looking at the couple, value function and boundary, as the solution of this problem, which is what we call a free boundary PDE: we are looking for the solution and for the boundary at the same time. By the way, L here (you can already imagine) is the infinitesimal generator of the process X; so this would be r x times the first derivative plus one half times the second-order term. This x, the boundary between the two regions, is what we call the exercise boundary of the American option. Why? Because the stopping time tau star, defined as the infimum of the times at which the process goes below the level of this boundary, is optimal for the stopping problem defining the value function. So the process starts from here, it diffuses, and the first time it goes below this boundary is the optimal time to stop. Now, if we ask about the properties of this function, a lot is known, because it has been studied in several works. In this setting, we know that this function is monotone, that it is C^1 on the open interval from 0 to capital T, and that its limit at capital T is equal to K; so it looks like the function I have drawn over there. By the way, the relation of this kind of free boundary system to optimal stopping goes back a long way: it goes back to the work of McKean in 1965, and the properties of these exercise boundaries have then been studied by many authors from the seventies until today, whom I am not going to cite one by one, so that I do not forget anyone. So those are the properties that we know. What I would like to use now is the fact that for this free boundary problem we have an integral representation. Yes, yes, the derivative at the boundary needs to be equal to minus one; that is something I forgot, good point, I will write it here. In addition, you want the derivative in space of this function, which we can write like that, to have limit from above equal to minus one. From below, in this region, we have u of t and x equal to the positive part of K minus x, which is actually equal to K minus x, because we are below K, so K minus x is positive; and the derivative of K minus x is minus one. Indeed, a property that the function u satisfies, and that we need to require in this system, is that the first derivative in space is continuous across the boundary. Yes, thank you.
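Since the free boundary system was described piece by piece on the board, here is a compact written form of it as I read it from the talk. The precise generator, with the multiplicative volatility sigma(t,x) x, is an assumption on how the local-volatility model was written.

```latex
% Free boundary formulation of the American put value u and exercise boundary x(t),
% with the generator written here (as an assumption) as
%   L v = r \xi \,\partial_\xi v + \tfrac{1}{2}\sigma(t,\xi)^2 \xi^2 \,\partial_{\xi\xi} v .
\[
\begin{aligned}
& \partial_t u + L u - r u = 0 , && \text{on } \mathcal{C} = \{(t,\xi): \xi > x(t)\},\\
& u(t,\xi) = (K - \xi)^+ ,      && \text{on } \bar{\mathcal{D}} = \{(t,\xi): \xi \le x(t)\},\\
& u(t,\xi) \ge (K - \xi)^+     && \text{everywhere},\\
& u(T,\xi) = (K - \xi)^+ , \qquad \lim_{\xi \to \infty} u(t,\xi) = 0 ,\\
& \lim_{\xi \downarrow x(t)} \partial_\xi u(t,\xi) = -1 && \text{(smooth fit: continuity of } \partial_\xi u \text{ across the boundary)} .
\end{aligned}
\]
```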
So let us derive what I am calling the integral representation. We can derive it in this way. We just wrote that on the set C the function u satisfies this equation; and on D, which is an open set by what I have just said about the function x, the same expression will be equal to something else. On the set D, the function u is equal to K minus x, without the positive part, because this quantity is positive. So let us apply this operator to K minus x: the time derivative gives zero; in L there is a second derivative, which hits a linear function, so zero; and the only part I am left with is r x times the derivative of K minus x, which gives minus r x, plus the term minus r times K minus x, which gives minus r K plus r x. So what we get (this is the most complicated computation of the whole talk) is minus r x plus r x minus r K: a negative constant, minus r K, on the region D. Good. So we can write the equation satisfied by u in one line: this expression is equal to zero on C, and on D it is equal to minus r K. Taking it to the other side, I can write it as plus r K times an indicator, the indicator of x being in D, that is, x smaller than x of t. So I have written the two conditions in a single line, and we can see this as a linear PDE for u with a source term, a source term which depends on t and x and which is also a function of my exercise boundary. So we can see this equation for the couple u, x as a nonlinear equation, because the function x of t sits inside the indicator. In any case, we get a Feynman-Kac representation for u from this PDE, and it goes like this: the expectation of the terminal value, K minus X, plus the integral of the source term, r K times the expectation of f of X_s, where, according to my f, this is the indicator of X_s being smaller than the boundary; and I have to put the initial condition here. We can rewrite this in a shorter way that I will use: the first piece is exactly the price of the European option, what I call P of capital T and K (and now I am changing notation a little bit, I do not write the sigma anymore, I write the initial condition t, x over there), and then I have plus r K times the integral of the CDF of the process, which I will write like that, introducing a somewhat standard notation for the CDF: my F is the probability that X_s is smaller than y, if I start from x at time t. So what we have just written, what we have identified, is the early exercise premium that I introduced before: we are writing the price of the American option as the price of the European option plus a positive correction, the early exercise premium, and this is why we call this equation the early exercise premium formula. So why is this interesting? Because from this we can get rid of u and obtain an integral equation for the exercise boundary x. Just inject the fact that along this curve the function u, which is continuous on my space-time domain, is equal to K minus x of t. If we do that, and if we have colours, we will obtain here K minus x of t, and we will get x of t here, x of t here, and x of s there, okay?
So we have this equation, plus the terminal condition x at capital T equal to K. This gives me a way of numerically computing the exercise boundary: I start from the end, x of capital T equal to K, and then I discretize time exactly as before, going from time t_n to time t_{n-1} and solving this nonlinear equation for x at t_{n-1}; and of course we also have to approximate this integral in some way. So this is something we can do, provided we know how to compute P and F, this expectation and this CDF, which we know explicitly only in specific cases, not in general. So why is this interesting for us? If we compute this exercise boundary, then we can re-inject it into the equation. Suppose we have computed x of t; we go back to the equation in white, u of t, x equal to P of t, x plus the term with F of t, x and x of s inside. If I know x of s, and I can compute F, what is left to do is to compute this one-dimensional integral with respect to time numerically, with my favourite quadrature scheme, which is a very simple task. So we have reduced the problem of solving the optimal stopping problem to computing a one-dimensional integral with respect to time, provided we know these functions. So, back to my issue: this expectation and this CDF are known explicitly only in specific situations, for example when my process X is a geometric Brownian motion, in which case they can be written in terms of the Gaussian CDF; but I want to do it in my more general setting, with a generic function sigma of x. And this is the moment when we can call upon asymptotic calculus for diffusion processes. So, last part, with the time left: what can we work out for the asymptotics of the problem? Asymptotics in which sense? Asymptotics when time to maturity goes to zero. For this I will consider a slightly simpler setting for the presentation, namely a function sigma that does not depend on time, only sigma of x, which is of course a limitation we do not want to have in practice, but I will consider this setting for the presentation. What we know about this exercise boundary x is that it tends to K. What we want is more: we want a whole asymptotic expansion of x of t as t goes to capital T. Which means that for this function I drew, we want to know how it behaves here; we want to refine the known asymptotic behaviour. Let me notice that X_t is a time-homogeneous Markov process, so I can reverse time and denote by u bar of tau and x the value function in terms of the time to maturity: this is u of capital T minus tau at x, for any capital T. The same for the exercise boundary: we denote by x bar of tau the exercise boundary as a function of the time to maturity. What we know now is that x bar of tau tends to K as tau tends to zero. Tau is of course no longer a stopping time here; we are just sending the time to maturity to zero.
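To make the two-step recipe above concrete (first march the boundary backwards from x(T) = K through the integral equation, then plug it into the early exercise premium formula and do one quadrature in time), here is a minimal sketch. Since closed forms for P and F are only available in special cases, the sketch uses the geometric Brownian motion (Black-Scholes) case as a stand-in; the damped fixed-point update inside each time step, the right-endpoint quadrature and all parameters are illustrative choices, not the scheme of the paper.

```python
# Minimal sketch: backward computation of the American-put exercise boundary from the
# early exercise premium formula, then the price by one quadrature in time.
# Stand-in model: Black-Scholes, where the European put P and the transition CDF F
# are known in closed form.
import numpy as np
from math import log, sqrt, exp, erf

def N(z):                                   # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def put_bs(x, K, r, sig, tau):              # European put with time to maturity tau
    if tau <= 0.0:
        return max(K - x, 0.0)
    d1 = (log(x / K) + (r + 0.5 * sig**2) * tau) / (sig * sqrt(tau))
    d2 = d1 - sig * sqrt(tau)
    return K * exp(-r * tau) * N(-d2) - x * N(-d1)

def F(x, y, r, sig, dt):                    # P( X_{t+dt} <= y | X_t = x ) under GBM
    if dt <= 0.0:
        return 1.0 if x <= y else 0.0
    return N((log(y / x) - (r - 0.5 * sig**2) * dt) / (sig * sqrt(dt)))

def exercise_boundary(K, r, sig, T, n=200, n_iter=50):
    t = np.linspace(0.0, T, n + 1)
    dt = t[1] - t[0]
    b = np.full(n + 1, K)                   # terminal condition x(T) = K
    for j in range(n - 1, -1, -1):          # march backwards in calendar time
        bj = b[j + 1]
        for _ in range(n_iter):             # damped fixed point on  K - b = P + rK * integral
            prem = sum(F(bj, b[k], r, sig, t[k] - t[j]) * dt for k in range(j + 1, n + 1))
            target = K - put_bs(bj, K, r, sig, T - t[j]) - r * K * prem
            bj = 0.5 * bj + 0.5 * target
        b[j] = bj
    return t, b

def american_put(x0, K=100.0, r=0.05, sig=0.3, T=1.0, n=200):
    t, b = exercise_boundary(K, r, sig, T, n)
    dt = t[1] - t[0]
    # re-inject the boundary into the early exercise premium formula: one time quadrature
    premium = r * K * sum(F(x0, b[k], r, sig, t[k]) * dt for k in range(1, n + 1))
    return put_bs(x0, K, r, sig, T) + premium

print(american_put(100.0))                  # American put value at x0 = 100
```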
Okay, so our contribution in this setting, with my collaborator Pierre Henri-Labordère by the way, whom I have not mentioned yet, is done in two steps. Step one is to derive an alternative representation for the exercise boundary, written in terms of the transition density of my process. So, if you forgive me for introducing notation now, I will denote by f of t, x, y (this should be readable) the transition density of the process from the point x to the point y in time t; it is a homogeneous Markov process, so only the time lapse matters, and in symbols this is this law here. The alternative representation we have is the following, which I will write: the transition density of my process from x bar of tau to K is equal to... okay, well, I may have forgotten some exponentials from the Feynman-Kac representation, I believe I forgot an exponential in the integrand, but let us say it is not such a big deal; an integral of the density computed at this point. So what do we have? This exercise boundary here, here, here, and this derivative. So the picture is like that: capital T minus tau. We have an equation that involves this value x bar of tau; we are looking at the transition of the process from x bar of tau to K here, and here we have an integral functional that depends on the transition density of the process along this curve. So you might say: earlier we had this early exercise premium formula written in terms of expectations, a put price and a CDF. We can manipulate it (I could prove this, but I probably won't) into an equivalent representation that makes the density of the process appear. Good. So what did I gain? What we gained is that to this representation we can apply step two, which is to use what we know about the asymptotic behaviour of the density of the process as time goes to zero. So now we open the way to decades of literature on heat kernel expansions in small time. In particular, what we know in this simple one-dimensional case is that the density of the process looks like that: there is an exponential term, which depends on d of x, y squared over tau. For Brownian motion this d would simply be x minus y, and d squared would be x minus y squared. Sorry, I am not finished, but let me point out what d is. This d squared is, in a fancy way, the squared geodesic distance from x to y in the Riemannian metric induced by the function sigma. Which means what? It is a Riemannian distance: it is the infimum of the lengths of curves, given by the integral from 0 to 1 of phi dot of t divided by sigma of phi of t, squared, the infimum being computed over all the phi in the Cameron-Martin space of Brownian motion, H^1: absolutely continuous, with square-integrable first derivative, connecting x to y. So this really is a distance: you look at all the curves that go from x to y and you take the one with minimal length according to this metric. But then, in a much simpler way, in this one-dimensional setting this function d of x, y is actually just equal to the integral from x to y of 1 over sigma, which is an easier way of looking at it.
So this function d we know, and then there is something in front: a one-dimensional diffusion has a time scale that is the square root of tau, and then we have some coefficient in front, a certain coefficient depending on sigma, something like sigma at x divided by sigma at y to the three halves, times one plus big O of tau. So this is an exact expansion for the density of the process, and now what you do is inject it into my new integral representation. You inject this expression, and the problem you are left with is working out the asymptotic expansion of x bar as tau goes to zero from the explicit asymptotic expansion of the density f. So you are left with a problem of asymptotic analysis, which is maybe not a piece of cake; in particular, you need the Laplace method to study the asymptotics of this integral. But you are left with that, and, as a conclusion, you obtain an asymptotic expansion of this boundary function, which you can push a priori to arbitrary order, and which refines the expansions for the exercise boundary known in the literature. So the message is: you obtain an explicit approximation for this function, you re-inject it into the early premium formula that we had before, and you get a semi-closed approximation, as a final result, for the price of the American option, that is, for the solution of this optimal stopping problem, and it is of course very accurate. The numerical results you can look at in my slides, which will be online; I wanted to show them, but I think it is better to stop, because it is eleven o'clock. Okay, thank you very much. Thank you very much, Stefano. Is there any question? Can you just recall where your last representation comes from? Yes, this one. In a nutshell, you basically start from the original early premium formula for this function, and you make a change of time. This d will pop up from Dupire's PDE; it is the second-order term in Dupire's PDE. And what you are left with is still the integral term: when you differentiate with respect to tau, the density comes from the derivative of the CDF, and then you have this derivative that appears. So this step is simple; what is more involved is the second one. Yes, please. You develop this expansion and you only keep the big O of tau, but of course you could write a complete series for this expansion; if you inject that series into the integral formula, could you handle it, maybe taking more terms? Potentially yes. But what we noticed is that the asymptotics of the exercise boundary, you know, this function behaves like K minus sigma of K times the square root of tau log of gamma over tau; so it has an almost square-root behavior in tau, times a log of one over tau.
This is what pops up, and actually these further refinements only depend on what is in the exponential term. They do not depend on the terms coming from this big O of tau: heuristically, because of the square root of tau scaling, those terms do not count, so I do not need them. Any more questions? You have a kind of integral equation for your boundary; do you apply some method to solve this equation, for example a fixed-point iteration, something like this? Of course, as you say, we could apply methods for this integral equation, for this one or for the original one, if we know the density. But our aim was to obtain explicit formulas for the exercise boundary, in the sense that at the end you plug in this formula and globally what you have is an almost explicit formula for an American option. The only numerical task left is to compute this integral with respect to time. What we have in mind is calibration purposes, which I have not mentioned here; as usual, you want semi-closed formulas to have faster computations, and so our aim was really to work out an explicit formula rather than a numerical solution of this equation. I don't know if this answers your question. Yes, but this holds only up to this error? Yes. So if your options are not of short maturity, the error might be sizable? Of course, yes: if the options are not short maturity, the quality of the approximation will worsen. But these options typically have maturities no longer than one and a half or two years, for which we tested our approximation, and it works very well. You have discrete exercise times... You mean Bermudan options? Okay, fair enough. Then either we just treat our Bermudans as American options or, well, yes, of course, for discrete-time American options this would change. Okay, let's thank Stefano once again.
The valuation of American options (a widespread type of financial contract) requires the numerical solution of an optimal stopping problem. Numerical methods for such problems have been widely investigated. Monte-Carlo methods are based on the implementation of dynamic programming principles coupled with regression techniques. In lower dimension, one can choose to tackle the related free boundary PDE with deterministic schemes. Pricing of American options will therefore be inevitably heavier than the one of European options, which only requires the computation of a (linear) expectation. The calibration (fitting) of a stochastic model to market quotes for American options is therefore an a priori demanding task. Yet, often this cannot be avoided: on exchange markets one is typically provided only with market quotes for American options on single stocks (as opposed to large stock indexes - e.g. S&P500 - for which large amounts of liquid European options are typically available). In this talk, we show how one can derive (approximate, but accurate enough) explicit formulas - therefore replacing other numerical methods, at least in a low-dimensional case - based on asymptotic calculus for diffusions. More precisely: based on a suitable representation of the PDE free boundary, we derive an approximation of this boundary close to final time that refines the expansions known so far in the literature. Via the early premium formula, this allows one to derive semi-closed expressions for the price of the American put/call. The final product is a calibration recipe for Dupire's local volatility to American option data. Based on joint work with Pierre Henry-Labordère.
10.5446/57407 (DOI)
Thank you. So thank you for the introduction, and I would like to thank the organizers of this seminar for giving me the opportunity to give a talk today, and of course the organizers for giving us the opportunity to work on these very nice projects. So indeed I am going to talk about some mean field control problems with congestion effects, and this is joint work with Yves Achdou. First of all, I will define mean field control in a very general way and compare it with mean field games, so that if you know mean field games you will not be confused by what we do in the next sections. After that I will present mostly two classes of models: a first class which is local, and a second class which is non-local; I will explain what this means. Actually the second one was done earlier, but I feel that it is simpler to present the framework in the local model, so I will start with that one, and then I will briefly conclude. For each model I will give some existence and uniqueness results, and I will show some numerical results based on two different methods. So let me start with the definition of mean field control in a very general way. What we call mean field control, or control of McKean-Vlasov dynamics, is defined as follows. We want to minimize some functional J of a control v; in other words, we want to find an optimal control v hat such that J of v hat is smaller than J of v for any v. So this is just a control problem, and the functional J that we consider is the expectation of the integral, between zero and some finite time horizon capital T, of a running cost that I will denote by L, plus a final cost that I will denote by H. Note that the running cost and the final cost may depend on the position of some random variable capital X controlled by v (I am going to define the dynamics of capital X), and they also depend on this term in red, which is the distribution of X at time t. I denote this dependence with brackets because it might be non-local: L and H may depend on the distribution of X as a whole; it could be through a moment, or something more complicated. The dynamics of X is given by a McKean-Vlasov equation controlled by v: the drift depends on the position, on the control, and also on the distribution of X at time t, and the initial distribution m0 is given. As for the noise, the volatility sigma is constant here, but it could also be something that depends on the same parameters; for simplicity I keep it constant in this talk. So let us try to compare mean field games and mean field control, in case you are familiar with mean field games. To do that I rewrite the control problem in a slightly more general framework, with a functional J tilde which is now a function of v and of some distribution mu. Actually, not a single distribution but a flow of distributions: for each time I have one distribution mu t. It is exactly the same thing as on the previous slide, except that now I fix the distribution in the brackets to be mu at time t, so mu is a parameter of the problem, somehow. To recover mean field control I just have to prescribe that, instead of mu, I take the distribution of X: this term in red will be the distribution of X controlled by v hat for the left-hand side, or controlled by v for the right-hand side. So it is somehow a fake parameter, because we don't really need it: once I fix the control, I have the distribution of X controlled by v.
Because in the dynamics I also plug this distribution, it is really a McKean-Vlasov dynamics. Now, for the mean field game it is slightly different. When you want to solve a mean field game, you want to find a couple (v hat, mu) such that, for this fixed mu, v hat is optimal; that is the inequality. And then we have two things. The first point is that X should satisfy a dynamics which is not, strictly speaking, a McKean-Vlasov dynamics: it is a dynamics where I plug into the drift the distribution mu prescribed by the problem. But I also require, and this is the second point, that mu coincides with the distribution of X controlled by v hat and parameterized by mu. So it may seem that we are doing the same thing, but actually it is different. The first one, the mean field control problem, is really a control problem, whereas the second one is a fixed point built around a control problem. It turns out that in some cases they are very similar and in some cases they differ; I will show one example at the end of the talk, but apart from that you can almost forget mean field games for this talk. Just to conclude this introduction: these two problems, as I said, are different, but it is not that one is better than the other; they simply model different things. Some typical motivations: for mean field control, at least two. One is that we are looking for a cooperative optimum, where we have an infinite number of agents and all of them use the same control; you have a global planner that gives everyone some control and then everyone uses this control. This could be useful, for instance, in distributed robotics, if you want to plan the behavior of your robots. Another point of view is that you have a single agent whose dynamics (the dX given before) and cost depend on the distribution of this single player; this could be useful in risk management, because you might have a criterion that depends on your variance or on some other moments, so it is not linear in the distribution. Now, for the mean field game problem it is different: we have in mind a Nash equilibrium with an infinite number of players, because this fixed point is really the limit, as the number of players tends to infinity, of a Nash equilibrium for a finite number of players. You fix the behavior of everyone and then you try to find your optimal control, but everyone does the same, so there is a fixed point; and here it is the same thing but in infinite dimension, with a continuum of players. This could be useful in economics or sociology. So in this talk, as I said, you can forget mean field games; we are going to focus on mean field control. I will present two models, or rather two classes of models, for congestion effects, give existence and uniqueness results, and provide some numerical results, and I will focus on the PDE system and the control problem. I will not talk about the stochastic analysis side; for this I refer to the recent book of Carmona and Delarue and to papers of these authors and of many other authors.
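For reference, here is the comparison in symbols, with my own notation reconstructed from the spoken definitions; take the exact argument lists as a sketch rather than a verbatim copy of the slides.

```latex
% Mean field control: a single optimization over controls, with the law of the
% controlled state re-injected into the dynamics and the costs.
\[
\text{(MFC)}\quad \inf_v \; \mathbb{E}\Bigl[\int_0^T L\bigl(X_t^v,[m_t^v],v_t\bigr)\,dt
      + H\bigl(X_T^v,[m_T^v]\bigr)\Bigr],
\qquad dX_t^v = b\bigl(X_t^v,[m_t^v],v_t\bigr)\,dt + \sigma\,dW_t,
\quad m_t^v = \mathcal{L}(X_t^v).
\]
% Mean field game: a fixed point. For a frozen flow mu, solve a standard control
% problem; then require consistency between mu and the law of the optimal state.
\[
\text{(MFG)}\quad \hat v \in \operatorname*{arg\,min}_v \tilde J(v;\mu),
\qquad dX_t = b\bigl(X_t,[\mu_t],\hat v_t\bigr)\,dt + \sigma\,dW_t,
\qquad \mu_t = \mathcal{L}(X_t)\ \ \text{for all } t .
\]
```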
So, first model of congestion; the model is as follows. Let's say you have an infinity of indistinguishable agents on the torus in dimension d, so with periodic boundary conditions, for simplicity, and here nu is a positive parameter; in some parts of this talk it could be a degenerate diffusion, but for the sake of clarity I take it positive. The SDE for X is as before, except that now I take a very simple drift: the drift is simply v, the control, so basically the agents control their speed, and that's it. Note that the control is of feedback form, a function of X, which means that the agent at position x just looks at his position and then knows what to do: he uses speed v(x). And here I am writing directly the limit equation for an infinity of agents, but you could start with a finite number of agents, all using the same feedback v, and then let the number of agents tend to infinity, and you would end up with this equation; here I refer to some recent work of Daniel Lacker for doing that in a precise sense. Anyway, once you have this SDE you can write, at least formally, the PDE satisfied by the associated distribution, m superscript v: it satisfies this Fokker-Planck equation, with the initial distribution m0 given. In the divergence we have m times v, because v is exactly the velocity field that you use. Now, the goal is to minimize J(v) as before, but now L is a function of m in a local sense: in other words, L depends on m only through m at position x. I do not denote it with brackets, because now it is not a function of the whole density, only of m at position x. So the agent pays a cost that depends on his position, the density at his position, and his control, and there is a final cost; here I take something slightly simpler than on the previous slide, with u_T a function of the position at time T only, not of the whole distribution at time T, just for simplicity. Now, the first line goes with the SDE, the stochastic control problem, but I can rewrite it as a deterministic control problem; that is the second line, where I expand the expectation as an integral against the density, because X is a random variable whose density is m superscript v, given by the Fokker-Planck equation. So now it is a problem of optimal control of the Fokker-Planck equation, purely deterministic, and I am going to focus on this aspect. It has been shown in the book of Bensoussan, Frehse and Yam that, if there exists a smooth feedback v hat which is optimal in the sense that it realizes the minimum of J, then v hat has to be of the following form: it is given by the derivative of H with respect to its third variable, D_p H, evaluated at m and Du. When I write this, H is the Hamiltonian associated with the running cost by convex duality; this is a given of the problem, somehow: if you have L, you know your Hamiltonian. What is not known is m and u, so I should specify what these objects are. It is no surprise: m satisfies the Fokker-Planck equation as before, except that v in the divergence is replaced by its expression in terms of the Hamiltonian, and u satisfies an HJB equation which involves the Hamiltonian, but also the derivative of the Hamiltonian with respect to m, the term in red in the HJB equation.
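For reference, the forward-backward system just described can be written out as follows; this is a reconstruction from the spoken description, so the sign conventions and the exact placement of the extra term are my reading of the talk, not a verbatim copy of the slides.

```latex
% Mean field control optimality system with local coupling:
% backward HJB-type equation for u (with the extra m * dH/dm term, which is
% absent in the mean field game system), forward Fokker--Planck equation for m,
% and the optimal feedback expressed through D_p H.
\[
\begin{aligned}
&-\partial_t u - \nu \Delta u + H\bigl(x, m(t,x), Du(t,x)\bigr)
   + m(t,x)\,\partial_m H\bigl(x, m(t,x), Du(t,x)\bigr) = 0, && u(T,\cdot) = u_T,\\
&\;\;\partial_t m - \nu \Delta m - \operatorname{div}\!\bigl(m\, D_p H(x, m, Du)\bigr) = 0, && m(0,\cdot) = m_0,\\
&\;\;\hat v(t,x) = -\,D_p H\bigl(x, m(t,x), Du(t,x)\bigr).
\end{aligned}
\]
```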
However, note that here, since L has a local dependence on m, H also has a local dependence on m, which means that H depends on m at time t and position x. So m in this equation is just a real number, and this d_m H is nothing fancy: it is just a derivative with respect to a real number. Also, this term in red disappears in the mean field game: if you know mean field games, you can do the same thing, derive a Fokker-Planck and then an HJB equation that are very similar, but this term in red is not there. So this is a system of forward-backward coupled PDEs, because the Fokker-Planck equation is forward in time with m(0) given, and the HJB equation is backward in time with u at time T given by the final cost. Now let's move on to congestion. Yes? So u is the value function of the problem? Yes, of this problem, but it is not exactly the value function. Thank you, that is a good question: if we go back to the first slide, you see that J(v) actually depends on the initial distribution that we choose. So, as you know, if you write the dynamic programming principle you get a value function that depends on the control and on the initial distribution, so a value function on the space of distributions. Strictly speaking, if you do dynamic programming on this optimal control problem you get the master equation, or something similar to it, on the space of distributions; but if you specialize it in a suitable way you get this HJB equation. So this HJB is not, strictly speaking, the HJB associated with this problem, but it derives from it. In other words, the HJB equation of this problem would be posed on the space of probability measures, but once you fix the probability measure to be m, following the Fokker-Planck equation, you get a function of the variable x only, because it is coupled with the right distribution. So you do not need to solve the problem on the whole space of distributions. So, congestion. Actually, the model I am going to present takes into account two different effects. The first one, the main one, is congestion. By congestion I mean that agents do not want to move in a very crowded region: if you have a lot of people around you, you do not want to move, because it is very costly, you have to make a lot of effort. There are actually two ways to model that. One is to impose a hard density constraint: you impose that the density cannot exceed a certain threshold, some constant, which means that if the number of people in a region is already at its maximum, then you cannot enter the region. What I am going to do now is rather a soft way of taking congestion into account: we do not impose a hard constraint, there is no fixed maximum for the density, but the agents pay a high price when they move in a region where there are a lot of people. The running cost they pay depends on their speed and on the number of people around them. And there is another effect, aversion, which means that you do not want to be in a crowded region even if you are not moving. So let's see how to do that. I will consider the following Hamiltonian H, as a function of x in the torus in dimension d.
m, again, is just a positive real number, because it will be the density at time t at position x, and p will be, say, the gradient of u. The Hamiltonian is of the form: a power of p over a power of m, plus some l that depends only on m. Stated like this, it is maybe not very clear what this means, so let's take a look at the associated Lagrangian, the running cost. We do the duality and derive an explicit expression for the Lagrangian. It does not look very nice, because we started from the Hamiltonian; this is what will appear in the PDE system, so I start with the Hamiltonian, but you could also start with a Lagrangian of this form and then derive the associated Hamiltonian. Anyway, this Lagrangian looks like a power of m times a power of psi, plus l, and think of psi as the control v. In other words, you see that this term in red means that if you are moving, psi is positive and it gets multiplied by a power of m: if m is very large, you pay a higher price than if m is small when you try to move. Fix psi, in other words v, to some value; then, depending on m, you pay a high price or a small price, so it is more interesting to move in regions where m is small, because this term in red is smaller. Now, you see that the term l(x, m) may encode two things; it does not depend on your speed, but it encodes a spatial preference, for instance you prefer to be at this x rather than that x, and also aversion. Think of l as an increasing function of m: even if you do not move, even if psi is zero, you still pay l, so you do not want to stay in a region with high density. And there will be a trade-off between these effects; there are actually three parameters, alpha, beta and l. Let's take some extreme cases just to build intuition. I recall the notation first, and then take special case one: beta equal to two, so the conjugate exponent beta star is also two, and alpha equal to zero. In this case you can write the Hamiltonian and the Lagrangian, and you get a Lagrangian of the form psi squared plus l, so there is no congestion: you pay a price that is quadratic in your speed, but it does not depend on how many people are around you. Whereas if you take alpha equal to one, then it depends a lot on the number of people around you, because you have m times psi squared. So, now that you are familiar with this model, we can play a little bit. The numerical simulations that I am going to show are actually produced with the methods that I will present later on, but here they are just for the purpose of illustration. Let's start with an initial distribution which looks like this: the domain is the torus between minus 0.5 and 0.5, the two horizontal dimensions here are space, and this vertical value is the value of the density, which is uniform on some square inside the domain. On the right you have the value function at the final time, u_T; you see that this value function is also uniform and positive inside the same square. In other words, if you stay there at the end, you are going to pay this price, one, whereas if you stay outside you pay nothing; so of course at the final time you should be outside this square, because you do not want to pay this price. So let's see what happens. Here I want to illustrate the impact of little l, the cost of aversion.
So I start with a quite small l, linear in m, let's say, with a small coefficient 0.01. Here is what happens; now it is only the density and its evolution in time. You see that the agents indeed evacuate the square in the center, and at the end you have a high concentration around the boundary of the square, because people do not have much incentive to go very far: they just do not want to be inside, and if the cost of aversion is not very high, as here, it is okay to stay in the crowded region. Now, if we take a larger l, what do you expect? Yes, it will be more uniform; actually it even becomes completely uniform at some point, then there is a transition, and then everyone remembers that they should exit the square before the final time, so they do it, but you see that we get something much more uniform. Now, the impact of alpha. I start with a distribution that is split into two parts, each of mass one half: one is a very peaky hump and one is a very small hump, but they have the same mass, one half. And they are attracted towards the center because of the final cost u_T: you pay a small cost if you are around the middle of the domain and a higher cost if you are far from the center. The evolution is like this, and note that here I rescale the vertical axis, because otherwise we do not see much. After a short time you see that the two humps are more or less of the same size; they look pretty similar and they are moving towards the center. Another remark is that the agents do use the periodic conditions, because you see that there is some mass on the left side here which comes from the other side: this is the shortest path that avoids the crowded region. Then the mass converges towards the center, and at the end you see that we have something very peaky, but we do not have a Dirac mass, because there is still the cost of congestion and aversion. So this was for alpha equal to 0.01. Now, if we take a larger alpha, what do you expect? Remember, alpha is the exponent of m, so a larger alpha means a stronger congestion effect. And indeed you get something like this: if I stop around, let's say, time 0.50, you see that the hump on the left side remains higher for a long time, and this is because the agents at the center of this hump feel the congestion a lot; basically they cannot diffuse as before. Before, it was cheap to escape from congestion, but now it is not possible: if you are in a congested region you cannot escape, you have to stay there until the agents in front of you have moved around you. At the end we get something similar to the previous case. So, for this model (I recall here the notation for the Hamiltonian), note that this Hamiltonian is not well defined when m is 0, and this will be the main difficulty of the problem. You see that in the PDEs I write H, but H is not defined when m equals 0, and you have seen in the numerical simulations that in some regions, even open portions of the domain, m is 0. So how does that work? Well, to deal with that we have to introduce a notion of weak solution, and we can show existence and uniqueness for this kind of solution. The way we do that is that we need some estimates on the solutions.
So we say that (phi, m) is a weak solution to the PDE system provided you have some estimates, but the main point is that the HJB equation on phi (phi, because phi is not exactly u) is satisfied in the sense of distributions and only with an inequality; the Fokker-Planck equation is satisfied in the sense of distributions as well; and we also need some technical assumptions, plus this identity in red. So it is a bit technical, but stay with me, there are just a few technical slides in this talk. Now, why do we introduce this kind of solution? It is not obvious that this is the right notion of weak solution. Actually, this approach is similar to one that has been used in mean field games; it goes back to the first papers of Lasry and Lions and then Cardaliaguet, Graber, Porretta, Tonon and others, where the authors use the variational structure of the problem. I think that approach worked because those mean field games had a variational structure, which is not true in general; but for mean field control it is true in general, because we always have a control problem, so there is always a variational structure. This relies on the natural interpretation of the PDE system as the optimality condition of some optimization problem over a Fokker-Planck equation, the problem I wrote before. To this problem we can associate a dual problem using the Fenchel-Rockafellar theorem, and then we introduce this kind of weak solution because it is compatible with these optimization problems, in the sense that there is a correspondence: if I give you a weak solution to the PDE system, you can derive a pair of solutions to the primal and the dual problem, or rather a relaxed version of the dual problem, and vice versa: if I give you a solution to each problem, you can derive a weak solution. Also, this notion of weak solution allows m to be zero in some regions, and we need that in order to make sense of the Hamiltonian in the system. I am not going to do the proof, but I am going to write the two optimization problems, because they will be needed to present the numerical method. The first problem is similar to the problem I started with: if you look at the definition of B, it is basically the running cost plus the final cost, and there is a constraint that m and z should be in this set K1, which basically says that m satisfies the Fokker-Planck equation, except that it is a linearized version, because we made a change of variables: z replaces m times v in order to have a linear equation. We have to take that into account in the definition of L tilde: L tilde is basically m times l, except for this change of variables, so v equals z over m, and the cost L tilde is exactly what you expect, provided everything is well defined. Now, if m and z are both zero we just put zero for the cost, but in the other degenerate cases we put plus infinity, because we want to forbid that; and since we are minimizing, L tilde should not be equal to plus infinity. This cost is convex, which allows us to do convex duality, and we can derive the dual problem, which can be written as follows: it is a sup over phi (which plays the role of u, somehow) of A(phi), where A(phi) is the infimum over m of A(phi, m), with A(phi, m) defined as on the fourth line. You see that basically it is the integral of what appears in the HJB equation, except that we do not have the derivative of H with respect to m.
We can rewrite A(phi) in different ways, and in particular as this term in purple, which I am going to use for the numerical method: it is an infimum of the sum of two functions, a function capital G that depends on Lambda phi (which is, basically, the derivatives of phi) plus a term F that depends on phi only. The numerical scheme relies on the augmented Lagrangian. To write the augmented Lagrangian, I start from the problem on the previous slide, this term in purple, and I introduce an auxiliary variable, because it is not very convenient to do this minimization directly: I introduce a variable q that is required to be equal to Lambda phi, and then I write that I should minimize F(phi) plus G(q). So far I have done nothing, I have just introduced an auxiliary variable. But now I can write the Lagrangian associated with this constrained optimization problem; it is not the Lagrangian of the optimal control problem from before, it is the Lagrangian of this constrained problem, and I must find a saddle point of this Lagrangian, where sigma is the Lagrange multiplier. This is equivalent to finding a saddle point of the augmented Lagrangian, which is the Lagrangian plus a penalization term penalizing the constraint q minus Lambda phi, with some parameter r; think of r as being, say, one. These two problems are equivalent: finding a saddle point for the first one is equivalent to finding a saddle point for the second one, and the reason we introduce the augmented Lagrangian is that it somehow speeds up the numerical convergence. Here is the first thing we could do with this Lagrangian. We want to find a saddle point; one way to do that is to minimize the Lagrangian with respect to phi and q, and then update sigma with a gradient step. We start with some initial phi 0, q 0, sigma 0 and then we construct iteratively some approximations of the solution: first we minimize over phi and q jointly, then we do a gradient step in sigma, and it can be shown that this converges towards the saddle point. But actually we can do something slightly smarter: we can decouple the problem in phi and q. This is called the alternating direction method of multipliers. The idea is that first we fix q from the previous iteration and minimize in phi; you see that this is a simpler problem than before, because we have just one variable and some terms disappear. Then we do the same thing for q: we fix the phi we have just computed and update q, and then we update sigma by the same gradient step as before. So we split a single minimization in two variables into two successive minimizations. The first step amounts to solving a finite difference equation for phi, if you write the optimality condition; the second step amounts to computing a proximal operator, which is non-trivial in general, but for the congestion model I have presented we can do it almost explicitly: there is just one equation to solve, the root of a polynomial in dimension one, so it is not too hard. And the last step is very straightforward. For the numerical implementations that I have shown before, and that I will show now, we take nu equal to zero, because otherwise, in the finite difference equation that phi has to solve in the first step, you get something like a bi-Laplacian, and then it seems harder to solve numerically. So we would need some methods to do that more efficiently; but if you take nu equal to zero, the matrices are sparser, so it is easier to solve.
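To make the three ADMM steps concrete, here is a generic Python skeleton for minimizing F(phi) + G(Lambda phi) by alternating minimizations and a multiplier update. This only illustrates the structure described in the talk, on a made-up quadratic toy problem (a one-dimensional total-variation denoising); the actual scheme of the talk uses a finite-difference operator Lambda coming from the PDE and a proximal step solved through a one-dimensional polynomial equation, which I am not reproducing here, and all names and parameters below are mine.

```python
import numpy as np

def admm_toy(a, lam=1.0, r=1.0, n_iter=200):
    """Generic ADMM for  min_phi  F(phi) + G(Lambda phi)  on a toy problem:
      F(phi) = 0.5 * ||phi - a||^2,   G(q) = lam * ||q||_1,
      Lambda = forward-difference operator.
    Same three steps as in the talk: minimize in phi, minimize in q
    (a proximal step), then a gradient step on the multiplier."""
    n = len(a)
    Lam = np.diff(np.eye(n), axis=0)      # (n-1) x n forward differences
    q = np.zeros(n - 1)
    u = np.zeros(n - 1)                   # scaled multiplier sigma / r
    M = np.eye(n) + r * Lam.T @ Lam       # matrix of the phi-step (a linear solve here)
    for _ in range(n_iter):
        # step 1: minimize the augmented Lagrangian in phi
        phi = np.linalg.solve(M, a + r * Lam.T @ (q - u))
        # step 2: minimize in q  ->  proximal operator of G (soft-thresholding)
        v = Lam @ phi + u
        q = np.sign(v) * np.maximum(np.abs(v) - lam / r, 0.0)
        # step 3: gradient step on the multiplier
        u = u + Lam @ phi - q
    return phi

a = np.sin(np.linspace(0, 3, 50)) + 0.1 * np.random.randn(50)
phi = admm_toy(a, lam=0.5)
print(phi[:5])
```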
Now, just to conclude this section, I would like to present a case where we take into account other boundary conditions, because periodic boundary conditions are not very realistic. To do that, at least numerically, we take, say, the square, but we define the Hamiltonian differently inside the domain and on the boundary: on the boundary of the domain we require that the speed point towards the interior of the domain. Once you have defined your Hamiltonian like this, you derive the Lagrangian by duality and you can redo the same thing as before. This tells you that when you are on the boundary you cannot take a speed that would let you go outside the domain. This is for the boundary of the square, but you could also put obstacles inside the domain. Then the equation for phi and the maximization in q will be slightly different on the boundary, but it does not change the ADMM scheme too much. So let's see an example. We start with m0 uniform over a small square in this corner, and we take a cost that attracts people to the other side of the domain: it is minimal on this little square, so the agents should be concentrated on that square at the end of the simulation. Now I am not going to play with the parameters; I am going to play with the boundary conditions. First, periodic boundary conditions: what do you expect? The mass uses the periodicity to travel very quickly to the other side of the domain, so it never actually crosses the domain, and at the end you have something with a very high value on this side, because most of the mass comes from that side. Now, if we put state constraints on the boundary of the domain, then of course the mass cannot use the periodicity, it has to cross the domain, and then the final mass looks like this, very concentrated on this side, because it comes from here. Now, if I put an obstacle in the middle (you do not see it here because it is encoded in the Hamiltonian, it is not drawn in the domain), say a little square that the mass has to avoid, you see that the mass splits into two parts and goes around the obstacle, but the final shape is more or less the same. Very briefly, and it will be much shorter, I will present a second model, which is actually smoother; mathematically it is somehow simpler. Let me write the model: you have the same dynamics, the agents control their speed, but now the running cost is a non-local function of the density. I use the same notation with brackets because it may depend on m in a non-local way, and indeed it will. The PDE system is very similar to the previous case, except that, since the dependence is non-local, the derivative with respect to m is defined differently: it is the Gâteaux derivative, assuming it exists, and then we integrate it against the density m. This is the system of PDEs you get in general for a mean field control problem with a non-local dependence in the running cost. Of course, there could also be a dependence on m in the final cost, and then in the terminal condition you would also have u_T evaluated at (m_T, x) plus the derivative of u_T with respect to m.
So you would also have an additional term there; but for simplicity I take u_T depending only on x and not on m_T. I will just give two examples for which we are able to show existence of classical solutions, so here we do not need weak solutions. Actually there is a larger class of Hamiltonians for which this works, but for simplicity I take only two examples. The first one is H1; both have more or less the same form, H1 and H2, but H2 is sub-quadratic in p. Both depend on some positive parameter alpha, and there is a regularity assumption. Both are also non-local in m, because m is convolved with some regularizing kernel, rho 1, and also with rho 2 inside f. The main point is that there is this regularization with respect to rho 1, and also, if you look at the denominator, there is a constant: 1 plus m to some power, so it never vanishes. You could replace the 1 in the denominator by some small constant, but in any case it is never zero, so the Hamiltonian never degenerates when the mass is zero. Moreover, I take a positive diffusion, a positive volatility, so the mass will essentially never be zero. The way we show existence of classical solutions is reminiscent of the techniques of Lasry and Lions, by a fixed-point argument, the same techniques used in mean field games, and we also have some arguments to show uniqueness of solutions, based on some convexity properties. Now I am going to show some numerical results related to this model. Actually, for simplicity of implementation, I will not take a non-local model; I will take a local model, but still you notice that the denominator is 1 plus m, so it is never zero and we do not have the same problem as in the previous setting. The two constants c1 and c2 are chosen so that the numbers make sense for interpreting the congestion model. Here I am going to represent the evacuation of a room, let's say like this one, maybe, where people are sitting in rows and there are some obstacles in the middle, like tables. They have to exit the rows and then go to the exits, which are located in this corner and that corner. The final cost will be of this form, on the right: it is very high at the back of the room, but at the front of the room it is very cheap, so basically it encourages you to go to the front of the room. We add Neumann conditions on the walls and on the obstacles, and Dirichlet boundary conditions on the doors, so people can exit through the doors and the mass will vanish by the end of the simulation. The simulations are based on Newton methods that are described in these papers by Achdou, Capuzzo-Dolcetta and others, which I am not going to describe here. They can be applied to this local Hamiltonian but also to a non-local Hamiltonian; it does not change much, it is just a bit more tedious to implement, so here I took a local Hamiltonian, but it does not change much for the numerical implementation. On the left I will show the simulation for the mean field control problem: I take the PDE system of the mean field control problem and solve it with Newton's method. On the right I take the PDE system of the mean field game, so without this term in red, the derivative of H with respect to m. Let's see what happens. You see that after a short time people start going to the sides of the room in both cases, but in the mean field game, on the right, you reach higher values.
I think it is even clearer like this, maybe. Both plots are scaled in the same way, 4.5 being the maximum, and you see that on the right you have higher values than on the left. Somehow this is because the players are more selfish: they try to push and exit as fast as possible, but maybe that is not the optimal thing to do if you look at a social optimum. Then people exit through the doors, and at the end no one is left in the room. You see that towards the end of the simulation both look very similar, and this is because the time horizon is the same: everyone should have exited by this fixed time horizon. We do not consider the problem of exiting as fast as possible; the problem has a fixed time horizon, and we just compare mean field control and mean field game. So, to conclude: I have presented two models of mean field control with congestion effects, existence and uniqueness results for the PDE systems, and some numerical methods. Some ongoing work: for the congestion model, I am looking at the ergodic problems for the different congestion models, and also at what happens for several populations, say two populations. On the numerical side, there are at least two projects in the CEMRACS related to mean field control. One is to take this ADMM method with a positive diffusion; I mentioned that for now we take nu equal to zero, because otherwise we have trouble solving some finite difference equations, so we try with nu strictly positive. There is also a project on stochastic methods for a class of mean field control, or control of McKean-Vlasov, problems. Then some perspectives. It could be interesting to understand the exact relationship between hard and soft congestion; so far we do not have a very precise picture of the relationship between these two kinds of models. We could look at boundary conditions in the theoretical framework: the simulations work for some boundary conditions, but from the theoretical point of view it is still not very clear. And I have not presented the proof of existence of weak solutions, but the proof is not very straightforward, so we could also look for a more direct or simpler proof of existence of weak solutions. Here are some references. Thank you. Thank you. Okay, let's thank Mathieu again.
The theory of mean field type control (or control of McKean-Vlasov dynamics) aims at describing the behaviour of a large number of agents using a common feedback control and interacting through some mean field term. The solution to this type of control problem can be seen as a collaborative optimum. We will present the system of partial differential equations (PDE) arising in this setting: a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation. They describe respectively the evolution of the distribution of the agents' states and the evolution of the value function. Since it comes from a control problem, this PDE system differs in general from the one arising in mean field games. Recently, this kind of model has been applied to crowd dynamics. More precisely, in this talk we will be interested in modeling congestion effects: the agents move but try to avoid very crowded regions. One way to take into account such effects is to let the cost of displacement increase in the regions where the density of agents is large. The cost may depend on the density in a non-local or in a local way. We will present one class of models for each case and study the associated PDE systems. The first one has classical solutions whereas the second one has weak solutions. Numerical results based on the Newton algorithm and the Augmented Lagrangian method will be presented. This is joint work with Yves Achdou.
10.5446/57408 (DOI)
Thank you very much for the invitation to give this talk. This is joint work with Elisabetta Carlini from Sapienza University of Rome. The aim of this talk is to describe some numerical methods to discretize the well-known Fokker-Planck-Kolmogorov equation, in a rather general nonlinear case. And please, if you have any questions, if something is not clear, tell me; we have time. So let's go. Here is the plan of the talk: I will give a small introduction to the PDE that we will study, then we will discuss the numerical scheme and its properties, and we will end with applications to mean field games and also to problems coming from pedestrian dynamics, known as the Hughes model. This is important: this is the equation that we will consider throughout the entire talk. You have this equation, the Fokker-Planck-Kolmogorov equation: the time derivative of m, then the second-order term, whose coefficients a_{ij} depend on m, plus the transport term, the divergence of b (which depends on m) times m. The important feature is the dependence of the nonlinearities, the coefficients, on m: b, the drift, as a function of m, is a function of the whole path of the measure, so you have to see m as a function of time taking values in the space of probability measures. In fact, this is a rather general equation, because it is also non-local in time: b is defined on the space of such paths, times R^d, times [0, T]. That is b; a_{ij} is built from the square of sigma, where sigma is like b, a function of the whole path of m, times R^d, times time. So this is a rather general equation; we will see later that mean field games are a very particular case of this equation, and it is very easy to see. For the definition of a solution, you multiply by a test function, integrate by parts formally, and you give a definition in a weak sense, given by this expression here: m solves this Fokker-Planck equation if and only if, for each test function phi, you have this equality. Note that I write dm_0 and dm_s, meaning that I am not necessarily looking for absolutely continuous measures: this is an equation for measures. Is that clear? Okay, good. The idea is that, formally, a solution of this equation will be given by a law: m is a measure-valued trajectory, and it will satisfy, formally, that m at time t is the law of X_t, where X is the solution of this stochastic differential equation. This is in some sense the McKean-Vlasov equation, but the coefficients depend on m over the whole history and also in the future. So X solves that. This is formal, because we would have to establish existence of solutions and all that, but anyway it gives an idea of the link between the PDE and the stochastic differential equation behind it. Good. Let me give some important references for the case of linear equations, where b and sigma do not depend on m. In that case, if you take sigma equal to zero, so you are in the first-order case, and b is not regular, then you have several results, one by DiPerna and Lions and then by Ambrosio, et cetera, about existence and uniqueness for this equation.
And when sigma is not zero, so you are in the second-order case, and b is still not regular, you also have results by Le Bris and Lions and by Figalli, et cetera. There are also people interested in the Fokker-Planck equation itself: I advise you to look at a book that I recently discovered and that I find very important, by Bogachev, Krylov, Röckner and Shaposhnikov, where they give a detailed account of this equation. Well, in the nonlinear case, in the generality I presented before, there are also some interesting results by Bogachev, Röckner and Shaposhnikov when sigma is equal to zero, and in the stochastic case by Manita and Shaposhnikov in 2012 and 2013. These are important papers, I think, on this subject. Good. So, now that I have presented the equation, we will discuss the scheme. In order to present the scheme, for pedagogical reasons, I will start with the simplest case, which is sigma equal to zero, so a deterministic framework, and moreover with b independent of m. I will take that case and then we will move forward. In that case, under suitable assumptions on b (I will not discuss the technical assumptions on b and sigma yet; I can suppose that they are very smooth, with good properties, in order to derive the scheme, which will then apply to more general coefficients), so suppose b is regular, Lipschitz. Then you have a solution of the equation x dot equals b(x), and with this equation you can define a flow: to each initial condition you associate the value of the solution at time t. That is what I call capital Phi: Phi(s, t, x) is the value of the solution at time t when the initial time is s and the initial condition is x. So it is a function of three variables: the initial time, the time at which I look at the solution, and the initial condition. And once you have this flow, you can prove that the solution of the Fokker-Planck equation is given as the image measure, also called in English the push-forward measure, of the initial measure m0 under this flow, seen as a function of the initial condition: to compute m at time t, you take initial time 0 (because you will transport m0), you compute the flow up to time t, which gives you a function of the initial condition, and then you transport the initial measure with this flow. Let me recall this notation for those who are not aware: for a function F, the push-forward of m0 by F, evaluated on a set A, a Borel set for example, is the new measure defined by m0 of F inverse of A. This is the image measure. Good. This is very important, because now you have a trajectorial interpretation of the solution of the Fokker-Planck equation in terms of that ODE.
So once you understand that, you say: in order to discretize the Fokker-Planck equation, instead of discretizing the Fokker-Planck equation itself, I can discretize the flow. How do you do that? You take an Euler scheme, for example. You take a time step delta t, dividing capital T by a natural number N, and you set t_k equal to k times delta t, so you have a grid in time. Good, that is standard. Once you have that, you can define your discrete flow with the Euler scheme: the small k in phi_k means that you do only one step, from time k to time k plus 1, so the flow at time k starting from point x is x plus delta t times b(x, t_k). You then define this flow recursively, and the natural time discretization of the equation is that the measure at time k plus 1 is the measure at time k transported by the flow phi_k, because you do only one step. Is the idea clear? Good. But then you do not yet have a computable method, because you have only discretized in time, so now we also discretize in space to get something implementable. The space discretization: you take a space step delta x and you define your space grid; I am working in R^d, so I take the grid in the whole space, G_{delta x} being the set of points x_i equal to i times delta x, with i in Z^d, a grid in R^d of mesh delta x. Then you consider a P1 basis, that is, functions that are affine on each element: using this grid you can construct a triangulation, for example, and then take these P1 functions. But I do not want to make it too complicated, so let's think only in dimension one. In dimension one, my basis function beta_i is the hat function: here you have x_i, here x_{i+1} equal to x_i plus delta x, and here x_{i-1} equal to x_i minus delta x; the function beta_i associated with that node is equal to one at the node, zero at x_{i+1}, zero at x_{i-1} and everywhere else, and affine in between. That is the function beta_i. Good. Now we will use these functions beta_i to do the discretization in space. How do we discretize? I do the same as before: I have my discrete flow, but now space is discrete, so I only define my trajectories starting at the points of the grid: phi_{i,k} is the discrete characteristic starting at the point x_i. And I will define my approximate measure at time k as a sum of Dirac measures located at the grid points, weighted by some weights that I have to compute; so there are weights at each point of the space grid, and also in time. How do I compute these weights? At time zero, let me draw a picture: this is x_i, this is x_{i+1}, and this is x_{i-1}. Around x_i, take the interval reaching halfway to the neighbours, so up to x_i plus delta x over two.
So this is the small interval, which we call E_i, and my initial measure is defined as follows: the weight that I put at the point x_i at time k equal to zero is the measure m0 of that interval, m0(E_i). That is how I define the weights at time zero. Then I have to compute the weights at the next times. How do I do that? The formula with the beta_i looks complicated, but I will try to explain it with a picture. Here you have the point x_i; I want to compute the weight at that point at time k plus 1, so this is time k plus 1, and here I have my hat function beta_i. To compute the mass, the weight, at this point, I look at the previous time, time k, and I look at all the characteristics phi_{j,k} that arrive in the support of beta_i; then I transfer the mass that arrives, weighted in inverse proportion to the distance to the point x_i. That is the idea. For example, at time k I have the point x_j carrying a mass m_{j,k}. To see how much of the mass at x_j is transferred to x_i, I look at the discrete characteristic phi_{j,k}: if it arrives in the support of beta_i, I transfer mass from x_j to x_i, proportionally to the distance. For example, if the characteristic arrives exactly at the midpoint between x_i and x_{i+1}, I put half of the mass of m_j at x_i and the other half at x_{i+1}; if phi_{j,k} arrives exactly at the point x_i, all of the mass m_j goes to x_i. That is the contribution of this one term, but then I have to go over all the other points of the grid and see where their discrete characteristics go, and that is why you have the sum weighted by beta_i. So that is the idea, and if this is clear, it is the basis of the scheme. Okay. So that is the scheme, and, well, you can prove that it works; the convergence of all these schemes will be a consequence of the general result in the nonlinear case, so here I am only showing the scheme. Good. Now, what happens in the stochastic case? In the stochastic case, we do not have these deterministic characteristics, because you have a second-order term, and this second-order term is linked to a diffusion. So what is important, as I said before, is the representation formula. In the stochastic case, what are the characteristics? I am considering the stochastic case, but still linear, so b and sigma do not depend on m. In the stochastic case, your flow is defined as follows: X at s is the solution of dX at s prime equal to b(X at s prime, s prime) ds prime plus sigma(X at s prime, s prime) dW, with the Brownian motion W, and X at time s equal to x.
Now, what happens in the stochastic case? There we no longer have deterministic characteristics, because the second-order term corresponds to a diffusion. What is important, again, is the representation formula. In the stochastic but still linear case, so when B and sigma do not depend on m, the flow is defined by a stochastic differential equation: X(s') solves dX(s') = B(X(s'), s') ds' + sigma(X(s'), s') dW(s'), with the Brownian motion W, and X(s) = x. This is the analogue of the characteristic, but now it is stochastic. And in fact you have the same kind of representation formula: the solution m_t of the Fokker-Planck equation is a measure, and if I want to compute that measure on a set A, I solve this equation, which gives me a stochastic flow, depending on the randomness omega; for each omega I have, in some sense, a deterministic flow, I transport the initial measure with it, which gives me a random measure, I evaluate it on the set A, and then I take the expectation. So it is exactly as before, except that you have to take a mean over the randomness of these flows; Phi_{0,t}(omega) is the stochastic flow. Then, as before, we have to discretize the flow, so I will discretize the stochastic differential equation. I am skipping the step where one first discretizes in time and then in space, and I do everything in one go, since you know how it works: I have to compute the characteristics starting from the points of the grid. The idea is to discretize the Brownian motion between t_k and t_{k+1} by the simplest thing possible, a random walk: starting from the point x_j, the Brownian increment goes up with probability one half or down with probability one half. So the discrete flow now has two possibilities, a plus and a minus: Phi_{j,k}^{±,l} = x_j + delta t B, which is the Euler part, plus or minus the square root of delta t, which is the natural scaling of the Brownian motion, times sigma^l. The index l is there because you may have several Brownian motions, say r of them; to each Brownian motion is associated a column of the matrix sigma, sigma^l being the l-th column, and for each column you move up with probability one half or down with probability one half. So you have this discrete flow, which is very natural. And then you do the same as before: the representation formula tells you that the measure at time k should be a sum of Dirac masses times some weights, and you compute the weights using these discrete characteristics. At the initial time you proceed exactly as before, and to go from time k to time k+1 — take r equal to one, a single Brownian motion, to simplify the discussion — you have a formula similar to the deterministic one, except that you have to evaluate the two characteristics coming from the two branches of the random walk, each one with probability one half. Good. So that is the scheme in the second-order case.
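In code, the second-order case changes very little: each grid node now sends half of its mass along each of the two branches of the random walk. Again this is only a rough illustration — one space dimension, a single Brownian motion, the same grid-truncation caveat as before — with b and sigma passed in as placeholder callables.

```python
import numpy as np

def sl_fokker_planck_1d_diffusion(b, sigma, m0_cdf, x_min, x_max, dx, dt, n_steps):
    """Same mass-transport scheme as above, with the Brownian increment replaced
    by a random walk: each node sends half of its mass along
    x + dt*b + sqrt(dt)*sigma and half along x + dt*b - sqrt(dt)*sigma."""
    x = np.arange(x_min, x_max + dx, dx)
    m = m0_cdf(x + dx / 2) - m0_cdf(x - dx / 2)

    def scatter(points, mass, out):
        # split 'mass', arriving at 'points', between the two neighbouring nodes
        j = np.clip(np.floor((points - x_min) / dx).astype(int), 0, x.size - 2)
        theta = np.clip((points - x[j]) / dx, 0.0, 1.0)
        np.add.at(out, j, (1.0 - theta) * mass)
        np.add.at(out, j + 1, theta * mass)

    for k in range(n_steps):
        drift = x + dt * b(x, k * dt)
        noise = np.sqrt(dt) * sigma(x, k * dt)
        m_next = np.zeros_like(m)
        scatter(drift + noise, 0.5 * m, m_next)   # "up" branch, probability 1/2
        scatter(drift - noise, 0.5 * m, m_next)   # "down" branch, probability 1/2
        m = m_next
    return x, m
```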
It is important, from the theoretical point of view and in order to prove the convergence of the schemes, to notice that these weights are exactly the transition probabilities of a Markov chain — a Markov chain that is discrete in space and in time and that, in some sense, discretizes the diffusion. Its initial law is the discretized initial distribution, its transition probabilities are computed by that formula, and if you compute the marginal laws of this Markov chain you recover exactly the measures given by the scheme. That is the point. Now, in the nonlinear case there is a problem, because the whole idea of building things recursively is linked to the Markov property, and for these McKean-Vlasov type equations you lose the Markov property. But if you freeze the measure in the coefficients, you get a standard diffusion, for which you do have the Markov property, and all the schemes I presented make sense. So the natural thing to do is: I freeze the measure, I run my scheme, and then I have to compute a fixed point. The resulting scheme is exactly the same as before, except that the measures m also appear inside the nonlinearity of the discrete characteristics. Then, assuming only continuity of the coefficients, you can prove that this scheme admits a solution. In the linear case this was trivial — there is only one solution, since the scheme is constructed iteratively — and the same happens here if the coefficients depend on the measure only through its past: recall that my coefficient B_m(x, t) depends on m on the whole time interval, but if its value at time t depends only on the past trajectory of the measure, then the scheme is explicit and has a unique solution. On the other hand, if B at time t depends on the values of m in the future, the scheme is implicit, and in that case the existence of a solution of the scheme comes from a fixed-point argument. And we will see that this implicit case is exactly the case of mean field games. Now the convergence result. The only structural assumption is that the maps are continuous as functions of the measure, of x and of t — there is no Lipschitz assumption with respect to the measure, only continuity — together with linear growth with respect to x, uniform with respect to the other variables. Sorry, what is the question? Which distance am I using on the measures? Yes, that is an important question: let me write it down, and then I will continue with the compactness property that makes use of this distance.
So I am working with the space C([0,T]; P1(R^d)). On P1(R^d), the probability measures with finite first moment, I put the metric d_1, the 1-Wasserstein distance of optimal transport: d_1(m_1, m_2) is the infimum, over all couplings gamma — probability measures on R^d x R^d whose first marginal, the image of gamma under the projection on the x coordinate, is m_1 and whose second marginal is m_2 — of the integral over R^d x R^d of |x - y| d gamma(x, y). This is the 1-Wasserstein distance; in fact, and this is also important for the fixed-point argument, you can put an exponent p here, replace |x - y| by |x - y|^p and take the p-th root, and you get the p-Wasserstein distance, but in the case p equals one there is an alternative formula, the Kantorovich-Rubinstein duality: d_1(m_1, m_2) is the supremum, over all 1-Lipschitz functions f, of the integral over R^d of f d(m_1 - m_2). It is not exactly a duality over Lipschitz functions, but it is something of that kind, and what is important is that with this formula you can embed the metric space P1 into a vector space, and in that way you can use Schauder fixed-point theorems and arguments of that type. Anyway, what matters is this distance d_1. Any other question? Good. So, under these two assumptions, continuity and linear growth, the scheme has at least one solution. But remember that the solution consists of weights on Dirac masses in space, given at the discrete times only, and I have to extend it in time, because the space I want to work with is C([0,T]; P1), the space where the solution of the equation lives; so I extend the discrete measures in time by a simple interpolation. A quick way of evaluating the distance d_1 itself, at least in one dimension, is sketched below.
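As a small aside, in one space dimension the distance d_1 between two measures carried by the same grid can be evaluated very cheaply, since W_1 then coincides with the L^1 distance between the cumulative distribution functions. The snippet below uses this purely one-dimensional shortcut, whereas the talk of course works in R^d.

```python
import numpy as np

def wasserstein_1_1d(x, m1, m2):
    """d_1 distance between two probability vectors m1, m2 carried by the same
    uniformly spaced, increasing grid x, using W_1 = integral |F1 - F2| in 1D."""
    dx = x[1] - x[0]
    F1, F2 = np.cumsum(m1), np.cumsum(m2)
    return dx * np.abs(F1 - F2).sum()
```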
Now we let the parameters go to zero, and here is the important condition for convergence: you take a sequence of space steps delta x_n going to zero and a sequence of time steps delta t_n going to zero, and you require that the square of delta x_n be small with respect to delta t_n. Under that relation you have convergence, and this is a nice property of this scheme — you could use another Markov chain to discretize the diffusion, and the one we took may look strange, but it has this good property. It means that the time step can be large with respect to the space step, which is the opposite of what happens for an explicit finite-difference scheme, where the CFL condition of numerical analysis forces the time step to be small with respect to the space step. This scheme works the other way around, which is interesting if you want to compute solutions over long time horizons, because you are not forced to take very small time steps. The interpretation in terms of Markov chains gives you precisely this: you get equicontinuity of the sequence of measure-valued trajectories — in fact a Hölder-type continuity with respect to that metric — and you also get that the second moments are bounded uniformly. And here there is an important property of the space P1: if you have a family C of measures in P1(R^d) such that the integral of |x|^2 d mu is bounded by a uniform constant K for all mu in C — so, working with the 1-Wasserstein distance, you bound the second moments uniformly — then that family is relatively compact in P1. In P1, not in P2: to get compactness in P_p you have to bound higher moments. So the second estimate tells me that my trajectories take values in a relatively compact subset of P1, and the first gives me equicontinuity, so by Arzelà-Ascoli I have a limit measure in C([0,T]; P1). Then, of course, I have to prove that this limit is a solution of the Fokker-Planck equation, but I will not do that here. So, under these assumptions you always have limit points, thanks to this compactness and the Arzelà-Ascoli theorem, and this is the convergence result: all the schemes constructed in this way have limit points, and each limit point is a solution of the general Fokker-Planck-Kolmogorov equation. In particular this also gives an existence result for the equation. In some sense it is like a Peano theorem — remember that for ODEs Peano only asks for continuity of the coefficients and gives existence — so in that sense this is exactly the analogue of Peano's theorem, but for Fokker-Planck equations. I was convinced that this existence result was new, but in fact it is not: there is the paper I cited before by Manita and Shaposhnikov, from 2012 and 2013, where they prove existence for these general equations, even under assumptions more general than the ones I am using here.
So in fact our contribution is more on the numerical side, but in any case this gives another proof of the existence result. Now, an important point: in the applications I will not always have direct access to B and sigma, so I cannot compute the solutions of the scheme, and then what do we do? We have to approximate the coefficients B and sigma in order to get an implementable scheme. That is the second result: what happens if I take approximations B_n and sigma_n, with linear growth independent of n and continuous in x, whose essential property is that they are good approximations of the coefficients? Good approximation means that if I take mu_n converging to mu in C([0,T]; P1) — remember that these are trajectories of measures — then, as functions of x and t, B_n(mu_n) converges to B(mu) uniformly over compact sets of R^d x [0,T], and the same for sigma. If you have that property, everything works: you construct the same scheme as before, but with B_n and sigma_n, which are computable, and then you can pass to the limit and recover a solution of the original equation again. Good. So now let us go to the applications: mean field games. I will consider a particular instance — of course it can be generalized — of a mean field game system: a Hamilton-Jacobi-Bellman equation coupled with a Fokker-Planck equation, where the Hamiltonian is quadratic and where the velocity field in the Fokker-Planck equation is minus the gradient of the value function v. The important point is that sigma is not zero — there is a typo on the slide — so this is what people call a non-degenerate mean field game system, and under good assumptions on the coupling terms, the functions capital F and capital G that I will discuss later, you have existence of a solution of the mean field game. I do not know how familiar you are with this theory — you have attended the course, so I do not need to recall the meaning of the system — and in any case you will see it right away rewritten as a single Fokker-Planck equation. The key point is that you can write the whole system as only one equation, the second one, the Fokker-Planck equation, but where the gradient of v depends on m, and depends on m through its whole trajectory: to compute the velocity field I have to take the gradient of v(m), and v(m) is the value function of that stochastic optimal control problem, in whose coefficients m appears at future times — I want to compute the value function at the point x at time t as a function of m, and then m appears at all later times. So it is exactly a particular case of the equation we considered before, but with a time dependence that involves only the future and not the past. Is that clear?
Here are the technical assumptions: you take very nice functions f, capital F and capital G, with bounded second derivatives and so on. These are really technical matters, because we are working on the whole space R^d; on a bounded domain you would have to impose boundary conditions and it becomes complicated, whereas on the torus you could forget about boundary conditions and relax these assumptions. So these assumptions matter because of the unboundedness of the space — they also play a role in the convergence — but on a bounded domain I think they could be relaxed. Under these assumptions you have the following very important property: as a function of the three variables m, x and t, the gradient of v is continuous, and it is bounded. The boundedness is not a surprise, because thanks to the first assumption everything is bounded and Lipschitz, so the vector field B is in fact Lipschitz; in any case B is continuous, sigma is a constant, and B is bounded, so in particular it has linear growth, and I can apply the previous theory: I have existence of a solution of the mean field game system, and I can approximate it. But now there is a problem: we do not know this vector field explicitly, because there is no closed formula for v — it is the value function of an optimal control problem. So we have to approximate v, or rather the gradient of v. To compute the gradient we first compute an approximation of v; you can choose more or less any method you like for this, it is not essential, and here we choose a semi-Lagrangian scheme, which computes the value function recursively, backward in time, and gives a function defined at the nodes of the grid. Then you interpolate in space to get a function defined everywhere, but this interpolant is piecewise affine, so it is not differentiable at the grid points, and I cannot take its gradient directly. So I regularize: the first step is just the interpolation in space, and the second step is a regularization of this piecewise-affine function by convolution with a mollifier. But this introduces another parameter, epsilon, the width of the mollifier, and then I have to make sure that when the three parameters delta x, delta t and epsilon all go to zero I still get convergence.
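Here is a schematic Python version of these two ingredients — a semi-Lagrangian backward sweep for the value function and the "interpolate, mollify, then differentiate" step — in one space dimension. It is meant only to show the structure: the quadratic control cost, the finite set of controls over which the minimum is taken, the coupling term passed in as running_cost and the Gaussian mollifier are all assumptions of this sketch, not the exact choices of the paper.

```python
import numpy as np

def hjb_backward_sl(x, dt, sigma, running_cost, terminal_cost,
                    controls=np.linspace(-3.0, 3.0, 61)):
    """Backward semi-Lagrangian sweep, in 1D, for a value function of the type
    -dV/dt + |DV|^2/2 - (sigma^2/2) D^2 V = F(x, m(t)),  V(T) given.

    running_cost[n, i] stands for dt * F(x_i, m^n) (the coupling term with the
    frozen measure), and 'controls' is a finite grid over which the minimum in
    the control variable is taken; both are assumptions of this sketch."""
    n_steps = running_cost.shape[0]
    V = np.empty((n_steps + 1, x.size))
    V[-1] = terminal_cost(x)
    for n in range(n_steps - 1, -1, -1):
        best = np.full(x.size, np.inf)
        for a in controls:
            # expectation over the two branches of the random walk
            up = np.interp(x + dt * a + np.sqrt(dt) * sigma, x, V[n + 1])
            dn = np.interp(x + dt * a - np.sqrt(dt) * sigma, x, V[n + 1])
            best = np.minimum(best, dt * 0.5 * a ** 2 + 0.5 * (up + dn))
        V[n] = best + running_cost[n]
    return V

def mollified_gradient(x, v, eps):
    """Smooth the piecewise-affine value function with a Gaussian mollifier of
    width eps, then differentiate: the 'regularize, then take the gradient'
    step described above."""
    dx = x[1] - x[0]
    half = max(int(4 * eps / dx), 1)
    kernel = np.exp(-(np.arange(-half, half + 1) * dx) ** 2 / (2 * eps ** 2))
    kernel /= kernel.sum()
    v_eps = np.convolve(v, kernel, mode="same")
    return np.gradient(v_eps, dx)
```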
In this setting one can prove that these regularized value functions satisfy a key inequality, which is the discrete counterpart of what is called the semiconcavity of the value function in the continuous case. If the last term of that inequality, the one with delta x squared divided by epsilon, were not there, the inequality would say exactly that the function is semiconcave, meaning that if you subtract a parabola with a suitable coefficient you get a concave function — so it is almost concave, if you want. Once you have that, you have the key to the approximation result, because you see that if I can make that last term disappear in the limit, I recover semiconcavity. So the regularization parameter epsilon has to be large with respect to delta x, in the sense that delta x squared over epsilon goes to zero, and the first condition is the same as before, delta x squared small with respect to delta t. Under these conditions we can prove — and this is the important result, using this semiconcavity property — that the approximated gradients converge to the true gradient, uniformly over compact sets. So I can apply the approximation result stated before, and we obtain the convergence of the scheme for mean field games in any space dimension. These are not entirely new techniques; they are related to what we had already done with Elisabetta in 2014 and 2015, but there we proved convergence only in space dimension one, because we were considering degenerate mean field games, without the sigma term, and this convergence of the gradients can only be proved when sigma is non-degenerate. Moreover, without sigma the scheme is not the same: you cannot work with Dirac masses, you have to work with densities, so it is more complicated in those papers. So now we have a convergent scheme for mean field games in general dimension, and I will end with another application, related to the so-called Hughes model — except that what we propose is a modification of it, which we can prove is well posed. The idea is the same equation as before, exactly the same, and v as a function of m is the value function of the same stochastic optimal control problem, except that when I compute it at time t, it does not depend on m in the future: I look only at m at time t. I am at time t, I see the population, I suppose that the whole population stays like that in the future, and I optimize on that basis, because I have no time to do better. Imagine you want to model a panic situation: there is a fire and you have to escape; you are not going to compute an equilibrium, forecasting what everybody else will do, and only then optimize — you look at the situation right now and you try to escape. So you freeze m at the current time, and then you compute.
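The time-marching structure of this modified model is easy to sketch: at each discrete time you freeze the current measure, compute the corresponding value function (for instance with a semi-Lagrangian sweep like the one sketched earlier), take the feedback drift given by minus the gradient at the current time, and push the measure forward one step with a mass-transport step. The skeleton below only shows that loop; hjb_solver, gradient and transport_step are hypothetical callables passed in by the user, assumed to behave like the earlier sketches.

```python
import numpy as np

def hughes_variant(m0, n_steps, dt, hjb_solver, gradient, transport_step):
    """Explicit time marching for the modified (Hughes-type) model.

    At step k the population m[k] is frozen; hjb_solver(m[k]) returns the value
    function computed as if the population stayed equal to m[k] from now on;
    gradient(V) extracts the feedback drift -DV at the current time; and
    transport_step(m, drift, dt) performs one Fokker-Planck step with that
    drift.  All three callables are assumptions of this sketch."""
    m = [np.asarray(m0, dtype=float)]
    for k in range(n_steps):
        V = hjb_solver(m[k])                         # measure frozen at time k
        drift = -gradient(V)                         # optimal feedback
        m.append(transport_step(m[k], drift, dt))    # push the measure forward
    return m
```

For the true mean field game, by contrast, the measure enters the cost through its whole future trajectory, so this explicit loop has to be replaced by a fixed-point, or learning, iteration over the entire trajectory of measures.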
With this new velocity field our assumptions are again satisfied, but you still have to use the approximation result, because v is a value function and has to be approximated, and then we also obtain convergence of the scheme. Note that you cannot write this model as a mean field game system: you have the Fokker-Planck equation, but at each time t you solve a different Hamilton-Jacobi-Bellman equation, so it is like a Fokker-Planck equation coupled with an infinite family of Hamilton-Jacobi-Bellman equations, each one corresponding to the optimal control problem where m is frozen at that time. In any case, we prove that this system is well posed, and we also have convergence. And what is important is that, in some sense, it is numerically easier than mean field games: once you discretize in time you have to solve N Hamilton-Jacobi-Bellman equations, so there is a lot to solve, but each step is easier, because the scheme is explicit — I do not look at the future, I only look at my measure now. These are the numerical simulations; I will not explain all the details, and they are in one dimension because it was simpler. Here is the initial distribution of people, who want to reach these doors, and these are snapshots of the evolution: this is time 30 times h, where h is the time step — I do not remember its value — this is an intermediate time, and this is the final time. In this evacuation model people are somehow blind: they go rather quickly towards the doors, and they concentrate — the red parts — near the doors rather quickly. And here are the results with the same data for the mean field game, and you see the difference: you have the same initial population, but at the intermediate time there are people who remain, this peak in the middle, people who wait there; they somehow forecast the behaviour of the crowd and only then do they move, and you get a very different final distribution, the red one. So you really see this rational, anticipating behaviour. I will skip this slide: it shows the actual Hughes model, where you have a stationary Hamilton-Jacobi-Bellman equation, but the problem with that system is that it is not well posed in general, whereas the one we propose is well posed. So, the perspectives. The whole talk was based on one particular Markov chain approximation of the diffusion, and we believe we can do the same with a general Markov chain approximation; then, for a particular problem, you choose the Markov chain you want and you implement it. For the implicit scheme — the mean field game, for example — the discrete system has a solution, but it is not clear how to compute it, because it is not a Banach fixed point, so you cannot simply iterate; in practice we solve the discrete mean field game system with a learning procedure, of the kind people use in game theory.
But, as I said, for the implicit scheme it is not clear how to obtain, for the discrete system, a provably convergent algorithm. Then there is the whole question of boundary conditions, and of course the degenerate case, which we did not consider here, as well as the long-time behaviour. These are the references: the talk I gave today is based on a preprint that is available on arXiv, and the other papers are older ones about mean field games. Thank you for your attention. [Question] Just to check I understood correctly: to solve the mean field game, you first fix a distribution, you plug it into v, you solve for v with a semi-Lagrangian scheme, and then you apply the method that you described before? Exactly. But you could do that for another system of optimality conditions, right? It does not have to be a mean field game; it could be some Fokker-Planck equation coupled with some Hamilton-Jacobi-Bellman equation, and you do the same trick. Like a mean field type control problem, for example? Yes, I think so, of course. I only gave one example, but I think it should work — I have to think about it, but it should. I took a simple example where I have this nice property, so I think it has to be checked case by case, in fact, but you are right, that is the idea. Any other question? If not, let us thank Francisco again.
In this work, we consider the discretization of some nonlinear Fokker-Planck-Kolmogorov equations. The scheme we propose preserves the non-negativity of the solution, conserves the mass and, as the discretization parameters tend to zero, has limit measure-valued trajectories which are shown to solve the equation. This convergence result is proved by assuming only that the coefficients are continuous and satisfy a suitable linear growth property with respect to the space variable. In particular, under these assumptions, we obtain a new proof of existence of solutions for such equations. We apply our results to several examples, including Mean Field Games systems and variations of the Hughes model for pedestrian dynamics.
10.5446/57409 (DOI)
Thank you very much for the organization and for the introduction. I am going to talk about a landscape of results on stochastic variational inequalities that appear in random mechanics. The plan of my talk is the following. I will start with a fundamental problem of elasto-plasticity, which is one-dimensional with noise, and I will explain the model given by engineers in the sixties. Then I will move to the mathematical framework of stochastic variational inequalities proposed by Bensoussan in 2008. After those two points, which are essentially the background, I will explain some results about the notion of long cycles that I proposed in 2012, plus an application to the risk analysis of the system. Then I will move to the Kolmogorov equation, in the backward sense, to compute quantities that we cannot get from the long cycles. For the two last parts I will discuss ongoing research: first what we are doing at CEMRACS, which is essentially problems coming from control where the underlying process is the solution of a stochastic variational inequality, and at the end I will discuss further problems where we also find a variational inequality structure. So let me start with the elasto-plastic problem. It comes from Karnopp and Scharton, 1966. Before I explain the model, let me show you an example of a mechanical structure that can be modelled by such a problem. Here you have a pipeline, and this pipeline sits on a table, and this table mimics an earthquake: the table moves, the structure responds to this excitation, and there are deformations in the structure, some of which are reversible and some irreversible. For engineering purposes it is very important to identify the amount of irreversible deformation, because that is an indicator of fragility. What you have on the left here is the forcing, and what you see on the right is a superposition of the response from the mechanical test, in black, and of the one-dimensional model I am about to explain, in red. Even though this model is one-dimensional, it is still relevant; it is not a toy model. So here is the model. I start with a very simple elastic behaviour: we have a coefficient c_0 for the damping and k for the stiffness, we have an external forcing, which we first take to be white noise in the sense that w is a Wiener process, and x_t is the response of the oscillator. In the linear, elastic case there is a restoring force that is proportional to the total displacement. Now, if we move to the elasto-plastic case — we are discussing in particular the elasto-perfectly-plastic case — the force is no longer linear in the displacement: it is nonlinear, and the nonlinearity comes from a bound. The force remains bounded by k times Y, where Y is called the elastic bound, and the notation Y stands for the yield of the phase transition. If we look at the behaviour of the system, with the total displacement on the abscissa and the force on the ordinate, there is a first time at which the force reaches one of those two bounds, and right after that time the force remains saturated at plus or minus k times Y, and a plastic deformation, an irreversible deformation called delta, appears here.
And this process delta absorbs all the irreversible deformation while the force remains constant. Let me briefly explain delta. If we look at a time before the first time that the force reaches minus k Y on this figure, then we can write delta explicitly: delta_t is the running maximum of the positive part of x_s minus Y. From this expression you see that x_t minus delta_t stays below Y. Then there is an interval of time where the force remains saturated, and when x_t stops increasing, delta_t stops increasing as well, and the force goes back into the elastic phase. Now let me introduce the notation y for the velocity of x, and z for the difference between x and delta. The problem I just described can then be written as follows, with the restoring force equal to k z_t, but with two phases. One is called elastic, when z is strictly smaller than Y in absolute value, and there we have the equation of a linear oscillator. The other is called plastic, when z equals Y with a positive velocity, or z equals minus Y with a negative velocity. What you see here is that the nonlinearity comes from the switching between the two phases, and you cannot anticipate these switchings in advance. Let me illustrate this phase transition. Before the first time the force reaches the bound, you have a linear dynamics, which is the blue curve here. It is important to keep the velocity field in mind: when y is positive, the particle moves from left to right, towards the right boundary; when y is negative, it moves from right to left, towards the left boundary. So let me write down something we will need later: on the boundary, the sign of y is equal to the sign of z. And a similar phenomenon occurs after the first plastic phase: a bit later there is a second plastic phase, and so on. Good, so we have the model in mind. But what do engineers want to compute? They have a very clear idea of what they want: one thing they want is the risk of failure. For a given maturity time capital T and a given threshold B, the risk of failure can be written as the probability that the maximum of the irreversible deformation — the plastic deformation recalled just there — goes above the threshold B. So I gave the explanation of the model and the motivation, and now comes the mathematical framework of stochastic variational inequalities, as proposed by Bensoussan and Turi in 2008; the point is that it is going to be helpful to improve the engineering methods. First, how do we describe the model we have just seen? The noise is given by w, and we use the notation F_t for the filtration generated by this noise. We can then observe that the triple (z, y, delta) satisfies the following four conditions. The first condition says that the pair (z, y) remains in a convex domain of R^2, has continuous trajectories, and is measurable with respect to F_t. The second condition says that y satisfies a stochastic differential equation, while z satisfies a differential equation with a perturbation given by d delta.
And delta satisfies points three and four, which say that delta has continuous trajectories, is measurable with respect to F_t, has bounded variation, and stays constant whenever z is strictly inside the domain. So what we have here is a reflected diffusion, degenerate with respect to the component z, and delta is the process in charge of the reflection. In terms of references we refer to the book of Bensoussan and Lions, 1984, where the framework of stochastic variational inequalities was introduced, but with a purely mathematical motivation. Now let me explain how we obtain this variational inequality structure. We are going to get rid of delta — which is a bit paradoxical, because delta is precisely what we want to understand — and arrive at a formulation of the problem in which delta does not appear explicitly. We use three ingredients. The first is the sign of the velocity on the boundary: on the boundary, the sign of y is equal to the sign of z. The second is a convexity property: the convex set here is the interval from minus Y to Y, and for every phi in this convex set and every z on its boundary, phi minus z times the outward normal vector is non-positive. The last ingredient is just the expression of d delta in terms of y and z. So, if we take an arbitrary phi in the convex set and multiply phi minus z_t by d delta_t, then, doing this manipulation and using property (a), we obtain the blue quantity; this blue quantity, by property (b), is non-positive; and finally, replacing d delta_t by y_t dt minus dz_t, we obtain the variational inequality structure. So now the problem can be written only in terms of z and y: we got rid of delta. We had a triple, and now we just have a pair, and what was shown in 2008 is that the couple (z, y) is Markovian and can be characterized by these two conditions alone. Let me also mention a reference concerning deterministic, infinite-dimensional variational inequalities, to say that it is in some sense not surprising to see this type of structure in this type of problem, because it was known for deterministic problems; for instance, there is a very precise reference for that in the book of Duvaut and Lions. What else did they obtain? They obtained that the couple is an ergodic Markov process: there exists a unique invariant probability measure for (z, y), which we call nu, and for large times the law of (z, y) converges to nu. We are going to decompose this measure into three parts. Here is the domain, with z on the abscissa and y on the ordinate; here is D+, here is D-, and the interior is D. I recall the stochastic variational inequality, and in blue you have the infinitesimal generator associated with the couple satisfying this problem. In D, when z is strictly smaller than Y in absolute value, we have the diffusion in y, the transport in y, and also a transport in z — this is when we are strictly inside the domain. On D+, where z equals Y and the velocity y is positive, we only have the diffusion in y and the transport in y: the particle sits exactly on the boundary and evolves along it, and there is strictly no transport in z.
And similarly on D-, there is only the diffusion on the boundary and the transport in the y direction. So now I move to the notion of long cycles. What I am going to present comes essentially from two references: one with Bensoussan and Yam, and the other one with Feau and Mathieu Laurière. An important quantity, which is helpful on the way to the risk of failure, is the variance of the plastic deformation for large time, and one thing that helps is to look for a repeating pattern. Engineers already had this idea of splitting the trajectory into portions that are uncorrelated, but what is really powerful with the variational inequality is that it lets us split the trajectory properly, and this is the point of the long cycle. Define t_0 as the first time when z reaches one of the two boundaries with zero velocity. Say we start from an arbitrary initial condition (z_0, y_0); the particle evolves and eventually touches one of the two boundaries with zero velocity — here it is the left one — and this is t_0. After t_0, we look at the next time s_0 when the particle reaches the opposite boundary with zero velocity. We record the sign of the first boundary that has been touched, so the boundary touched at s_0 carries the opposite sign. And then we look at the next time after s_0 when the particle comes back to the original boundary, again with zero velocity. The point of doing that is that we have identified a portion of trajectory, between t_0 and t_1, which is called a long cycle, and this type of trajectory can be defined in a recursive way, which gives the n-th long cycle. The key observation is that we obtain an i.i.d. repeating pattern: at the beginning of each long cycle the velocity is zero and the particle always starts from the same position on the boundary — that is the trajectorial part — and the noise is white, so it has independent and stationary increments, which gives the i.i.d. nature of the decomposition. This very simple remark tells us that the growth rate of the variance of the plastic deformation is given by a simple formula: the variance of the plastic deformation over one long cycle divided by the mean duration of a long cycle. And we can go further: we have a functional central limit theorem for the plastic deformation — actually, Francois, this came out of one of your questions in one of the seminars I gave — and what we can say about the risk of failure is the following: if we rescale the interval of time on which we look at this probability and we rescale the threshold accordingly, then, as n goes to infinity, this probability is known explicitly, and it uses the coefficients coming from the long cycle: gamma squared — there should be a square there — is the variance of the plastic deformation over a long cycle, and mu is the frequency of the long cycles. So asymptotically we have an explicit formula for the risk of failure when n is large; a small simulation sketch of the oscillator and of this variance growth rate is given below.
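Before going into the proof, here is the small Monte Carlo illustration announced above: an Euler-type time stepping of the elasto-perfectly-plastic oscillator written directly in the variables (y, z, Delta), followed by a crude estimate of the growth rate of the variance of the plastic deformation. The parameter values, the way the constraint |z| <= Y is enforced (projection of z onto [-Y, Y], with the excess displacement accumulated in Delta), and the estimator used (the empirical variance of Delta_T divided by T, rather than the long-cycle formula itself) are all choices made for this sketch.

```python
import numpy as np

def epp_oscillator_path(T, dt, c0=1.0, k=1.0, Y=1.0, rng=None):
    """One path of the elasto-perfectly-plastic oscillator
        dy = -(c0*y + k*z) dt + dw,   dz = y dt  while |z| < Y,
    with z projected back onto [-Y, Y] at each step and the excess displacement
    accumulated in the plastic deformation Delta (so x = z + Delta)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(T / dt)
    y, z, delta = 0.0, 0.0, 0.0
    deltas = np.empty(n)
    for i in range(n):
        y += -(c0 * y + k * z) * dt + np.sqrt(dt) * rng.standard_normal()
        z += y * dt
        if z > Y:          # plastic phase on the upper boundary
            delta += z - Y
            z = Y
        elif z < -Y:       # plastic phase on the lower boundary
            delta += z + Y
            z = -Y
        deltas[i] = delta
    return deltas

def variance_growth_rate(n_paths=100, T=100.0, dt=1e-3, **params):
    """Crude Monte Carlo estimate of Var(Delta_T)/T for T large, the quantity
    that the long-cycle formula identifies as
    Var(plastic deformation over one cycle) / E[duration of one cycle]."""
    finals = [epp_oscillator_path(T, dt, **params)[-1] for _ in range(n_paths)]
    return np.var(finals) / T

if __name__ == "__main__":
    print("estimated Var(Delta_T)/T:",
          variance_growth_rate(n_paths=50, T=50.0, dt=1e-2))
```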
So where does this formula come from? We look at the plastic deformation up to a time tau, that is, the integral between 0 and tau, and we split this integral according to the number of long cycles completed up to time tau: we get a first sum over the completed cycles, plus a remainder term, which is just the integral of y on the boundary over the long cycle that has not yet been completed. So Delta_tau is the sum of hat-Delta_tau plus epsilon_tau. The increments Delta_k over the cycles are all i.i.d., they have mean zero, and their variance is gamma squared. Luckily, there exists a result in the literature, an extension of Donsker's invariance principle called Anscombe's theorem — the Anscombe-Donsker principle — which tells us the following: with the Delta_j being those increments and N_t the number of long cycles up to time t, we can rescale the process hat-Delta into hat-Delta^n as indicated, and the Anscombe-Donsker principle tells us that the law of hat-Delta^n converges to that of a Brownian motion. This is interesting because the Anscombe-Donsker principle does not require the number of long cycles up to time t to be independent of the Delta_k. Then, using the fact that the remainder term goes to zero as n goes to infinity, and doing a bit of algebra, we have been able to identify explicitly the limit of this probability for n large. Now it would be interesting to discuss with engineers how this can be helpful in practice, but that is a first step. So what we had just before is a result for large time. But how do we proceed if we want to compute something at a finite time? For instance, the long cycles say nothing about the variance of the plastic deformation at a fixed time. One natural thing to do is to look at the backward Kolmogorov equation, because these quantities can be translated into it, and the typical quantities we have in mind are the probability of being in a plastic state at a given time, or the variance of the plastic deformation or of the total deformation. What I am going to present next, I will do in a very simple framework: I will assume that the coefficient is zero and that the plastic bound is one. Define gamma as follows, where f and g are nice functions; then the function u, defined as the expectation of gamma, should, at least formally, satisfy this problem: this equation holds inside D and, on the boundary, we have the blue equation. But then, using u, we can also write the PDE satisfied by v, the variance of gamma: v satisfies the same problem, except that on the right-hand side, instead of g, we have the square of the derivative of u with respect to y, and the terminal condition is zero. Where does this come from? It comes from Ito's lemma plus the Ito isometry: using the Ito isometry, the right-hand side that we have here gets a natural interpretation in terms of v. It is very simple, but I seldom see this type of trick in the literature. So how do we solve this problem? We proceed with a simple implicit Euler method. We use the notation u^n_{ij} for the approximation of u at the point (x_i, y_j) of the discretized domain at time t_n, and the problem is solved forward in time, so we have an initial condition written in terms of the function f.
We then use a very natural approximation of the differential operator at the black points. At the red points — sorry, I had to rotate the domain because of space constraints on the slide, so what you have here is D+ and here D- — the situation is exactly the same as in the generator: on D+ there is no transport in z at the red points, so we only have to discretize this reduced operator. And at the grey points it is just a truncation of the domain, so we simply put a Neumann condition there. So let me show you a comparison between Monte Carlo and this finite difference scheme. If we look at the probability of being on the plastic boundary versus time, we obtain, using only basic techniques, very good results; in this first case we have taken f to be the indicator function of being on the boundary, and g equal to zero. In the other figure we have computed the variance of the total deformation, again comparing the PDE technique with the Monte Carlo method, and we have chosen f equal to zero and g equal to y. So with this we are able to compute statistics at finite time for these elasto-plastic oscillators. Now I move to what we are doing at CEMRACS. In the team we have several people working on this, and we are working on the Hamilton-Jacobi-Bellman equation and on a free boundary value problem that are related to optimal stopping and stochastic control where the underlying process satisfies the variational inequality. The optimal stopping problem is as follows: I use again the notation u, and we look at the supremum, over all stopping times between small t and capital T, of a functional which is almost the same as gamma, except that the stopping time appears as an argument here and here. At least formally, this function must satisfy a free boundary value problem, again with exactly the same operator as before inside the domain and the same operator as before on the boundary. Typically we are targeting quantities of this type, which tell us how to estimate the probability of having been through the plastic behaviour during the interval of time between small t and capital T. We also have a stochastic control problem: if we now change the noise and assume that there is a component of the noise that controls the system, we would like to understand what type of control maximizes this type of quantity. Again, it is exactly the same operator structure, except that there is one term in which the control appears. The quantities we are targeting are, for instance — it is just an example — what type of control would maximize the time spent on the boundary. We are doing the numerics; typically what we do is a combination of the Howard algorithm with the numerical approach I described before, and the algorithm we are using is the one presented in the paper of Bokanowski, Maroso and Zidani on Howard's algorithm. So that is it for the CEMRACS project; I will not discuss it more for the time being. I will finish with ongoing research, and in this part I will discuss what we can do beyond the long-cycle framework, and what other types of mechanical behaviour could be interesting to consider.
And, quite surprisingly, I will also discuss another problem, which seems to have no connection with what I presented before, but which also has a variational inequality structure. So, what lies beyond the long cycle? Is it possible to generalize the notion of long cycle? If we remain in the framework of the elasto-perfectly-plastic oscillator, then as long as we keep a noise with stationary and independent increments, the long-cycle property can apparently still be obtained. But if we want to study the problem with a colored noise — take for instance an Ornstein-Uhlenbeck process as forcing — then the long-cycle approach does not seem to be feasible, because you get a third dimension in the problem: you need to take the noise itself as a state variable of the system, and moreover the integral of the noise does not have stationary increments, so it is not clear how to identify a repeating pattern for this type of problem. There is an alternative, which at this moment is just a conjecture: if we want the asymptotic growth rate of the variance of the plastic deformation for the colored-noise problem, what we could do is combine the solution of a Poisson equation with the invariant measure, in this blue expression. I have done some numerical tests and it works very well. The idea behind it is to use the central limit theorem for martingales, but I will not go into the details; what I want to say is that this is one way to go beyond the notion of long cycle. Here is another mechanical behaviour that could also be interesting to study: now the velocity term makes a part of the total displacement appear explicitly, which we did not have before. When alpha is equal to zero we recover the model we saw before, and the important difference with the previous model is that x is now also ergodic, which it was not before; so this is an interesting model to study. And I want to finish with an experiment. I met this group at Courant — Jun Zhang, the professor there, and his PhD student Mark Huang — and Mark introduced me to this experiment and gave a seminar about it. What we have here is a water tank with a moving plate at the surface of the water, and below there is a heater; the heated fluid tends to rise to the surface at a given point y_t — this is the Rayleigh-Benard instability. We use the notation y_t for the location of this upwelling flow, and the notation x_t for the centre of the plate. In their experiment they studied the motion of the plate, looking in particular at the impact of the size of the plate on the type of motion obtained, and they observed that there are two types of states. One is called the oscillatory state, shown in the first figure — time on the abscissa, displacement of the plate on the ordinate — where the plate sometimes sticks to one wall and then suddenly moves to the other wall; this is the case when the size of the plate is smaller than a critical value. The other is called the trapped state, when the size of the plate is larger than its critical value d_c, and this is what we have at the bottom.
And in the trapped state the plate never touches the wall. These figures come from the experimental data. They then proposed a model to explain why they observe this phenomenon, and they came up with a piecewise smooth system. Away from the interaction with the walls, and using relevant physical approximations, they were able to capture the physics in a one-dimensional model: a differential equation for x, where U is a potential, and a differential equation for y. The point is that, if you look at the potential, all the information about the two states is contained in the function alpha, which tells you when this potential is convex or concave: in the oscillatory state the potential is concave, while in the trapped state the potential is convex, and this function explains why there are these two states. But they also describe the interaction with the walls, and I quote the paper: when the floating boundary arrives at the side boundary x_t equals plus or minus l, it is said to be at rest at the side wall, x-dot equals 0, and the floating boundary remains immobile until the net force from the flow switches direction and starts to drive the floating boundary away from the side wall. So I looked at that, and how do I interpret this passage? What we have here is a balance of momentum in which inertia has been neglected: the sum of three quantities A, B and C is zero, and we obtain that the variation of the displacement of the floating boundary is equal to minus the variation of momentum of the forces from the flow applied to the boundary, minus a term which is the variation of momentum of the forces from the wall. And this last term behaves exactly like the plastic deformation we discussed at the beginning. So the problem can now be reformulated in terms of a variational inequality, and on top of that we can also add random fluctuation effects to it. This is the main observation coming from the experiment. So the point here is that there are problems that have a variational inequality structure but do not all come from the same origin. That is all for now, thank you very much. Is there any question? [Question] At some point you want to compute the variance of the displacement, and if I understand correctly you represent this variance through a PDE whose right-hand side involves the square of the gradient of the first PDE's solution — that is where the integral of the square of the gradient comes from. So you first discretize the PDE, and then you run a Monte Carlo simulation to compute the expectation, is that correct? No: here we are in a purely PDE framework. We solve first u, and then from u we solve v, so it is a two-step approach, and then we compare the result with a more straightforward Monte Carlo simulation. That is what we did. Okay, I see — so you reinterpret the expectation directly as the solution of a PDE. Any more questions? Another question: what is the magnitude of the displacements you are interested in? At some point you compute the probability that the maximum displacement is greater than some threshold: what are the typical values there, and what is the typical value for that probability — is it one percent, or...? I think I cannot give you an answer.
I should discuss with Cyril about this type of problem, because so far I do not know how to apply this result in practice; I have not discussed the parameters exactly, and I do not know exactly what they need — I would not have raised this problem if I did not have something to show them. Now I have this method, but how do I employ it? I need some help from them, so I cannot really tell. [A remark from the audience follows, comparing the PDE estimate of this probability with what one would obtain by direct simulation and noting that it depends on the range of probabilities considered; the recording is partly inaudible here.] I was a bit hesitant about asking this question, but let me ask it: in the second equation that you have on the board, what do you mean by dz minus y dt times something greater than...? Actually dz minus y dt, this quantity, has bounded variation, so the integral against it can be understood as a Stieltjes integral. Oh, okay, so you actually write it as a Stieltjes integral formulation, I see — so that is really the point, and that is exactly what I was unsure about. Any more questions? Thank you very much.
The mathematical framework of variational inequalities is a powerful tool to model problems arising in mechanics such as elasto-plasticity where the physical laws change when some state variables reach a certain threshold [1]. Somehow, it is not surprising that the models used in the literature for the hysteresis effect of non-linear elasto-plastic oscillators submitted to random vibrations [2] are equivalent to (finite dimensional) stochastic variational inequalities (SVIs) [3]. This presentation concerns (a) cycle properties of a SVI modeling an elasto-perfectly-plastic oscillator excited by a white noise together with an application to the risk of failure [4,5]. (b) a set of Backward Kolmogorov equations for computing means, moments and correlation [6]. (c) free boundary value problems and HJB equations for the control of SVIs. For engineering applications, it is related to the problem of critical excitation [7]. This point concerns what we are doing during the CEMRACS research project. (d) (if time permits) on-going research on the modeling of a moving plate on turbulent convection [8]. This is a mixture of joint works and / or discussions with, amongst others, A. Bensoussan, L. Borsoi, C. Feau, M. Huang, M. Laurière, G. Stadler, J. Wylie, J. Zhang and J.Q. Zhong.
10.5446/57369 (DOI)
So first I would like to thank the organizer for the organization of this beginning SamRacks and for all of this. So I went to SamRacks for the first time nearly 20 years ago and I was there and I think I prefer to be in the audience than there today. So I will try to give you some, so I will speak of some fleet structure interaction problems and I will try to give you some insights of what are the mathematical and numerical difficulties linked to this kind of problem and what are also the link between the mathematical analysis, the proof of existence of weak of strong solution and the design of numerical scheme accurate stable numerical scheme. So my talk will be in two parts. The first one will be on the mathematical aspect for this kind of problem and the second one will be on the numerical aspect and I will try to show you that mathematical aspect can influence the numerical one and respectively the numerical one can give some hints on how to prove existence of solution. So what are the motivations for studying this kind of problem? So you have a lot of physical phenomena where a fleet structure interaction appears. So where you have a fleet interacting with a structure and for instance you have the wind around an aircraft. So those are simulations by Serge Pipernault. So here those are in aerodynamics. So you have the wind that interacts with the wing for instance. You may have also more complicated problems where you have the air, the water and the boat and the sail. So those are were done by the team of Alphio Quartieroni and this was I think for America's Cup and they win America's Cup because of this beautiful numerical simulation. I don't know if it was the case but. And those are from bio-mechanical applications. Here you have the blood in a cardiac-artic valve. So I am more interested in bio-fluid and physiological flows. So I will focus mostly on those kind of applications. So you already saw this image in the presentation of Alberto. So this is a cut of the outer and you see the displacement field of the outer wall. So you start from a physical problem. You model it so you have to know what are the relevant parameters, what are the relevant parameters for the purpose you are focusing on. So once you have your model, so those can be PDEs, ODEs, coupled PDEs, non-linear ones and stuff like that. As we are some mathematicians, we study this problem from the mathematical point of view. This gives us the frame in which we can perform numerical analysis and numerical simulation. So the second step will be the mathematical and numerical analysis of the problem in order to perform numerical schemes. So this is a numerical simulation on a really, really simple case but if it doesn't work on this really, really simple case, it will not work on the more complex case. Here patient-specific geometry and blood flow and stuff like that. So you have to be sure that your numerical algorithm is stable, accurate in order to go from your model to numerical simulation and to compare it with experimental data, for instance. And to make a loop, if this model is not accurate enough, then you have to go back to your model equation in order to enrich or change the model and change the parameter and stuff like that. So I will focus. So we had some insight from the lectures of Alberto on this part and this one. And I will mostly focus on this one. So and I will focus on this one by presenting some really, really specific case. I will not cover all the free structure and direction problems you may find. 
But I will focus on a specific one, a simple one, or not some simple, but in order to give you some idea on this kind of problem. So here are the assumptions we will make. We will consider that the free is Newtonian. So as we saw in Alberto's lectures, this is not really true for the blood, in particular in small vessels and stuff like that. So nevertheless, we will consider that the blood is a Newtonian flow. So that is viscous, incompressible, and so we will consider the Navier-Stokes system to describe the velocity field and velocity pressure. For the structure, we have an elastic media in large displacement because as you saw in the, in this movie, the displacement of the arterial wall is rather large. So we will consider large displacement. So we can consider a thick wall, a model by the 3D elasticity, or reduced model in order to model the evolution of the displacement of the wall, such as shell or plate models and stuff like that. And these two equations, so the fluid equation and the structure equation, are coupled at the interface between the fluid and the structure. And this coupling has two, takes two forms. First we have the equality of the velocity at the interface. Since we have assumed that the fluid is viscous, it sticks to the boundary. So you have equality of the velocity and you have also the action-reaction principle that states that the normal component of the stress tensor are equal at the interface, so that the force applied by the fluid on the structure is equal to the force applied by the structure on the fluid. And from those two coupling conditions, what we will see is that we have an energy balance at the interface, so that the power of the fluid at the interface is equal to the power of the structure at the interface. And it will be a key issue from the mathematical point of view as well as from the numerical point of view. Because from the numerical point of view, what we, a good scheme, a good scheme will preserve this energy balance at the interface. Shall preserve. So for the fluid part, here you have the Navier-Stokes system. So u is the fluid velocity, rho f is the fluid density, we will assume rho f to be a constant. Nu is the viscosity and p is the pressure. So you have the Navier-Stokes equation, which is nonlinear because of this convection term. U is divergence free because we have assumed that the fluid is incompressible. And those equations are set in a domain omega eta of t. So the configuration we will consider in the lecture is the following. So we will consider a really simple case where you have a 2D fluid, so omega, so here you have the fluid, here you have a rigid boundary, here you have one inlet and one outlet. And here you have an elastic structure that can move in the transverse direction or in the longitudinal one. So here, I will call this, I don't remember. So here is a stigma, the interface between the fluid and the structure. So you have a deformation. So you assume that you have some external forces applied to this system or that you have some fluid entering gamma in and coming out there. And so the structure will, you have a displacement of the structure and it will define a new domain, which is the deformed domain, omega eta of t. So here in this one, I consider that longitudinal displacement is equal to zero. You have only a vertical displacement of the elastic boundary. So the fluid equations are set in this deformed domain. Since the fluid equations are set in Eulerian coordinates. 
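Written out, the fluid system and the two coupling conditions described here take, schematically, the following form; the body force is omitted and sigma denotes the Cauchy stress tensor, with a usual convention for the viscous part (the exact constants are a convention choice, not taken from the slides).
\[
\begin{cases}
\rho_f\bigl(\partial_t u + (u\cdot\nabla)u\bigr) - \operatorname{div}\sigma(u,p) = 0 & \text{in } \Omega_\eta(t),\\[2pt]
\operatorname{div}u = 0 & \text{in } \Omega_\eta(t),
\end{cases}
\qquad
\sigma(u,p) = 2\nu\,D(u) - p\,\mathrm{Id}, \quad D(u) = \tfrac12\bigl(\nabla u + \nabla u^{T}\bigr),
\]
together with, on the moving interface, the continuity of the velocities (the viscous fluid sticks to the structure) and the action-reaction principle, i.e. the equality of the normal components of the fluid and structure stress tensors.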
And so here you have already a coupling between the fluid and the structure because the domain depend on the displacement of the structure and so may depend on time when you have an unsteady problem like this one. But even if you have a steady state problem, then you will skip the time dependency there but you will still have the dependency of the domain with respect to the displacement of the structure. So those are the equations we have for the fluid. You have, so here is gamma zero, the bottom of the cavity. And for the structure, I choose to consider a beam equation. So eta is a transverse displacement of the structure and it is done not to set the structure equation in the reference domain. So what we are describing is the displacement of each physical point of the reference configuration. So here the reference configuration is only, so this is a square, zero one, no. So omega f, zero one. So this is the reference configuration of the fluid and the reference, the equation of the structure are set in this reference domain. So eta is the transverse displacement, for rest is the density of the fluid and here you have the mechanical part of the beam equation. So here you have a second order operator because this is a beam equation and here I add some damping terms and there will be epful from the mathematical point of view as we will see afterwards. So what are the coupling conditions between those two equations? So first what we can say is that we have two different kind of equations. The first one is a parabolic type equation and the second one if I skip these those viscoster, additional viscoster are some hyperbolic equation. This one, though, these one are set in a non-domain depending on the displacement and these one are set in given domain, reference domain. So what are the coupling conditions between those two systems? The first coupling condition which I will call the geometrical nonlinearity comes from the fact that the free domain depends on the displacement of the structure. So here I can write it really easily because I assume that we have only a transverse displacement for the structure. So the free domain is easily described here and as I said before, since the fluid is viscous, it sticks to the boundary and we have the equality of the velocity at the interface. But here the fluid unknown live in the deformed domain and the structure unknown live in the reference domain. So to write this equality of the velocity, we have to map the fluid velocity onto the reference configuration in order to write the equality of the velocity in the reference configuration. So this equality of the velocity is a nonlinear relation between U and eta. So you have several nonlinearities in this system. And for the force, so here you don't have exactly the equality of the normal component of the stress tensor at the interface, but since we have considered a reduced model for the elastic part, the force applied by the fluid on the structure appears in the right hand side of the structure equation. So Tf is the force applied by the fluid on the structure. And this force can be defined in this weak way. So here you have the stress tensor of the fluid, you have U the velocity of the fluid, P the pressure of the fluid. And so this equality in fact states that the balance of the work at the interface between the transfer of energy and the interface between the fluid and the structure is well balanced. So but you can also define Tf in a strong way. 
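As a sketch of the structure equation and of the two couplings just described: the coefficients a, b, c below are generic non-negative constants of my own notation (c multiplies the additional viscous term), rho_s denotes the structure density, and the weak definition of the fluid load T_f is written for a fluid test function v whose interface trace matches the structure test function b hat.
\[
\hat\rho_s\,\partial_{tt}\eta \;+\; a\,\partial_x^4\eta \;-\; b\,\partial_x^2\eta \;-\; c\,\partial_x^2\partial_t\eta \;=\; T_f
\quad\text{in } (0,1),
\qquad
u\bigl(t,x,1+\eta(t,x)\bigr) = \bigl(0,\ \partial_t\eta(t,x)\bigr),
\]
\[
\int_0^1 T_f(t,x)\,\hat b(x)\,dx \;=\; \int_{\Gamma_\eta(t)} \sigma(u,p)\,n\cdot v\,d\Gamma
\quad\text{for all } v \text{ such that } v\bigl(t,x,1+\eta(t,x)\bigr) = \bigl(0,\hat b(x)\bigr),
\]
where n is the unit normal to the deformed interface, proportional to $(-\partial_x\eta,\,1)$; this is the normal that appears when T_f is written in strong form in what follows.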
What you have to do is to map this quantity into the reference configuration. And so you will see appearing in this quantity, so N of T is the normal of the reference configuration. So here you have Nt. And this normal will be equal to minus dx eta 1 over 1 plus dx eta. So here you have the full unsteady couple problem, which is really nonlinear. So you have the geometrical nonlinearity, you have the equality of the velocity at the interface, you have this coupling, the force applied by the fluid on the structure, and you have I will say parabolic-hyperbole coupling. Yes. And the last question is force applied. Yes. And the second definition of D as definition of U. Here this is the symmetrized thonsaw. Thank you. And if you see the definition of D, it's one half of the gradient of U plus the transpose of gradient of U. You, it is a linear relation. It's not for large deformation. So if you come back one slide. Yeah, U is the velocity of the fluid. So this is not linked to the structure. Yes. And here, yeah, you are right. Here I took a linear relation, linear behavior for the elastic structure. So this is a first step. This is not, I will say from the modeling point of view, this is not a right choice, because as I said before, we consider large displacements. So we shall consider nonlinear behavior of the elastic structure. But for the time being, I consider only linear equation for the structure for the mathematical analysis and to simplify the presentation. But from the numerical point of view, to have some nonlinearity here, will not be the issue of the presentation. So we assume that we cancel the fluid part and the structure part, even if the structure part is nonlinear. But I will mostly focus on the coupling. But you are right. The structure is linear and if we consider large displacements, we shall consider nonlinear behavior of the structure. But here, the large displacements are, in fact, in the dependency of the fluid domain with respect to the displacement of the structure. So this domain is unknown. We take into account the large displacement in this model, in the dependency. So we say that the displacements are so large that we cannot neglect them in the fluid equations there. So you have a few nonlinearities. So Navier-Stokes nonlinearity, geometrical nonlinearity, this coupling between the fluid and the structure, which is nonlinear, and the equality of the action-reaction principle. They have some questions on this model. So I will work mostly on this model. So I will consider the steady case, the unsteady case from the mathematical point of view. And next, I will also consider this kind of model to present you some numerical schemes in order to discretize in a stable and accurate way this kind of problem. So yesterday, Alberto told us a little bit about boundary conditions. So here you have an inlet and here you have an outlet. So you can assume first from the mathematical point of view that you have a non-close cavity, so that the fluid velocity is equal to zero all over the boundary. So from the physiological point of view, this is not really what you want to do if you consider a blood flow in arteries, for instance. So we can also impose the reclaimed boundary condition at the inlet if we have measurement of the velocity at some part of the domain. And we can also consider Neumann boundary conditions at the inlet or at the outlet, so the flow will be driven, if you consider Neumann boundary condition, the flow will be driven by the pressure jump. 
And we can also assume that we have all the kind of boundary conditions. So here's those mixed Neumann-Derekley boundary conditions where you impose the fluid velocity to be normal at the inlet and at the outlet. So and as Alberto said yesterday, from the modeling point of view, those kind of boundary conditions are not really representative of the physiology. So often you have to couple the fluid structure interaction problem with reduced models such as 0D or 1D model. But if you do so, so if you have the 3D model coupled to the 0D or 1D model, the coolant coupling, when considering the coupling of those two models, you have some Neumann-Bondari condition appearing. So the coupling is made through Neumann-Bondari conditions. Because for instance, if you have a 0D, yes, so I will maybe explain that. So here if you put some 0D models, the 0D model, the unknown are the flux at the interface, so it will be this quantity and the average pressure at the interface. So two scalar quantities. So those are the unknown of the 0D model or some 1D model. And so what you will, how you will couple this 3D model and those 0D models is through Neumann-Bondari conditions. You will apply this pressure to your fluid equations. So you will see some Neumann-Bondari conditions appearing. So studying the Neumann-Bondari condition makes sense even in the whole setting 3D plus 0D or 1D. So the first question is, do we have some energy estimates for this kind of problem? Can we derive at least formally some energy estimates? So what is the answer? Yes? Okay. So maybe I will. So can we derive energy estimates? So maybe I will go back to... So in order to derive energy estimates for this kind of problem, what we shall do? We multiply the Navier-Stokes equation, so the conservation of momentum, the first equation by the velocity u. So Navier-Stokes equation multiplied by u, integrate over the domain. So it gives us... We will take rho equal 1. So you have the acceleration of the fluid multiplied by u plus the convection term. So after integration by part, the viscous terms gives you this quantity. So d of u is a symmetrized gradient of the velocity. Next, you integrate by part the gradient of p minus p divergence of u. This term will be 0 because u is divergent free. And you also have some boundary term coming from the space integration by part, and they write minus t equal the external force. So the first remark here is that... So here we have the dissipation of the free. Here what you see appearing is exactly the term... This term will be equal... So to simplify, I take u equals 0 on gamma in, gamma out, and gamma 0. So this term reduced only to the term on the interface. And this term is exactly equal to the definition of the force applied by the fluid on the structure. So in the weak way. Why is that? So here what you have is... Exactly the integral over the reference configuration of the structure of this term, which is the force applied by the fluid on the structure, multiplied by the structure velocity. And here I use u of tx1 plus eta of tx equal 0 eta t of xt. What I shall say maybe is that in this definition here, we have the test function v. And we have mapped the test function v that lives in the deformed configuration into the reference configuration. Once again we have the difference between the fluid and the structure on this difference. So you have... The fluid is written in Eulerian coordinates and whereas the structure is written in Lagrangian coordinates. 
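In formulas, the first step of the energy estimate and the interface identity described in this passage read, schematically (with rho_f = 1 as above and f a generic external force):
\[
\int_{\Omega_\eta(t)} \partial_t u\cdot u
\;+\; \int_{\Omega_\eta(t)} (u\cdot\nabla)u\cdot u
\;+\; 2\nu\int_{\Omega_\eta(t)} |D(u)|^2
\;-\; \int_{\partial\Omega_\eta(t)} \sigma(u,p)\,n\cdot u
\;=\; \int_{\Omega_\eta(t)} f\cdot u ,
\]
and, when $u = 0$ on $\Gamma_{in}\cup\Gamma_{out}\cup\Gamma_0$, the remaining boundary term is exactly the power exchanged with the structure,
\[
\int_{\Gamma_\eta(t)} \sigma(u,p)\,n\cdot u \;=\; \int_0^1 T_f\,\partial_t\eta\,dx ,
\]
thanks to $u(t,x,1+\eta(t,x)) = (0,\partial_t\eta(t,x))$ and to the weak definition of $T_f$.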
So you have to map either the structure in the deformed configuration or the fluid in the Lagrangian... The reference configuration. So here you use the equality of the velocity at the interface and the definition of the force applied by the fluid on the structure. So this term will exactly compensate the term coming from the structure part if I multiply the structure equation by the structure velocity. So we will see that right after. But here what you have... So usually when performing energy estimates... If you don't have this dependency of the domain with respect to the displacement of the structure, then this term... So here you make... So maybe I will write it down there. So usually you don't have this dependency on time and on the displacement. And what we write is that this term is exactly the time derivative of the kinetic energy of the fluid. But here we can do this because of the time dependency of the domain. So here if we want to make the kinetic energy of the fluid appearing, then we have to do something. But remember that we have the Navier-Stokes equation. And in the Navier-Stokes equation we have this convection term. And this convection term is exactly... it is obtained from the modeling point of view. It comes from the fact that we are following time-dependent domains. So here u is the velocity of the domain because of the equality of the velocity at the interface. So this term is exactly equal to... I will write it like that. The time derivative of the fluid kinetic energy. So it comes from the... following property that states that the time derivative of the integral of omega t of quantity k is equal to the integral of omega of the partial derivative with respect to t of k plus a boundary term that writes k w n w u the velocity. So to obtain this, I use this transport theorem. And in a way this is quite straightforward to obtain the derivative of the kinetic energy of the fluid because the convective term in the Navier-Stokes equation comes precisely from the derivation of the domain of the fluid domain. So maybe I... Do I need to prove this equality? Yes, no, no, yes. I want to prove this question because I actually think you will. No, about that. Do I? So maybe I should give an explanation before. This term can be written thanks to an integration by part like this. Is it clear for everyone? So to make the link between this property and this equality, you have to write this term thanks to an integration by part. Why you have that? So this term is equal. If I use the Einstein convention on the repeated indices, you have this equality. And since you... So it writes like this. And since you is divergence free, you can write it like this. So here it's because divergence of u equals 0. And so you can perform an integration by part to obtain the boundary term. So this boundary term represents the flux of kinetic energy on the boundary. So here to use this transport theorem, I use the fact that the free domain moves at the free velocity because of the equality of the velocity at the interface. So how to prove this property? So proof of the transport theorem. So this quantity will be equal to this one where the mapping fee is the deformation mapping that maps reference configuration on 2. So this is... It depends on time. And this is the flow associated to the velocity w. So it's satisfied. It can be chosen as the flow, at least at the interface. It's enough to have that only at the interface, not necessarily in the whole domain. 
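Schematically, the two ingredients used in this computation are the transport (Reynolds) theorem and the integration by parts of the convective term; here w denotes any velocity field whose normal trace matches the velocity of the moving boundary.
\[
\frac{d}{dt}\int_{\Omega(t)} k
\;=\; \int_{\Omega(t)} \partial_t k \;+\; \int_{\partial\Omega(t)} k\,(w\cdot n),
\qquad
\int_{\Omega(t)} (u\cdot\nabla)u\cdot u
\;=\; \int_{\partial\Omega(t)} \frac{|u|^2}{2}\,(u\cdot n)
\quad (\operatorname{div}u = 0),
\]
so that, as long as $u\cdot n = w\cdot n$ on the whole boundary (which is the case here, the fluid sticking to the structure and vanishing on the fixed parts),
\[
\int_{\Omega_\eta(t)} \partial_t u\cdot u \;+\; \int_{\Omega_\eta(t)} (u\cdot\nabla)u\cdot u
\;=\; \frac{d}{dt}\int_{\Omega_\eta(t)} \frac{|u|^2}{2}.
\]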
And it will be used from the numerical point of view. So you write that like that. And so after you take the time derivative, you take the time derivative, you use the chain rule, and you use the fact that the time derivative of the determinant of the gradient of the deformation is equal to the divergence of the velocity w composed with the deformation times the Jacobian of the deformation. So the determinant of the gradient of the... And you have all the ingredients to prove this transport theorem. I have a question. You have only... Here you need only to have u dot n equal to the displacement of the velocity. So you only need to have the normal component of the velocity to be equal to the velocity at the interface. You should have that the u dot n is equal to... U dot n is equal to the structure velocity at the interface. So but there are some recent results I think considering... Sleep, sleep, Navier-Bondary conditions. So here we can obtain energy bonds. So but we will obtain them assuming that we have either an enclosed cavity, so u equals zero on the boundary which is not the free structure interface, or periodic boundary conditions at the inlet and at the outlet, or modified Neumann-Bondary conditions at the inlet and at the outlet. Why considering Neumann-Bondary conditions? So I skip the structure part, no? Yes. So I didn't finish the energy bond. So for the structure part... So now for the fluid part what you have? You have the time derivative of the fluid kinetic energy, the dissipation of the fluid. Plus the boundary term corresponding to the force applied by the fluid on the structure. So the work of the external force. So here I assume that u equals zero on gamma in, gamma out, and gamma zero. If we consider Neumann-Bondary conditions, then what you have here? You have some additional term which write like that. So we have two additional terms at the boundary which are the flux of kinetic energy at the inlet and at the outlet if we consider Neumann-Bondary conditions. Those two terms come from this term. Here assuming that u equals zero on the boundary, the only term that is left is the term on the fluid structure interface. But if you don't consider u equals zero on the boundary which is not the fluid structure interface, then you have two extra terms and those extra terms represent the energy, the kinetic energy entering the system. And those are quadratic terms. You don't know their sign. So you have some energy that is coming into the system. And so if you consider Neumann-Bondary conditions, you will not be able to have energy estimates because of this energy entering the system. And from the theoretical point of view, you may have some trouble to prove existence of weak solutions for instance if you don't have energy estimates. And from the numerical point of view, to have energy entering the system that you are not able to control may lead to some difficulties. But in particular, you may have some stability issues coming from those Neumann-Bondary conditions at the internet and at the outlet. So for the structure, what is? So you have the structure equation. You multiply it by d theta and you integrate over the reference configuration of the structure. And what you obtain here, you don't have any issue about the time-dependent domain and stuff like that. 
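Before the structure part, a compact record of the two facts used just above (a sketch; J denotes the Jacobian of the deformation and w the domain velocity):
\[
\int_{\Omega(t)} k(t,x)\,dx = \int_{\hat\Omega} k\bigl(t,\phi(t,\hat x)\bigr)\,J(t,\hat x)\,d\hat x,
\qquad
J = \det\nabla\phi,
\qquad
\partial_t J = \bigl(\operatorname{div}w\bigr)\!\circ\phi \;\, J ,
\]
and, when Neumann conditions are imposed at the inlet and at the outlet, the two extra boundary terms that spoil the estimate are the kinetic-energy fluxes
\[
\int_{\Gamma_{in}} \frac{|u|^2}{2}\,(u\cdot n)
\;+\;
\int_{\Gamma_{out}} \frac{|u|^2}{2}\,(u\cdot n),
\]
whose sign is not controlled.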
So what you obtain is the time derivative of the structure kinetic energy, plus the time derivative of the mechanical energy of the structure, equal to the work of the applied forces plus the work of the forces applied by the fluid on the structure. And this term will exactly compensate the term coming from the fluid equation. So by adding the energy balance of the fluid and the energy balance of the structure, those two terms will cancel each other. And so what you will obtain is the time derivative of the fluid kinetic energy, the time derivative of the structure kinetic energy, the dissipation of the fluid, plus the time derivative of the elastic energy. So I skip the dissipation term in the structure part. So here I take mu equal to 0. And so if u equals 0 all over the boundary which is not the fluid-structure interface, then you will have an energy balance. So from the mathematical point of view, to have an energy balance may enable us to prove existence of weak solutions, for instance. And from the numerical point of view, we are sure that we have some boundedness of the kinetic energy, of the dissipation of the fluid and stuff like that. And so from the numerical point of view, we would like to recover this energy balance. And this energy balance comes mostly from the fact that this term is equal to this one. So from the numerical point of view, one key point will be to ensure that this equality is still satisfied at the discrete level, or, if it is not satisfied, to ensure that you don't have spurious energy coming into the system that makes the system unstable. So that will be a key issue, to have an energy balance at the interface. But when we consider Neumann boundary conditions, then we have these two additional terms that do not allow us to obtain energy estimates. And from the numerical point of view, it may be a big issue. And in most of the numerical simulations I know for blood flow, where you have those artificial boundaries, because you are cutting, you are only considering one part of the system, and you have some blood entering the system, so some external energy coming into the system, you have to stabilize the energy coming into the system in order to obtain a stable scheme. And it comes really from the energy equality of the continuous coupled system. So one way to avoid having those two terms, if we consider Neumann boundary conditions, is to modify the pressure at the inlet and at the outlet and to consider the total pressure. So to consider the pressure plus the kinetic energy of the fluid. And then we are changing a little bit those boundary conditions; I say a little bit, but from the physiological point of view, it may change a lot the profile of the velocity at the interface. So from the mathematical point of view, it will help us to obtain energy estimates. But from the numerical point of view, from the physiological point of view, what is the meaning of such boundary conditions? It may change a lot the velocity profile, so you may be far from what you expect in reality. So with these conditions, you have energy estimates. And with Neumann boundary conditions, you have no energy estimates. So here are the energy estimates you obtain. So you obtain that u is bounded in the standard spaces for the Navier-Stokes equation. So u is in L infinity of L2, the gradient of u is in L2 of L2, and eta is in those spaces. So first, those are the standard energy spaces, but here you have a time dependency, so they are not so standard. So the question is how to define properly, correctly, these functional spaces.
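Collecting the fluid and structure contributions, the formal energy balance and the modified ("total pressure") Neumann condition mentioned above read, in a schematic form; E_s(eta) stands for the elastic energy of the beam and p bar for the prescribed boundary pressure, both notations being mine.
\[
\frac{d}{dt}\left(
\int_{\Omega_\eta(t)} \frac{|u|^2}{2}
\;+\; \frac{\hat\rho_s}{2}\int_0^1 |\partial_t\eta|^2
\;+\; E_s(\eta)
\right)
\;+\; 2\nu \int_{\Omega_\eta(t)} |D(u)|^2
\;=\; \text{work of the external forces},
\]
\[
2\nu\,D(u)\,n \;-\; \Bigl(p + \tfrac12 |u|^2\Bigr)\,n \;=\; -\,\bar p\, n
\quad \text{on } \Gamma_{in}\cup\Gamma_{out},
\]
i.e. the Neumann datum is imposed on the total pressure $p + \tfrac12|u|^2$ rather than on $p$ alone (whether the full stress or only the normal derivative of the velocity appears in the viscous part is a modeling choice, not something taken from the slides).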
And here, if I consider A equals 0, so wave equation instead of a beam equation, I will only have eta in H1 of 0, 1. So H1 of 0, 1, eta will be continuous, but eta will not be lip sheets if I consider only this line. So you will not have a lip sheets domain for the freed equation. So you may have trouble by considering that you have a wave equation coupled to the Navier-Stokes system, so a 2D Navier-Stokes system coupled to a 1D wave equation. You may have trouble to define properly the domain, the trace of U on the domain, because the domain is not lip sheets. The regularity of the domain depends on the regularity of the structural displacement. It depends on U because it depends on eta, and eta depends on U. So you may hope to have an extra regularity coming from the freed parts. But the freed, the unknown, are the velocity, not the displacements. And the displacement is the trace of the velocity, so you will lose some things, some regularity. So from the numerical point of view, you never make the space, the space discretization parameter goes to zero. So you don't really see this appearing, in fact. Because from the numerical point of view, you will have a lip sheets boundary. In fact, from the numerical point of view, you have seen the numerical simulation Alberto showed us yesterday, everything is quite smooth. The main problem for this problem, so there is a problem at the interface for the balance of energy. So I will speak about that later on. But you may also have instability because of the amount of kinetic energy. The lack of regularity of the structure displacements, I think we never saw it from the numerical point of view. So do you see that the thickness of the numerical equation is the linear factor, or is it inside the numerical equation? After what we can, in most of the case, so those are the bonds we obtain thanks to the equality of the balance of energy. But you may have some existence of strong solution if the data are smooth enough. So in this case, if you have strong solution, you will not see anything. So my guess is that when the structure displacement is not regular enough, and so that you are not able to prove that you have existence of weak solution because you don't have enough regularity of the structure displacement, you may be able to prove that existence of strong solution, so for regular data. And in fact, in the 3D, 3D case, or 2D, 2D question, when you have a 3D elastic structure coupled to a 3D field, then the regularity of the displacement at the interface is even worse than that. So you have to consider strong solution in order to give sense to every term in your problem. So you are not able to give sense to every term in your problem in the 3D, 3D case, for instance, in a weak way. You have to consider some regular solution. Each H1. Yeah, you are right. Yeah. If I try to work in the energy space, I have a lack of regularity. I may have a lack of regularity in the energy space. From the numerical point of view, you are discretizing the displacement. So you are leapshitz. So you have, in a way, you are regularizing your problem by discretizing it. You are not dealing with PDE, but only ODE's when you are considering the numerical part. Yes. In some cases, we are able, me or some, people are able to prove that there exists regular solution. But in some cases, we are not able to prove existence. I don't know. My guess is that we are all, I think that we shall be able to prove that there exists solutions. Or this is not the right move. 
So the weak formulation. So what is the weak formulation for this kind of problem? So take a free test function and multiply the Navier-Stokes equation by this free test function and integrate by parts. So what you obtain is this first two, three, so those three terms coming from the freed equation, you will have some boundary term once again. But those boundaries are coming from the integration by parts of the viscous part and the integration by parts of the gradient of the pressure. But those boundary terms will cancel each other if we choose a structure test function that matches the free test function at the interface. So the structure test function is B. And so here I choose free test functions that are divergent free. So you don't see the pressure anymore in the equation because I took free test functions that are divergent free. And if I take a free test function and structure set functions that match at the interface, you don't see the boundary terms in the weak formulation. So this is the weak formulation. And here what you see is that this is not a standard weak formulation. Because here in a standard weak formulation, you can choose a test function to be independent of time, right? Because after you applied your favorite Gallerkin method and stuff like that. But here if I choose to cancel the boundary term at the interface, then I have to require that the free test function and the structure test function match at the interface. And this relation, so I may choose B not depending on time. But phi will depend on time because here I have the displacement of the structure which depends on time. So the free test function depends on time and depends on the solution, which is not quite standard. Yes? So you may have some trouble here because of this non-standard formulation for the weak formulation. And once again, it comes from the geometrical nonlinearities of the problem. So what are the mathematical and numerical difficulties? So you have a fully coupled and unsteady nonlinear system, nonlinear because of the convection term, because of the geometrical nonlinearities, and maybe because of the structure behavior law. Here I took a linear behavior law, but in real life you should take some nonlinear behavior law. You may have a hyperbolic, parabolic coupling, so you may have a gap between the regularity of the free and the regularity of the structure. In particular, the regularity of the structure velocity is not, may not be guaranteed by the structure equation. So what you obtain are, so if you don't take this additional viscoster, what you obtain are some regularity of the displacement. For the velocity, you have a really little. The velocity is in infinity of L2 only. And so you may have some troubles to deal with this hyperbolic, parabolic coupling. You may have also to deal, at least from the mathematical point of view, to deal with the lack of regularity of the interface displacement, because if this displacement is not regular enough, you should look to a strong solution or to define a way to define properly all the terms appearing in your system. And you have also this uncompressibility constraint, which is rather a problem in this kind of couple system. And we will see why also. So first, I will try to review some of the difficulties that may appear from the mathematical point of view when considering the existence of weak solution or strong solution. So first I will consider a steady state problem. 
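For the record, the weak formulation sketched a moment ago can be written schematically as follows; the precise treatment of the time derivative of u in the moving domain is deliberately left loose, since that is exactly the delicate point raised here, and $a_s(\cdot,\cdot)$ is my notation for the elastic bilinear form of the beam.
\[
\int_{\Omega_\eta(t)} \partial_t u\cdot\phi
+ \int_{\Omega_\eta(t)} (u\cdot\nabla)u\cdot\phi
+ 2\nu\int_{\Omega_\eta(t)} D(u):D(\phi)
+ \hat\rho_s\int_0^1 \partial_{tt}\eta\,\hat b
+ a_s(\eta,\hat b)
\;=\; \text{work of the external forces},
\]
for every pair of test functions with $\operatorname{div}\phi = 0$ in $\Omega_\eta(t)$ and $\phi\bigl(t,x,1+\eta(t,x)\bigr) = \bigl(0,\hat b(x)\bigr)$; this constraint on $\phi$ is what makes the fluid test functions depend on time and on the solution.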
So in the steady state problem, you eliminate a lot of the difficulties of the problem. And the main difficulty that remains in the problem is the nonlinear geometrical part. Because so we will see. So it will answer to the question how to deal with this geometrical nonlinearity. So we will try to, I will try to show you how to prove existence of strong solution for this steady state Friedman interaction problem. Next I will present you some results on existence of weak or strong solution for the full and steady problem and show you the difficulty and the way one can overcome them. But I will not review all the theoretical results that exist, but just try to point out what are the difficulties and how we can solve them. And I will try to make the link between the numerical analysis. And first I will present you the ALE strategy that Alberto has shown us yesterday. But I will try to explain this strategy and this ALE strategy is linked to the motion of the free domain. So this is really linked to the first point, the geometrical nonlinearity here. So next I will speak about stability issues and in particular what is called the added mass effect. And this added mass effect comes from the incompressibility constraint. And I will present some numerical scheme, implicit one, explicit one, semi-implicit one, and discuss the stability and accuracy properties. And try to make the link between those schemes, the strategy to discretize the problem, and the strategy of proof. Because here you have seen that the problem is nonlinear. So the first idea is to perform fixed points. From the numerical point of view, you solve the free, you solve the structure, and you perform also fixed point or Newton method and stuff like that. So you have iterations. And so I will try to make the link between those two questions. So from the mathematical analysis, so here I review some of the, I think not all of them. So there are many, many results on existence of weak or strong solution for this kind of problem. So there is a huge literature on existence of strong solution for Navier-Stokes equation coupled to rigid bodies. So when you consider rigid bodies, the difficulties are quite the same except that you have ODE. So to describe the evolution, the motion of the rigid body, you don't have PDEs, you have ODEs. Because the motion of the rigid bodies are described by a finite number of degrees of freedom, translation and rotation. So in a way, this is more simpler than the coupling between PDE and PDE. But still, we still have some non-linearities. So there are a lot of paper concerning this question. So the Navier-Stokes equation coupled with a plate inflection or beam inflection, you have two results and I will present briefly the results of Julien Lecoeur for the Navier-Stokes equation, the 2D Navier-Stokes equation coupled to the 1D beam, a viscous 1D beam. So here to prove existence of strong solution, you have to consider some additional viscosity on the beam. Otherwise, the strategy of proof doesn't work. So concerning the Navier-Stokes equation coupled to a 3D non-linear elastic system, you have results in the steady state case and you have results in the unsteady state case in all of these. So this is the complete system with all the non-linearities considering a 3D-free coupled to a 3D structure. Those results are valid for small time and small data with a lot of compatibility conditions on the data and in particular on the initial data and initial pressure and stuff like that. 
So a lot of, I would say, unphysical compatibility conditions. And those papers, those two, are just awful to read. Those two are some simplification of this paper, but in special cases where, for instance, the interface is flat and you have periodic boundary conditions at the inlet and at the outlet. And concerning the existence of weak solutions, you have also a lot of results, mostly in the rigid body case. In all these results, you assume that the rigid bodies do not collide, except in this paper where they prove existence of weak solutions after contact. So they give a sense to the weak solution after contact. And after contact, the solids evolve and they stick together. So this is not a really physical solution, but this is a mathematical solution. So concerning elastic media, you have results with a finite number of eigenmodes. So you project the elastic equation on a finite number of eigenmodes. And so you have to deal with the Navier-Stokes equation coupled to a system of ODEs. So it simplifies a lot the analysis, and the analysis is really close to the rigid body analysis. And you have results for a plate in flexion, with additional viscosity, without additional viscosity. And recently you had some results for Koiter shells. And even, so the last one here, this is a submitted paper, I think it is for nonlinear Koiter shells. And those two, one of them is also for a non-Newtonian flow coupled to a Koiter shell. So, maybe I will stop here. Yes. So we should have questions. Great. Let's thank the speaker. Thank you very much.
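The added-mass effect and the partitioned fixed-point iterations announced in the outline above can be illustrated on a deliberately crude toy model. This is my own sketch, not a scheme taken from the lecture: a single structural degree of freedom of mass m_s, loaded by an external force f and by a fluid that reacts as a pure added mass m_a, so that the implicitly coupled problem is simply (m_s + m_a) a = f.

# Toy illustration of the added mass effect (not the speaker's scheme).
# The coupled problem (m_s + m_a) * a = f is solved by Dirichlet-Neumann-like
# fixed-point iterations in which the structure sees the fluid load computed
# from the previous iterate,
#     a_new = (f - m_a * a_old) / m_s,   followed by relaxation with parameter omega.
# The iteration converges only if |1 - omega * (1 + m_a / m_s)| < 1, so for a
# heavy fluid (m_a / m_s large) the plain iteration omega = 1 diverges and a
# small relaxation parameter is needed: the added mass effect in its simplest form.

def fixed_point_acceleration(m_s, m_a, f, omega=1.0, n_iter=50):
    a = 0.0
    history = []
    for _ in range(n_iter):
        a_new = (f - m_a * a) / m_s              # structure solve with lagged fluid load
        a = (1.0 - omega) * a + omega * a_new    # relaxation step
        history.append(a)
    return a, history

if __name__ == "__main__":
    f, m_s = 1.0, 1.0
    for m_a, omega in [(0.5, 1.0), (2.0, 1.0), (2.0, 0.3)]:
        a, hist = fixed_point_acceleration(m_s, m_a, f, omega)
        exact = f / (m_s + m_a)
        print(f"m_a={m_a}, omega={omega}: iterate={a:.4e}, exact={exact:.4f}, "
              f"last increment={abs(hist[-1] - hist[-2]):.2e}")

With m_a/m_s = 2 and omega = 1 the increments grow geometrically, whereas omega = 0.3, below the threshold 2/(1 + m_a/m_s), converges; in the genuinely coupled problem the role of m_a is played by the added-mass operator coming from the incompressibility constraint, which is what makes loosely coupled schemes delicate when the fluid and structure densities are comparable.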
Many physical phenomena deal with a fluid interacting with a moving rigid or deformable structure. These kinds of problems have a lot of important applications, for instance, in aeroelasticity, biomechanics, hydroelasticity, sedimentation, etc. From the analytical point of view as well as from the numerical point of view they have been studied extensively over the past years. We will mainly focus on viscous fluid interacting with an elastic structure. The purpose of the present lecture is to present an overview of some of the mathematical and numerical difficulties that may be encountered when dealing with fluid–structure interaction problems such as the geometrical nonlinearities or the added mass effect and how one can deal with these difficulties.
10.5446/57370 (DOI)
So, next I will try to present to you one simple steady state case where the difficulties lies mostly in the dependency of the free domain with respect to the displacement of the structure. I will consider an elastic structure and I will not neglect longitudinal displacement. So we will have two displacements, longitudinal one eta 1 and transverse one eta 2. And so here you have the steady state version of the free day equation. So I skip the nonlinear part for simplicity. And so the only nonlinear term in the coupling is the term coming from the geometrical part here. So the fluid equation are set in a non-domain depending on the displacement of the structure. And the displacement of the structure satisfies two equations. So for the longitudinal part you have Laplace equation and in the right hand side you have a term coming from the force applied by the field on the structure and you have the second equation for the transverse motion of the beam. So here the difference here is that you have a second order operator and here you have a fourth order operator. So you have more regularity on the, you will have more regularity on the transverse motion than the one you will have on the longitudinal motion of the beam. And we assume that the beam is, I didn't say that before, but the beam is attached at the extremity of the structure domain. So the coupling condition right now like that, you have, so if the notation are not, so you have the reference configuration omega at and you have the deformant configuration omega of eta. And omega of eta can be defined as the image, so this is the identity mapping, plus lifting of the displacement. So the displacement is defined only on the interface and you consider a linear continuous lifting of the displacement onto the domain omega at. And so you can define omega eta as the image through this mapping I will call phi of eta which is a deformation. And so omega eta is the image by phi of eta of the reference configuration omega at. Okay, and it depends on this linear mapping. Okay. And this linear mapping from those who are familiar with the numerical part of free structure interaction, this mapping shall be the ALE mapping between the reference configuration and the current one. Okay, so this is the analog of the ALE mapping. And it match the displacement at the interface only. Okay, and you still have the equality of the velocities, but since we consider a steady state case, the fluid velocity at the deformant interface is equal now to zero. So in a way we have some decoupling of the fluid and the structure because here you don't see the structure velocity appearing in this equality. And you always have the equality of the forces at the interface which is written through this weak formulation. Okay, any question on this system? So this is a simplified version of the full and steady case. And what we have to deal with is this dependency. So we work in a non-domain depending on the structure displacement, itself depending on the fluid and the pressure. And so this is a strongly coupled system, but it's slimmer than the full and steady one. So difficulties, geometrical nonlinearity, lack of regularity of the free structure interface. We may have some problem with that, and it will be even worse in the 2D-2D case. So if we consider 3D-3D case, if you consider 3D elasticity, for instance, the free structure interface may not be regular enough to define everything. And in this kind of problem, so here I will consider that u equals 0 all over the boundary. 
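In symbols, the geometric setup just described is the following; R denotes the fixed linear continuous lifting operator of the interface displacement (the letter R is my notation).
\[
\phi(\eta) \;=\; \mathrm{Id}_{\hat\Omega} \;+\; \mathcal{R}(\eta),
\qquad
\Omega_\eta \;=\; \phi(\eta)\bigl(\hat\Omega\bigr),
\]
with $\eta = (\eta_1,\eta_2)$ the longitudinal and transverse displacements of the beam; the steady coupling conditions are $u = 0$ on the deformed interface together with the weak equality of the interface forces, so that the only remaining nonlinearity of the coupling is this dependence of the fluid domain on $\eta$.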
So I will consider that the fluid is enclosed in the cavity. So u equals 0 on gamma in, gamma out, and gamma 0. So yes. And so we will have to deal with some corners there. So the ruin I made is not right, because we have assumed that the, we have Dairy-Claire boundary condition, Homogenius Dairy-Claire boundary conditions for the beam. In fact, you, through the deformation, you keep here the right angles. But nevertheless, we have to deal with corners in this kind of situation. And if we consider Neumann boundary conditions at the inlet, you have to deal with corners and with mixed Neumann-Dairy-Claire boundary conditions at corners, because you have Neumann boundary conditions there, and Dairy-Claire boundary conditions there, and Dairy-Claire boundary conditions there. So you may have some trouble to prove the regularity of the fluid and the pressure. And the regularity of the fluid and the pressure imply that you may have less regularity on the force applied by the, by the fluid on the structure. So you may have less regularity on the structure displacement. And so you may not be able to perform some fixed point CRM. So to simplify, U equals 0 on the boundary, which is not the free structure and surface. The first remark is that on, for this kind of problem, so it, it was also mentioned by Alberto yesterday, you have to define properly the pressure, because the pressure comes into the force by, by the fluid on the structure. So it's really an important quantity in order to, to, to prove that you have existence of solution. And if you look at the Stokes equation. No, no, no. Here it seems that the pressure is just defined up to a constant. Yes. And since we consider Dairy-Claire boundary conditions, we have no boundary conditions to fix this constant of pressure. So like that, the problem is not closed, because if the pressure is defined only up to a constant, you will, by changing this constant, you will change the force applied by the fluid on the structure and you will change the displacement of the structure. Yes. Yes. Yes. Yeah. So if, if you, if you, if you look at that, you have P or P plus C, which is, which, and you have P, P plus C, and so you, you will have, you, you will lose the unicity of the displacement with this system only. So you have to add something to fix this constant of pressure. It will not be the case if we have Neumann boundary conditions. And the, the constant of pressure is naturally fixed by the Neumann boundary conditions. But here, so here, from the physical point of view, the pressure is not defined up to a constant. So you are, we are missing something in this system to fix this constant of pressure. And what we are missing is that, in fact, during this deformation, so you are an enclosed fluid, you apply external forces, and during this, the fluid is, is incompressible. So you, during this deformation, you should have the, a preservation of volume of the fluid cavity, of the global volume of the fluid cavity. So what is missing is that the volume of omega eta should be equal to the volume of the reference configuration. So this is an additional constraint on the system. And this additional constraint will fix the constant of pressure. And this additional constraint may be written thanks to the, so I can write it like that. So I should have that the integral of the, of the reference configuration of the determinant of the gradient of the deformation. This is equivalent. And this can be written thanks to an integration by part like this. 
So eta 2, so where, so the normal component of, of the, the normal unit vector is equal to the deformation configuration is equal to this quantity. So this is equivalent to this condition. So you, you may write this condition in terms of only the, in terms of the displacement of, of the, of the structure. And this is a nonlinear, a nonlinear relation between eta 2 and eta 1. All right, if I had, I have only if I assume that eta 1 is equal to 0, then it's only, that says only that the average of eta 2 is equal to 0. So if the beam moves only in the transverse displacement, what, so this, this drawing is not true, okay, because here you don't preserve the, the volume of the, of the fluid cavity. And the constant sum pressure from the mathematical point of view is the Lagrange multiplier associated to this, to this nonlinear constraint. But in fact, this is equivalent in the insted case. So to the condition u dot n over gamma of eta of t equal 0. This, in the, this condition in the steady state version of the fact that the flux at the boundary of the fluid velocity is equal to 0, which is equivalent to, so in the insted case, this is true because u is divergence free by integrating over the full domain, the constraint divergence of u equals 0 and integrating by part, you obtain this relation. So this relation is the insted case is exactly, so by deriving with respect to time, this condition will recover this one. So with this additional volume preserving constraint, you fix the constant pressure, the constant, the, and so the physical pressure is not defined up to a constant. It's uniquely defined and it's really important from the numerical point of view because the pressure is the, the force that is driven the interaction between the free and the structure. So the simpler way to prove existence of solution, if I look at this problem, is, will be to consider that we have a fixed domain. So if we have a fixed domain, then the fluid and the structure are decoupled. So you have the Stokes equation with Derrick-Homogeneous Derrick-Leibondary condition all over the boundary. So you can solve the fluid equation because we are in the insted state case and so the fluid and the structure for a given domain are decoupled. So you solve the fluid equation, you recover the force applied by the fluid on the structure and you obtain a new displacement. So the first idea one could think about is to do a fixed point. I have a fixed displacement. I compute the velocity and the pressure and I recover a new displacement and I try to perform a fixed point on this, on this mapping. So we assume first that the fluid geometry is given. So from the numerical point of view, it's an explicit treatment of the geometry. Okay. And we solve the free problem for a fixed domain. We prove that this problem is well posed and maybe that we have some regularity of the velocity and the pressure. We solve the structure problem with a given force and we perform a fixed point theorem hoping that this fixed point theorem will converge. Yes? It's fixed from procedure which will converge. So when the geometry is fixed for this steady-state case problem, the fluid and the structure are completely decoupled. So we are just looking at the geometrical non-linearity. So by fixing the geometry, we fix this mapping. Okay. And so we fix eta and we build the mapping phi which maps the reference configuration onto the deformant configuration. And so as I said before, this mapping from the numerical point of view will represent the ALE mapping. 
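For the record, the volume-preservation constraint discussed at the beginning of this passage can be written as follows; the computation of the boundary form is mine, but it is consistent with the reduction to a zero-average condition when eta_1 = 0 mentioned above.
\[
|\Omega_\eta| = |\hat\Omega|
\quad\Longleftrightarrow\quad
\int_{\hat\Omega} \det\nabla\phi(\eta) = |\hat\Omega|
\quad\Longleftrightarrow\quad
\int_0^1 \eta_2(x)\,\bigl(1 + \partial_x\eta_1(x)\bigr)\,dx \;=\; 0 ,
\]
a nonlinear relation between $\eta_1$ and $\eta_2$ which reduces to $\int_0^1 \eta_2 = 0$ when $\eta_1 \equiv 0$; the constant part of the pressure is the Lagrange multiplier associated with this constraint, and in the fixed-point procedure described below the constraint is linearized around the previous iterate $\delta$, i.e. one imposes $\int_0^1 \eta_2\,(1+\partial_x\delta_1) = 0$.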
And we will rewrite the equation, the freed equation in a fixed domain by performing a change of unknowns and by considering a new velocity and a new pressure which leave in this reference configuration. By doing so. So here we are looking to, I didn't say that, here we are looking to strong solution. Why we are looking to strong solution? Eta 1 will be, if I look at the equation satisfied by eta 1 and that the energy estimates, the formal energy estimates satisfied by the structure displacement, eta 1 will be in H1 and eta 2 will be H2 0 of 0 1. So here H1 can be embedded in 0 alpha of 0 1 with alpha less than 1 alpha. Okay. So you will not have, with the energy bond, you will not have a Lipschitz domain. So we have to have some additional regularity of the structure displacement in order to prove existence of solutions. So we will consider strong solution. And also in order to perform the change of variables, because we need to perform the change of variables, we need to, the deformation to be in C1 of omega hat. And here the regularity of phi is linked to the regularity of eta and we don't have C1 for eta. So we will not have C1 for this mapping. So we have to consider stronger solutions. So in the reference configuration, the system writes as follow. So the Laplacian part is transformed. So this is an elliptic problem, a stokes-like problem. So it is set in a reference configuration, but you have a term here that depends on the displacement of the structure that appears. Yes. Those are nonlinear terms. These come from the mapping. So those are matrices depending on eta and they are defined like that. So g of eta is the cofactor of the gradient of the deformation and f of eta expressed like that. So to be able to define properly those quantities, we should have phi in C1, but we should also assume that the beam doesn't touch the bottom of the fluid cavity and that you don't have self-contact. So we will assume that the external forces are small to avoid that. Okay? In order to recover displacement, that is small enough in order to avoid self-contact of the beam or that the beam touches the bottom of the fluid cavity. So you have to study this kind of stokes-like problem. And here I wrote for the first time, I wrote the forces applied by the fluid on the structure in a strong way. Okay? So here you see that this is the normal component of the stress tensor of the fluid and you see here some terms coming from the geometry and from the change of variables. And we have this constraint here. So the constraint I wrote before, right like that in terms of the mapping g. Here the normal component of the deformant configuration is expressed thanks to the co-factor of the gradient of the deformation. So those are quite standard change of variables for elliptical equations. So you have to study a stokes-like problem. And as I said before, to be able to define everything, we shall have a C1 deformorphism. So we shall have that the displacement of the structure is in HS for S large enough. And in the energy space, S is not large enough in order to have this C1 embedding. So we need to obtain a structure displacement that is in H3 over 2 plus a little something. And to have that, we need to prove that the velocity and the pressure of the fluid lies in H2 times H1. So we have to have some additional regularity for this kind of problem. But here we are not in a standard framework because when considering this kind of problem, you will have the H2 times H1 regularity of the velocity and of the pressure. 
If the matrices here were in C^{1,1} or W^{2,infinity}, we would be fine, but we are not in those spaces. We are a little bit less regular. So we have to do something to prove that the velocity and the pressure are in fact in H2 times H1. Okay, so we will consider a displacement that is small enough to be sure to define a C1 diffeomorphism thanks to this interface displacement. So M will be chosen such that the gradient of the deformation is invertible in this space. So here the key point is that H1 plus epsilon is an algebra in 2D, because if you take v in H1 plus epsilon and g in H1 plus epsilon, the product is still in H1 plus epsilon. So this will be a key point of the proof. So like that, we have a deformation which is a C1 diffeomorphism, and the matrices will remain in a neighborhood of the identity matrix, because if we had exactly the identity matrix, then for an external force in L2, you would have the H2 times H1 regularity. So if we are close to this situation, we may still have the regularity that we need. So we fix a delta in this ball and we consider the fluid equation for this fixed delta. So v and q will depend on delta, but we can solve this independently of the structure equation. We recover the force applied by the fluid on the structure, and so we recover a new displacement of the structure, and we would like to perform a fixed point on this mapping we just built. So the first remark is that now, for a fixed delta, the system is linear. It will not be the case for the full unsteady problem, because of the Navier-Stokes nonlinearity for instance. The fluid and the structure are completely decoupled, and we treat the geometry in an explicit way, and the trouble is that the resulting displacement is not volume preserving. So we will have to correct this displacement in order to have a volume-preserving one, because the volume-preserving constraint is now linearized: it writes in terms of eta 2 and of the previous displacement delta 1, as the integral of eta 2 times 1 plus dx delta 1. So in this procedure we have linearized this nonlinear constraint. So the resulting displacement is not volume preserving, because to be volume preserving it should satisfy a nonlinear constraint, and we will consider the mapping delta gives eta of delta. So first we study the Stokes problem. So this is a Stokes-like problem with matrices that do not have the standard regularity needed in order to use the standard elliptic regularity results for the Stokes problem. So we assume that this matrix is in this space, which is an algebra, that the matrix is symmetric and elliptic, that B is also in this space, that B is invertible, and we will moreover assume that B writes as the cofactor of the gradient of phi. Why that? Because we will need an identity, which I will write down because it is really useful: the divergence of the cofactor of the gradient of a deformation is always equal to zero. So in fact the relation divergence of B transpose v equals zero is equivalent to B contracted with the gradient of v equals zero. Written the first way, this relation involves not only the gradient of the deformation but also, by taking the divergence of B, second order derivatives of the deformation. So it's too high. Written the other way, it involves only the gradient of the deformation, with no second order derivatives.
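The identity invoked here is the Piola identity, and the smallness assumption can be summarized as follows (a sketch; the letter M and the exact choice of norms follow the spirit of the slide rather than its letter):
\[
\operatorname{div}\bigl(\operatorname{cof}\nabla\phi\bigr) = 0
\qquad\text{(row-wise Piola identity)},
\quad\text{so that}\quad
\operatorname{div}\bigl(B^{T}v\bigr) = B : \nabla v
\ \ \text{when } B = \operatorname{cof}\nabla\phi ,
\]
and the displacement is taken in a ball
\[
B_M = \bigl\{\, \delta=(\delta_1,\delta_2)\ :\ \|\delta\|_{H^{3/2+\varepsilon}(0,1)} \le M \,\bigr\},
\]
with $M$ small enough so that $\nabla\phi(\delta)$ stays invertible in $H^{1+\varepsilon}(\hat\Omega)$ (an algebra in 2D) and $\phi(\delta)$ is a $C^1$ diffeomorphism.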
I will explain later on why you do not put zero on the right-hand side of the divergence constraint. So with f in L^2 and the right-hand side of the constraint in L^2, we can prove that there exists (v, q) in H^1_0 x L^2_0, and the existence of q comes from the inf-sup condition, which is satisfied thanks to the invertibility of B. Is everyone familiar with the inf-sup condition? Yes, no, maybe. So what are the steps to prove the existence of a weak solution for such a problem? First you write the variational formulation in a constrained space: the first step is to lift the non-homogeneous constraint and reduce to the case c = 0, so that the variational formulation is posed on divergence-constrained test functions w on omega hat. You apply the Lax-Milgram lemma and you prove the existence of the velocity v. Next you have to recover the pressure, because it does not appear in this constrained formulation. The inf-sup condition is a way of saying that if a linear form vanishes for every w in H^1_0(omega hat) satisfying the divergence constraint, then this linear form is the gradient of a pressure. So this is the step that enables you to obtain the existence of the pressure, and you need the invertibility of B in order to prove the inf-sup condition and thus to recover the pressure q. Moreover, if you have more regularity on the right-hand side of the divergence constraint, and if A and B are close to the identity matrix in this space, then you can prove that (v, q) is in H^2 x H^1. The argument is quite simple: you write the problem as a perturbation problem, minus the Laplacian of v plus the gradient of q equals f plus the divergence of (A minus identity) times the gradient of v plus (identity minus B) times the gradient of q, and the divergence of v equals the divergence of (identity minus B) transpose v, which, thanks to the Piola identity, is also equal to (identity minus B) contracted with the gradient of v. So you write the problem as a perturbation of the standard Stokes problem. If the right-hand sides are regular, then v and q are regular: if v is in H^2, its gradient is in H^1, the coefficient is in H^{1+epsilon}, and the product of H^{1+epsilon} with H^1 is still in H^1 in 2D, so you keep the right regularity in this process. And if the perturbations are small enough, then you can perform a Picard iteration. The idea is to write the problem as a perturbation of a known problem: here you have this term which is small and that term which is small. If you write the constraint in the first way you do not have the right regularity to apply standard results, but written in the second way the right-hand sides have the right regularity to apply the standard regularity results: here you have something in H^{1+epsilon}, there something in H^1, and when you multiply a function of H^1 by a function of H^{1+epsilon} in 2D, you are still in H^1, so the right-hand side of the divergence constraint is in H^1. So thanks to this perturbation argument you can prove that the velocity and the pressure are in H^2 x H^1.
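Written out, the perturbation form of the problem that this argument relies on reads, in my notation (a sketch, with I the identity matrix and the homogeneous Dirichlet condition of the lifted problem):

```latex
\begin{aligned}
-\Delta v+\nabla q &= f+\operatorname{div}\!\bigl((A-I)\nabla v\bigr)+(I-B)\nabla q
&&\text{in }\hat\Omega_F,\\
\operatorname{div}v &= (I-B):\nabla v &&\text{in }\hat\Omega_F,\qquad v=0\ \text{on }\partial\hat\Omega_F ,
\end{aligned}
```

so that, for v in H^2, q in H^1 and A - I, B - I small in H^{1+epsilon}, the momentum right-hand side stays in L^2 and the divergence data stays in H^1, and the standard Stokes regularity can be iterated.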
And to, to, to, to be sure that the matrices remain in the, a neighbor, a neighborhood of the identity in this space, you have to constrain the external forces to remain small enough. Okay? So, next, you have to consider the, the structure, the structure equation and to do so. So we have to, so we have a velocity, we have a pressure, but this pressure here is, is just defined up to a constant. And we will fix the constant of pressure by imposing to the new displacement to satisfy this now linear constraint. So lambda is really the Lagrange multiplier associated to this linear constraint. And since everything is linear, we can use a superposition principle and decompose, so, decompose the displacement of the structure as a displacement, as a displacement associated to this force. We just compute. And plus lambda and the displacement which is associated to this force. Okay? So we, we are fixing the level of pressure. And in fact, this eta zero there, this is a displacement that does not preserve the volume. So this is a displacement. So imagine you apply a constant pressure to your, to your beam. And so it, it will inflate like that. And so the integral of zero one of eta zero g delta eta will not be equal to zero in this case. And so like that you can compute the, the, the constant pressure to fix the average of the pressure. And so you can easily verify that eta zero, verify this by taking eta zero as a test function in the problem associated to, satisfied by eta zero. So next we can derive bond on v delta, q delta, eta delta, lambda delta with respect to the data and those bond will depend also on m. And m we have to choose it to be small enough in order to have a C1 dephomorphism and in order also to have the matrices A and B close to the identity matrix. And so f and g will be small enough. And we can, so the fact is that we start from, we start from a delta in h3 over 2 plus epsilon of zero one. And we have a tf which is the normal component of the rest rest tensor. It will lie in h one-alpha of zero one. So by solving the structure equation you will have delta belonging to h2 plus one-alpha of zero one times h4 plus one-half zero one. So here we have gained a lot of regularity. So there is a huge space to perform our fixed point theorem. If you consider 2D coupling with a 3D fluid, so a plate or shell or stuff like that, then we have less, we shall assume more regularity at the beginning and we will arrive to this regularity at the end which is enough in the 3D 2D case to perform the fixed point theorem. So this will work also in the 3D 2D case. But nevertheless, we are just below the regularity needed in order to prove existence of weak solution. So we have a huge margin for strong solution. But we cannot, I don't know how to prove existence of weak solution in this case because of the lack of regularity of the displacement of the structure. So we can also do this with other kind of boundary conditions. So as I said before, with other kind of boundary conditions you have to deal with Derry-Clayne Neumann boundary conditions. So a way to avoid that is to consider this, those are not Neumann boundary conditions, but to impose that velocity is normal at the inlet and outlet. And this condition, if P here is equal to 0, enable us to do some symmetry of the problem. And so you skip the corner difficulty by doing some symmetry. And so you can perform exactly the same steps. And you can also remove the assumption on A and B. 
So here what I said is that A and B have to be close to the identity matrix to have the H2, H1 regularity. But this can be removed. You can have the H2, H1 regularity for any A and B in the right spaces. Okay. So in most of the case we can extend those results for the Navier-Stokes equation by some bootstrap argument or stuff like that. So what for the 3D case? For the 3D case, if you have a 3D field coupled with a 2D plate, for instance, it will work in the same way. In fact, the main difficulty is to have enough regularity of the structural displacement and to have compactness in order to prove existence of the fixed point of the mapping songs to show the fixed point, for instance. If we consider Neumann-Bondari conditions at the net and lab, then I don't know. Because even for the Stokes problem, with Neumann-Bondari conditions, homogeneous Neumann-Bondari conditions and the at-lay. And directly Bondari conditions with a corner, a right corner, you don't have the H2, H1 regularity. You have less. So I don't know if it works. And the same kind of results are valid for the 3D Navier-Stokes coupled to the nonlinear elasticity, for instance, St. Ankerchoff media. And so we can prove existence of a strong solution in spaces like that. And the key point is that W1P with P greater than 3 is an algebra. So I said before the key point is that some A and B lives in an algebra. And this is also the key point here. Okay. Yes. So those equations can be obtained thanks to asymptotic analysis. If you consider the 2D elastic case and you make the thickness of the elastic part goes to zero. And so you recover two equations which are decoupled. So an eta1 verify a second order equation and eta2 verify a fourth order equation. In the case of Shell model, in general, those two equations are not decoupled. But mostly you have a fourth order term for the normal displacement to the mid-surface. So my guess is that for Shell equation, it will work also. Because if you have the same kind of regularity, it will work. And the key point here is the fact that you can treat explicitly the geometrical part. There is no problem to perform the fixed point theorem provided you have enough regularity and stuff like that. You can treat explicitly this geometrical nonlinearity. So the unsteady case. So in the unsteady case, so we will keep in mind that from the numerical point of view we can, we may be able to treat explicitly the geometry. OK. Now, if we consider the unsteady case, then what you have is, so u at the interface t, I will write, so d is the displacement of the structure. You have something like that. And you have the equality. So you have the relation tf, which depends on tf v bar on, so it was 0, 1. Gamma it of t, sigma f up and tf. So you have those two boundary conditions. And so the idea to treat the existence of, to prove the existence of solution of the full unsteady case is the first idea is to perform a fixed point theorem. So to decouple the fluid and the structure, to consider the fluid plus the reclab and boundary conditions coming from the structure, and to recover the force applied by the fluid on the structure, and so to recover a new displacement. So to decouple, to perform the same, the same scheme of proof, decoupling the fluid and the structure and using what we know already on the Navier-Stokes equation and use, putting this in the structure equation and using what we know already on the structure equation and perform a fixed point. 
So the first decoupling strategy is to consider Navier-Stokes plus Dirichlet boundary conditions with data that are given: you give yourself a deformation and the velocity of this deformation, so delta is given, and you solve the Navier-Stokes equations. You recover the fluid force, and the next step is the structure equation with the forces applied by the fluid on the structure as a right-hand side. So this is a Dirichlet-Neumann decoupling: you solve the fluid equations with Dirichlet boundary conditions and the structure with Neumann boundary conditions, in fact. The following example, which is a really simple one, will show you that this may not work. It is an example that has been considered in a paper of Causin, Gerbeau and Nobile. It is a really simple example in which you do not have the geometrical nonlinearity: you consider a given fixed domain and, in this fixed domain, an inviscid fluid. So you simplify the fluid equations: you remove the viscous part and the convective part, and you write the fluid equations in a fixed domain. The fluid is coupled to the structure through the equality of the normal component of the velocity at the interface, because here the fluid is inviscid, and the displacement is only a transverse displacement. The structure solves its equation with, on the right-hand side, the pressure coming from the fluid. So this is a simplified version of the full unsteady case in which we have removed all the nonlinearities and the viscosity of the fluid. Now, by taking the divergence of the fluid equation, we obtain a Laplace equation for the pressure, and by taking the normal component of this equation, we recover a Neumann boundary condition for the pressure. This Neumann boundary condition involves the acceleration of the fluid; but the normal component of the fluid velocity is equal, at the interface, to the velocity of the structure, so what we have there is the acceleration of the structure multiplied by the density of the fluid, and we have the other, homogeneous, condition on the rest of the boundary of the domain. So we introduce what is called the added-mass operator, which is a Neumann-to-Dirichlet operator: g is given, you solve this Neumann problem for the pressure, you recover q, and its trace on the interface is the force that you apply to the solid equation. By writing the system like that, thanks to this added-mass operator, which takes Neumann boundary data and gives you the Dirichlet trace on the boundary, the pressure p can be expressed through this operator applied to the acceleration of the structure. Is it clear for everybody that the problem can be written like this? Remember what we have: the fluid pressure satisfies this Neumann problem, so p can be expressed thanks to this Steklov-Poincare operator at the interface applied to the acceleration of the structure. And if I substitute p like that into the structure equation, the problem reduces to an equation on the interface only: the fluid part is condensed in this added-mass operator. Why "added mass"? Because here you have the mass of the structure, and there you have an added-mass effect coming from the fluid.
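To fix ideas, here is a hedged sketch of this operator and of the condensed interface equation, in my notation; the signs depend on orientation conventions that I have not kept track of, rho_f and rho_s are the fluid and structure densities, and L_s stands for the structure operator.

```latex
% Added-mass (Neumann-to-Dirichlet) operator on the interface \Sigma
% (with the compatibility \int_\Sigma g = 0 and q normalized):
M_A g:=q|_{\Sigma},\qquad
-\Delta q=0\ \text{in }\Omega_F,\quad
\partial_n q=g\ \text{on }\Sigma,\quad
\partial_n q=0\ \text{on }\partial\Omega_F\setminus\Sigma .
% Condensed interface equation (up to sign conventions):
p|_{\Sigma}=-\rho_f M_A(\partial_{tt}\eta)
\quad\Longrightarrow\quad
\bigl(\rho_s+\rho_f M_A\bigr)\partial_{tt}\eta+L_s\,\eta=0
\ \ \text{on }\Sigma .
```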
And this added-mass effect comes from the incompressibility constraint of the fluid: without the incompressibility constraint you do not have this way of writing the problem. So, from the theoretical point of view, suppose you try to perform a fixed point. You have a delta which is given. You solve the fluid part; it gives you p, and p will be minus rho_f M_A applied to the acceleration associated with this given displacement. Then you solve the structure part: rho_s times the acceleration plus the elastic operator equals p, with p equal to minus rho_f M_A applied to the acceleration of delta. Next you have to prove that the mapping sending delta to d has a fixed point, and here you have treated the added mass in an explicit way. By doing so — assume that d is, say, in C^2, so that the acceleration is in C^0 — what you recover is only a displacement in C^2: you do not gain any additional regularity in time. If you start with some regularity in time, you keep exactly that regularity, so you will not gain regularity in time by performing this fixed point. In fact, we cannot prove existence of solutions with such a strategy of proof. To see that, you can assume that the operator of the structure is just multiplication by a constant a, strictly positive, and you project the equation on the eigenmodes of the added-mass operator. By doing so you obtain a simpler coupled system of ODEs that you can solve, and you can easily see that if the quantity rho_f over rho_s is too big — bigger than some constant that depends on the eigenvalues of the added-mass operator — that is, if the fluid added mass is too large compared to the structure inertia, if the structure is not a heavy structure, then you will not be able to perform the fixed point. This model appeared in the numerical paper I mentioned, and I will speak about it later on; it was introduced to understand this added-mass effect. At first, in aerodynamics, you do not have incompressible flow, you deal with compressible flow, and people were using explicit schemes: they were solving the fluid equations with Dirichlet boundary conditions, computing the fluid force, then computing the structure displacement. It works like that, mostly with some prediction-correction to have accurate schemes, and there was no stability issue: the schemes were stable. But when people tried to use those schemes in the context of blood flow, the simulations turned out to be unstable, and nobody understood why. The reason is that there we have to deal with incompressible flow and with physiological parameters for which the density of the structure is close to the density of the fluid. Yes? Yes, maybe one writes it like that. But if you do not decouple the fluid and the structure, if you do not try to perform a fixed point, then you can prove that there exists a solution directly, because you have a compact perturbation of the identity here, and so you have existence of solutions. My point was: if I try to prove existence of solutions by decoupling the fluid and the structure in this way, then it will not work unless the density of the structure is large enough, and this is the reason why.
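As a purely illustrative toy (my own sketch, not the computation of the paper): project the condensed interface equation on a single eigenmode of the added-mass operator, with eigenvalue mu, discretize the acceleration by finite differences and lag the fluid load by one sub-iteration, which is exactly the explicit Dirichlet-Neumann strategy just described. The amplification factor of the sub-iterations is roughly rho_f*mu/rho_s, independent of the time step, so refining dt does not help.

```python
# Toy scalar model of explicit Dirichlet-Neumann sub-iterations on one eigenmode of the
# added-mass operator (illustrative sketch; mu, a, dt are made-up parameters).
import numpy as np

def dn_subiterations(rho_s, rho_f, mu, a=1.0, dt=1e-3, n_iter=30):
    """Return |d^(k)| over the sub-iterations of a single time step.

    Structure solve with the fluid load lagged by one sub-iteration:
      rho_s*(d_new - 2*d_n + d_nm1)/dt**2 + a*d_new
          = -rho_f*mu*(d_prev - 2*d_n + d_nm1)/dt**2
    """
    d_n, d_nm1 = 1.0, 1.0      # the two previous displacement values (arbitrary)
    d_prev = 0.0               # cold initial guess for d^{n+1}
    history = []
    for _ in range(n_iter):
        rhs = rho_s * (2.0 * d_n - d_nm1) / dt**2 \
              - rho_f * mu * (d_prev - 2.0 * d_n + d_nm1) / dt**2
        d_new = rhs / (rho_s / dt**2 + a)
        history.append(abs(d_new))
        d_prev = d_new
    return np.array(history)

# Amplification factor ~ rho_f*mu/(rho_s + a*dt**2): it does not improve when dt -> 0.
for rho_s in (10.0, 0.5):      # heavy structure vs. light, blood-flow-like structure
    h = dn_subiterations(rho_s=rho_s, rho_f=1.0, mu=1.0)
    print(f"rho_s = {rho_s:4.1f}:  |d| after {len(h)} sub-iterations = {h[-1]:.3e}")
```

With rho_s large the sub-iterations contract; with rho_s comparable to rho_f they blow up, whatever the time step.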
And so you have to find a way to treat this added-mass effect in an implicit way, or to prove existence of solutions without decoupling the fluid and the structure, keeping the whole system coupled: not to consider the Navier-Stokes equations plus something and the structure plus something and perform a fixed point, but to keep the system coupled, keeping in mind that we can decouple the geometry. So, the unsteady case; we go back to the full one. Those are the equations you have already seen. Once again, with Dirichlet boundary conditions it seems that the pressure is defined only up to a constant, but it is not the case. Here we do not have to add an additional constraint; in the steady-state case I had to add it, but here the constraint is in the system. As I said before, this constraint is simply the fact that, on the interface, the flux of the velocity is equal to zero, due to the divergence-free constraint. And this is equivalent to the following: here I consider only a transverse motion, so I have removed the second equation and the longitudinal displacement. In this case the constraint writes as the statement that the volume of the cavity is preserved during the evolution in time of the whole system. Is it clear for everybody that this is exactly that? Because u_1 is zero, u_2 at the interface equals the partial derivative of eta with respect to time, and the normal to the deformed interface is proportional to (minus d_x eta, 1). So here you use this, together with the fact that eta_1 equals zero and that u_2 evaluated at (x, 1 plus eta) equals the time derivative of eta. So the pressure — and this is physical — has to be uniquely defined. Moreover, in this type of problem you have, formally, the following. When I wrote the variational formulation of the problem, I wrote the dissipation term with the symmetric gradient. But in fact, for u satisfying u(x, 1 plus eta) equal to (0, d_t eta) and v satisfying v(x, 1 plus eta) equal to (0, b), this term — maybe with a factor two — is exactly equal to the integral of the gradient of u contracted with the gradient of v. So in this really special case, because we have only a transverse motion of the beam, instead of the symmetric gradient you can take the full, non-symmetric gradient. This simplifies the analysis, because then you do not have to use a Korn inequality: you have, in fact, a Korn equality, obtained here by taking u equal to v. But this is only because of the transverse motion of the beam, and it simplifies the analysis a lot. Here also, because of the transverse motion of the beam, you have a linear constraint: this volume-preserving constraint is linear, whereas in the general case it is a nonlinear constraint involving all the components of the displacement. So here I would like to prove existence of weak solutions. If I look at the energy estimate obtained by performing, at least formally, what I did before, and if I assume that what I will do should remain valid even in the case a equals zero — so take for instance a equals zero, and I consider only a wave equation — then the energy space in which the displacement eta lives writes like that, and it is embedded in such a space; this is an easy calculation. And once again you see that you do not have a Lipschitz boundary.
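Spelling out the flux computation used a moment ago (a sketch; it only uses the no-slip condition on the rigid part of the boundary and the purely transverse motion):

```latex
% u = (0, \partial_t\eta) on the moving interface, whose (non-unit) outward normal is
% proportional to (-\partial_x\eta,\, 1):
0=\int_{\Omega_F(t)}\operatorname{div}u\,dx
 =\int_{\Sigma(t)}u\cdot n\,ds
 =\int_0^1\partial_t\eta(t,x)\,dx
 =\frac{d}{dt}\int_0^1\eta(t,x)\,dx ,
\qquad\text{so}\qquad
\int_0^1\eta(t,x)\,dx=\int_0^1\eta(0,x)\,dx .
```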
So I just told you I will consider weak solution. And just before I told you that with non-leap sheets boundary, I'm not able to prove existence of weak solution. But here in this very special case, the fact that the boundary is only C0 will be enough. And in fact, this is once again mostly due to the fact that we have only a transverse motion. So here the domain is a subgraph. Yes, the free domain. And so an eta, so the boundary is in C0 or C0 alpha with alpha less than 1 over 2. And even if the eta is not C1, I will be able to define weak solution. But to do so, I shall not go back to the reference domain. But because as soon as I go back to the reference domain, I require a huge regularity of the displacements. So what I shall do there is to keep the setting Eulerian for the free, then Lagrangian for the structure. So to keep the first setting, not to write the freed equation in the reference domain. And so as I said in the first lecture, it is not clear how to define those spaces because of the time dependency here in the H1 space. But a simple way to do so is to consider the non-silendrical space time domain. So you have the freed domain that it will evolve in time. And you consider this non-silendrical space time domain. And you define the spaces just like that. So what we have to keep is this Eulerian setting for the freed in this case. Nevertheless, the displacement is continuous. Otherwise, we will have some trouble to define properly the freed domain. So here, the freed domain is simply, and y is 0, 1 plus eta of t of x. So here you see that the freed domain is correctly defined as soon as this has a meaning. So eta is in C0 of t and x. So it's OK. And as soon as 1 plus eta is not equal to 0. So as soon as the beam doesn't touch the bottom of the freed cavity. So one of the first points is to give a sense to this quantity because you don't have a leap sheets domain, so the freed velocity is in H1 of the deformed domain. But how to define properly its trace on the deformed boundary, knowing that this deformed boundary is not leap sheets, is just C0. OK? But here once again, due to the only transverse motion of the beam, in fact, we can define quite easily what I will call the Lagrangian trace of the freed velocity. By just saying that vx1 plus eta is equal to the integral from 0 to 1 plus eta of the, so this is not z, this is y. The partial derivative with respect to y of v of x of u. OK? So knowing that v is in H1, this one is in L2, and so this one will be also in L2. So we can define, we cannot define the Eulerian trace on the boundary, but we can define the Eulerian trace on the boundary, because of the only transverse motion. So next we have to build some lifting of the structure's test function. So you take a structure's test function, 0b, which is defined on the reference configuration of the structure part. And what you have to define is a lifting of this function, 0b, onto the freed domain, which is divergence free. But once again, you don't have this, the boundary, which is Lipschitz. So what you have is a domain with a non-Lipschitz boundary, and you have a function, a vector, which writes 0b and which is defined on this boundary, and you would like to lift this function. And to do so, the really simple way to lift it in a constant way. So here this is only a function of x. So you lift it in constant way till a distance alpha from the bottom, and this function is divergence free, because here the first component is 0, and this one, that depends only on x. 
So this is a really simple way to lift structure test function onto the freed domain. And then here you solve the Stokes problem, but you don't have any more of the problem of the regularity of the boundary since you have a square or... And to extend functions that are defined in the freed domain, you can also extend it by considering also the constant 0b function, and then you will have h1 extension, which are divergence free. So for that, you can prove that there exists a weak solution, and the key point, so the weak formulation is just awful. So this is the weak formulation of the problem. Here you can replace this by only the gradient of u, gradient of phi, and phi and b will satisfy those two constraints. So you will take phi to be divergence free, and phi equal b, so phi map in the reference configuration equal b. So once again, the freed test function will depend on time and will depend on the displacement, which is rather unusual for a weak formulation and will be also unusual from the numerical point of view, because you will have to have some, I don't know, finite element basis that follow the motion of the domain in a way. Here the freed test function have to follow the freed domain motion. So this is the weak formulation. So one word we have integrated by parts here, we will take to time the acceleration of the freed. And so what we will consider is an approximate, sequences of approximated problems where we start by regularize the domain. So the domain is not regular, so we start by regularizing it. Hoping that making this regularization parameter goes to zero, everything will work well. And it will be the case. So by doing so, so remember to make the time derivative with respect to the time derivative of the kinetic energy of the freed, we needed the motion of the domain to be equal to the velocity of the domain to be equal to the velocity that appears in the convection term here. Otherwise you lose this property. So if you regularize the domain, you have also to regularize the convection term. And you have to regularize it in a way that it match perfectly the domain motion. And we don't want to, if you don't want to do so, if you want to regularize independently the domain motion and the convection velocity, a way to write the convection term here is to write it in a screw symmetric way. So you keep one half and you integrate one half by part. So you integrate this term by part. You obtain this term plus a boundary term. And in this boundary term, you should have you here, this velocity here. But you replace this velocity by the velocity of the domain. So if you take phi equal to u in this formulation, this term will cancel with this term. And here you will have exactly the right boundary term to make the time derivative of the kinetic energy appear. So by doing so, you can regularize the domain motion and the fluid velocity independently to ensure that the energy estimates are still satisfied. So next, remember, we can decouple the geometry. So we linearize this part. We assume that the fluid, the domain, the evolution of the domain of the fluid is given. We assume that the velocity, the convection velocity is also given. And so now the coupling of the test function at the interface, still it depends on time, but it's a linear coupling because this is given now. So you have a linear coupling. And so here you have a linear problem of one linear. 
And with this problem, you can prove that there exists a weak solution, so I don't know, a Galerkin method, choosing a right Galerkin basis. And now what we have to do. So we use a Galerkin method for the linearized approximated problems. Next for this, so for epsilon fix, we perform a fixed point. So we have v delta. We regularize them. We obtain u and eta, and we perform a fixed point for this regularized problem. And we obtain that there exists a solution for now. This program that has been regularized, but which is nonlinear because here you have the nonlinearity coming from the convention term. Here you have the nonlinearity, the geometrical nonlinearity. And here you have the equality of the velocity at the interface. And now the question is how to make epsilon goes to zero. So we need some compactness results. I will not detail that. So we have that, thanks to the energy estimates, we have that the free velocity. So the trouble is how to pass in the limit in those nonlinear terms and in the domain motion. So you need some compactness in L2. So you know that gradient of u is bounded in L2 of L2. And you know that u is bounded in L2 of H1. So you have some compactness in space, but you don't have compactness in time. So you need some compactness in time in order to pass to the limit in this term in order to prove that u epsilon goes to some u strongly in L2 of L2. So this can be done. The key point is, so this gives you the needed compactness in time, in fact. So in a way, you are trying to estimate u epsilon of t minus u epsilon of t minus H with respect to H. So have some more regularity in time of the velocity u. And the key point is that you obtain this additional estimate on u together with the estimate on the structure velocity because everything is coupled. And to have, in fact, to have compactness on u, you need to have compactness of the structure velocity. Otherwise it will fail. So OK, maybe I will skip that. So you can like that prove that, so it was for a dumped beam equation. And so doing so, you can make epsilon go to zero. So you can pass to the limit in the domain and you can also choose test function and dependent of epsilon. Because one of the key points is that the test function depends on the solution. So it will depend on the regularization parameter. So in order to pass in the limit in the weak formulation, you need to pass also in the limit in the test function. And one way to do it is to choose test functions that are independent of epsilon. And in this part, once again, it's really important to have a transverse displacement of the beam. So the question is how we can pass to the limit when the additional viscosity goes to zero? The answer is yes. And it's because of the dissipation of the fluid. So the fluid dissipate energy, so the fluid velocity is in H1 in space. And the structure velocity of the structure is the trace of the fluid velocity. So in a way, the fluid velocity, the dissipation of the fluid dissipate the energy of the beam also. So we can pass to the limit by using the fact that the fluid dissipate also the energy of the beam equation. And this dissipation coming from the fluid enables to control the high space frequency of the wave equation, in fact. So you have some damping of the wave equation coming from the fluid. So I will stop here and we'll... No. Yes.
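Three of the objects used in this proof, written out as sketches in my own notation; they follow the verbal descriptions above, and in particular the extension below the strip y < alpha and the exact form of the boundary term are my reconstruction.

```latex
% (i) "Lagrangian trace" of v in H^1(\Omega_\eta(t)) on the C^0 graph y=1+\eta(t,x),
%     using that v vanishes on the bottom y=0:
\gamma_\eta v\,(x):=v\bigl(x,\,1+\eta(t,x)\bigr)
  =\int_0^{1+\eta(t,x)}\partial_y v(x,y)\,dy\ \in L^2(0,1).

% (ii) Divergence-free lifting of a structure test function b=(0,b_2): constant above a
%      strip, R b\,(x,y)=(0,b_2(x)) for y\ge\alpha, so that
\operatorname{div}(Rb)=\partial_x 0+\partial_y b_2(x)=0,
%      and below y=\alpha one closes it with a Stokes-type extension on the rectangle
%      (0,1)\times(0,\alpha), which is possible since \int_0^1 b_2\,dx=0.

% (iii) Skew-symmetric convection form, with the regularized domain velocity w in the
%       boundary term:
c(u;v,\varphi)=\tfrac12\int_{\Omega_F(t)}(u\cdot\nabla v)\cdot\varphi\,dx
 -\tfrac12\int_{\Omega_F(t)}(u\cdot\nabla\varphi)\cdot v\,dx
 +\tfrac12\int_{\Sigma(t)}(w\cdot n)\,(v\cdot\varphi)\,ds .
```

Taking phi = v = u in (iii), the two volume integrals cancel and the boundary term is exactly what is needed to recover the time derivative of the kinetic energy, even when the convection velocity and the domain motion are regularized independently.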
Many physical phenomena deal with a fluid interacting with a moving rigid or deformable structure. These kinds of problems have a lot of important applications, for instance, in aeroelasticity, biomechanics, hydroelasticity, sedimentation, etc. From the analytical point of view as well as from the numerical point of view they have been studied extensively over the past years. We will mainly focus on viscous fluid interacting with an elastic structure. The purpose of the present lecture is to present an overview of some of the mathematical and numerical difficulties that may be encountered when dealing with fluid–structure interaction problems such as the geometrical nonlinearities or the added mass effect and how one can deal with these difficulties.
10.5446/57371 (DOI)
First, since last time it was, I think, really quick at the end, I will just do some summary of what we did and so try to give you the main point. So I have presented you this system, so this is a coupled fluid structure system which is nonlinear. And we have identified one difficulties linked to the geometrical nonlinearity. And this geometrical nonlinearity is due to the Eulerian Lagrangian coupling because the fluid equations are written in Eulerian coordinates whereas the structure equation are written in Lagrangian coordinates. So we have a mismatch and we have to follow the particles in order to, for instance, to write the equality of the velocity at the interface. So and this equality is a nonlinear relation. We have also the nonlinearity coming from the domain where the fluid equation are set. So for this geometrical nonlinearities, what we have seen on an example is that it can be, so it can be treated explicitly, at least from the mathematical point of view. If you want to prove existence of solution for such problem, you can decoupled the nonlinear, a geometrical nonlinearity from the rest of the problem and it will work. Provided that everything is regular enough and the displacement is regular enough and you can do, if you want to do the change of variables that you can do the change of variable, if you cannot do the change of variables, then you stay in the Eulerian coordinate for the fluid but you have to define everything properly, okay? So it can be treated explicitly and we saw an example of that. So for the proof of existence of, here it's strong solution but it could be weak solution, strong solution of steady state case. So and the other point is that we have to deal to a coupling and this coupling is from two forms, so we have the equality of the velocity at the interface and the action-reaction principle which will write at the equality of the normal component of the stress tensor at the interface if you deal with 3D-3D coupling or 2D-2D coupling but it will, if you deal with 2D-1D coupling for instance, the forces applied by the fluid on the structure will appear in the right-hand side of the structure equation. The question is how to manage, so we know some properties, mathematical properties on Stokes equation with Derrick-Leibbender recondition or with Neumann-Bundler recondition and we know also some mathematical properties on the structure. So we know quite well each part and the thing is how to deal with these two parts when they are coupled and how to exploit the fluid property and the structure property in order to prove existence of weak or strong solution. So what we saw is that by, so one cannot decouple the fluid and the structure by solving for instance, so one cannot decouple the system in any way. So for instance by solving fluid plus Derrick-Leibbender reconditions, then the structure plus I will say Neumann-Bundler recondition or with a given force at the interface. So and this I gave you an example, a linear one where everything where linear, no geometrical nonlinearities, no convection term and stuff like that. Everything is linear. So even in the linear case we may have trouble to deal with those two coupling conditions at the interface. So on a linear case we saw what we call the added mass effect. And this added mass effect is linked to the uncompressibility of the fluid. Okay this is really linked to this uncompressibility of the fluid. Here the fluid was non-viscous without the convection term. So what remains is only the pressure force. 
And we use strongly that the divergence of u equals zero to write the added-mass operator. So one way to remove this difficulty may be to add some artificial compressibility to the system. This may be an idea to build approximate solutions: add some compressibility, decouple the fluid and the structure, prove that these approximate solutions exist, and then make the artificial-compressibility parameter go to zero in order to recover the full coupled problem. And from the numerical point of view we will see that, by changing the system in this way, we change the added-mass operator and hence its spectrum, and so we gain something in order to prove existence of solutions or to design efficient schemes. Considering the full problem, what we did next was to say: I would like to prove existence of weak solutions, and I saw that I cannot decouple the fluid and the structure like that. So, on an example, I showed you that by linearizing the geometry and the convection term, but keeping the whole system coupled, I can manage to prove existence of weak solutions. I keep the whole system: the boundary conditions at the interface are treated implicitly, and you keep the energy balance at the interface. There is no additional energy coming from a decoupling, because when you decouple the system you in fact create energy at the interface. So I showed you last time that we can prove existence of weak solutions for such a system. The main difficulty in this kind of system is to pass to the limit in the nonlinear terms, that is, to obtain compactness, as for instance Didier showed us this morning; in particular we have to pass to the limit in the convective term. As Didier mentioned this morning, thanks to the energy estimates you have some compactness in space, because the gradient of u is bounded in L^2 in time and space, so u is bounded in L^2 in time with values in H^1 in space. You have some compactness in space, and you also want some compactness in time, some bound, in fact, on d_t u. This morning Didier said this is given by the equation; but here, since you have a coupled system, you have to work a little bit, because it is not given by the equation, and when considering the Navier-Stokes equations in a moving, time-dependent domain you cannot apply the standard Aubin-Lions lemma: you cannot satisfy the assumptions of this lemma. This is mostly because you have the time-dependent domain together with the incompressibility of the fluid — both of them, so once again it is in a way linked to this added mass. You cannot apply the Aubin-Lions lemma directly for Navier-Stokes in a time-dependent domain; it is not feasible. But there are ways to prove compactness, such as the lemma I showed you last time, which was close to the more complicated one we saw this morning. We can manage to prove some regularity in time of the velocity u, and we do it on u and on the velocity of the structure together, because the system is coupled. So this was a way to prove existence of weak solutions. From the numerical point of view, we will see that there are ways to decouple the fluid and the structure and remain stable.
So, by taking those schemes, one may also be able to prove existence of weak solutions: taking ideas coming from numerics, we may be able to prove existence of weak solutions. Keeping the whole system coupled is not the only solution; you can decouple the system, but you have to be careful: the fixed point might not work, and so you have to build the approximate problem really carefully. And this is linked to the physical properties of the system, because it is linked to the divergence-free constraint. Now I would like to show you one result about strong solutions for the previous problem — the same one. In this case the proof relies on the fact that an additional viscosity term has been added to the beam equation, so we have a parabolic-parabolic coupling: there is no gap between a hyperbolic regularity and a parabolic regularity. The first idea to prove existence of strong solutions is: I know that, under certain assumptions, the Stokes system has a strong solution; I can also prove, under other assumptions, that the beam equation has a strong solution; how can I use these together? On this problem there are mainly two or three results. One is due to Beirao da Veiga, who proved existence of strong solutions locally in time and for small data — so you have to have small data and existence only for small times. And more recently Julien Lequeurre proved existence of strong solutions locally in time, dropping the assumption of small data, which is more satisfactory. He decouples the fluid and the structure, but he cannot decouple them by considering the Navier-Stokes equations plus Dirichlet boundary conditions and then the structure with a given force. So I would like to explain how one can decouple the fluid and the structure while treating the added-mass effect in an implicit way. The first step of the proof is this: since he considers strong solutions, he can rewrite the fluid equations in Lagrangian coordinates, that is, in the fixed reference domain. He will have a lot of nonlinear terms coming from the geometry, the same as I showed you in the steady-state case; but if eta, u and p are smooth enough, then all these terms are easily defined. And he writes this system as a perturbation of the Stokes problem, as I did to prove existence of strong solutions in the steady-state case. So you have a Stokes problem, and in the right-hand side you have terms in (identity minus A), (identity minus B) and so on; A and B depend on the displacement of the structure, so these are perturbative terms. The same for the divergence constraint. And for the structure you have p plus perturbative terms as well. He cannot decouple by considering this system plus the equality of the velocities, and then the structure plus the external force applied by the fluid on the structure. The idea is instead to study the linearized system: you remove all the nonlinear terms. By writing the fluid equations in the reference configuration, the relation between the velocities at the interface is now linear: you do not have the geometry here, you do not have the geometry there; all the nonlinear terms are in those perturbation terms. And the idea is to study this linear system and then to prove that the nonlinear terms are small for small times.
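Schematically, the linear system obtained in this way has the following shape; this is only my guess at the structure of what is on the slides, with F, G, H collecting all the geometric and convective nonlinearities, alpha and beta generic positive beam coefficients (beta being the additional viscosity), and T_f(v,q) the linearized fluid force.

```latex
\begin{aligned}
\partial_t v-\operatorname{div}\sigma(v,q)&=F, &
\operatorname{div}v&=\operatorname{div}G &&\text{in }\hat\Omega_F,\\
\rho_s\,\partial_{tt}\eta-\beta\,\partial_{txx}\eta+\alpha\,\partial_{xxxx}\eta
 &=T_f(v,q)+H, &
v&=(0,\partial_t\eta) &&\text{on }\hat\Sigma ,
\end{aligned}
```

so the interface condition is now linear, and all the difficulty sits in the coupling between the Stokes part and the beam part.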
The same idea was used in the steady-state case, when I said that A has to be close to the identity matrix; and for small times you will have A close to the identity matrix, because of the continuity in time of all the quantities. So if you succeed in studying the linear case — and even in the linear case you cannot decouple it in the naive way — then the nonlinear terms will be small for small times, and you will have existence of strong solutions locally in time. The first thing to do is to lift the divergence constraint, because otherwise, for instance when taking u as a test function, if the divergence of u is not equal to zero you keep the pressure in the equation, and you do not want the pressure there. So you have to lift this non-homogeneous divergence; it can be done because this term has zero average over the domain. Next, if I treat the pressure in an explicit way, then I will have this added-mass effect; so I have to manage to treat at least one part of the pressure implicitly and the other part explicitly. In fact, you write the fluid velocity as the projection of u onto a space I will now make precise. The space on which we take the L^2 projection is H, the space of phi in L^2 with divergence of phi equal to zero and phi dot n equal to zero on the boundary, and P is the L^2 projector onto H: this is the standard Leray projector for the Stokes system. P u is what I will call u_e, for explicit, and the remainder is u_s, for implicit. This remaining part is necessarily a gradient, because it is orthogonal to every divergence-free function phi with phi dot n equal to zero: so u_s is the gradient of some theta, and theta solves an elliptic Neumann problem. In fact u_s is divergence free as well — u is divergence free and u_e is divergence free — and u_s dot n equals the normal component of the structure velocity at the interface. So, in a way, u_s is a lifting of the normal component of the velocity at the interface. Next we define a pressure decomposition accordingly, and this pressure part is really — take for instance f equal to zero on the right-hand side — exactly what enabled us to define the added-mass operator. So p_s will be treated in an implicit way, because the elliptic problem that defines it is the problem that defines the added-mass operator. Next, u_e and p_e solve the projection of the Stokes system onto H: you have the projection of the right-hand side, u_e is divergence free, its normal component is zero, and since the tangential component of u is equal to zero at the interface — because I assume here that I have only a transverse displacement — the tangential component of u_e corresponds to minus the tangential component of u_s. So u is equal to u_e plus u_s, p is equal to p_e plus p_s: we decompose the velocity and the pressure like that. And in the right-hand side, remember, we had p, with p equal to p_e plus p_s. The part p_s I can express thanks to the added-mass operator, and I put it in an implicit way into the equation of the structure: I modify the mass of the structure. But p_e I keep in the right-hand side, and I will do a fixed point: p_e is given, I solve the structure equation.
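In formulas, the splitting just described reads as follows (a sketch in my notation; hats denote the reference configuration, Sigma-hat the reference interface, and signs are up to orientation conventions):

```latex
\begin{gathered}
H=\{\varphi\in L^2(\hat\Omega_F)^2:\ \operatorname{div}\varphi=0,\ \varphi\cdot n=0\ \text{on}\ \partial\hat\Omega_F\},
\qquad u=\underbrace{Pu}_{u_e}+\underbrace{(I-P)u}_{u_s=\nabla\theta},\\
\Delta\theta=0\ \text{in}\ \hat\Omega_F,\qquad
\partial_n\theta=\partial_t d\cdot n\ \text{on}\ \hat\Sigma,\qquad
\partial_n\theta=0\ \text{on}\ \partial\hat\Omega_F\setminus\hat\Sigma .
\end{gathered}
```

The pressure splits accordingly as p = p_e + p_s: p_s solves the Neumann problem that defines the added-mass operator and is moved to the left-hand side of the structure equation, where it adds mass, while p_e is kept on the right-hand side and iterated on: p_e given, one solves the structure equation.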
It gives me eta and then I solve the fluid equation and I recover some p e. Okay? So then I can decouple the fluid and the structure because I have treated the added mass in an implicit way. So I split the effort in two parts. The parts coming from the added mass effect and the part, the viscous part I will say, okay? And by doing so, I can study this couple problem using standard results, regularity results for the stock system in particular because the main part here is the stock system, not the structure equation. And moreover, we have an additional viscous term that makes the coupling parabolic, parabolic. So one of the difficulty of the system is skipped, in fact. Okay? So, but even in a, with a parabolic, parabolic coupling, we have to, so the, the, the fact to, that we add some viscosity to the structure will not arrange us a lot when they couplings of freedom in the structure. Okay? This is not because of the parabolic, parabolic coupling that we have, parabolic, hyperbolic coupling that we have trouble here. This is because of the divergence, free constraint and the added mass effect. So you study this, it sounds to a fixed point, so you decoupled the freedom in the structure. And then, so, maybe, and then you prove that the linear system has got strong solution and that the right-hand side are small for small time. So you can perform a, once again, a p-k-r fixed point theorem. Okay? So you will have existence and uniqueness for small time, in this case. And this could be an idea to design some numerical scheme, because we have split the, the, the, the problem, one part is treated explicitly, the other part is treated implicitly, but the part that is treated implicitly, in this part we have to solve a Laplacian of p. So this problem is rather fast to solve. You have fast solver to solve this, this kind of problem. So what are the conclusion of the, of this part? So one can prove existence are weak or strong solutions. The way you prove it can be, can give some insight of how to design a numerical scheme. So the stability problem you may have. The question is, okay, we have proven some existence of weak and strong solution for, for this problem, but for the time being, when considering the unsteady state case, so far the results, the results concern only vertical displacement of the plate. So how to deal with longitudinal displacement, it's not clear. For strong solution, the, the existing results concern only additional visc, the, the beam equation plus additional viscosity. Nevertheless, in the 3D, 3D case, there are some results where you don't have to consider viscoelastic structure. So what is really the, the, the role of this, this causity term? Is it only because of the proof once again? Or is it really needed to prove existence of weak or strong solution? Because with weak solution, we can prove that there exists solution without this additional term. We have considered on the other boundary condition, although then the interface, there are clear boundary conditions or periodic boundary conditions. So what if we have more realistic boundary conditions such as Neumann boundary conditions? Because if I think about blood flow, I cut my arteries and so I can put given velocity at the inlet, but at the outlet in general, you have Neumann boundary condition. So what's, what, what can I prove in this case? And if I cannot prove anything in this case, what does it mean from the numerical point of view? 
Will I be able to, to, to solve numerically this system for which I don't know that there exists a solution? I have some stability troubles, for instance, and I will have from the numerical point of view some troubles. Stability troubles. It's quite known that for Navier-Stokes equation with Neumann boundary conditions, you have stability troubles and you have to have some stabilization term. So once again, the fact that I, I don't know how to prove existence of stronger weak solution in this case is, will be seen in the, in the, in the numerics. Okay? You will have some stability troubles and you will have to stabilize the system. So take into account the, the, the longitudinal displacement. So there are many, many things to be done. And in particular, every results assume that, so they are locally in time or they assume that the, the beam doesn't touch the bottom of the fluid cavity. The question is, does the model allow contact or or not? And can we define solution after contact? Okay. So I will not develop this point, but okay. And from, so I will, I will now speak about numerical issues. So what I would like to underline, that's a geometrical nonlinearity seems to, I can treat them in an explicit way from the mathematical point of view. And we will see that we can treat them in an explicit way as well in the numerics. But I have to be really, really careful when dealing with the coupling condition and the way I decouple the problem in order to, to, from the numerical point of view. And if you don't do, if you don't pay attention, you will have some energy appearing at the interface, porous energy, and then you may have some instability. Okay. So the coupling condition are to be treated implicitly or we have to find other strategies. Okay. And in fact, here, when I decouple the system, in a way, this is a semi implicit scheme because I trip one part explicitly and the other parts implicitly. So we have a semi implicit scheme. It's not totally explicit. So we will see that we can design semi explicit scheme, but they are still costly. So the question is how to completely decouple the Fridt and the structure in order to have, to solve once per time step, the Fridt and the structure. Is there some questions or comments before I move on? No, everything. Everyone is sleeping. But I can't. So how can we discretize in time a Fridt and structure interaction problem? So the assumption is that as in the mathematical studies where one wants to use the already known results for the Fridt and for the structure, here, this is quite the same spirit. We assume, I will assume here, that we have a Fridt solver. Okay. So that works and that have all the better numerical scheme accurate and stuff like that. So I have a Fridt solver. I have a structure solver. And those problems are completely different. You have a mixed parabolic problem. You have a niprobolyc nonlinear problem. And so they require, each of them require development. The Fridt solver required a proper development and the structure solver required a proper development. So I assume that I have two solvers and the question is how to efficiently couple them in order to have a stable scheme. So the first question will be the stability of the scheme because you saw that you may have some problem with explicit scheme. And so I don't know, for instance, fixed point procedure or Newton iteration may fail to converge in some case. So the first question will be stability. Can we design cheap, explicit, stable scheme? 
And then the next question is: once we have those cheap, stable schemes, are they accurate? Because you can stabilize whatever you want by adding dissipation and so on; you can kill whatever is bothering you. But then the issue is: I have this beautiful numerical simulation, but does it represent reality? Maybe there is too much damping in my system and I lose everything, even if everything is stable. Moreover, the fluid equations are set in a moving domain, unknown and depending on time, so we have to find a way to follow this domain. There are many strategies to take this mesh movement into account — if I consider, for instance, a finite element discretization — and what I will present is the ALE formulation of the fluid equations. ALE means Arbitrary Lagrangian-Eulerian formulation. What does it mean? You have the Eulerian formulation, the Lagrangian formulation, and here we are in between the two: neither Eulerian nor Lagrangian, but a mix of the two formulations. The goal is to design efficient, stable and accurate numerical schemes. So in the equations you have d_t u plus u dot grad u minus nu times the Laplacian of u plus grad p equal to zero, set now in Omega_F(t). The question is how to discretize in time the time derivative of the fluid velocity. Take delta t, the time step, strictly positive, and my favourite scheme, the Euler scheme: I would like to discretize the time derivative by the difference quotient of u at two successive times. But where does the point x live — in Omega_F(t_n), in Omega_F(t_{n+1}), in some configuration in the middle? The trouble is that u at time t_{n+1} is defined on the domain at time t_{n+1}, and u at time t_n is defined on the domain at time t_n. So if the domain moves, a point that is in the domain at one time step may not be in the domain at the next one. How can I define properly this difference? The way to do it is to follow the point. One way is to follow the point with the fluid velocity, using the flow associated to the fluid velocity. But you also want to discretize your system in space — take a finite element discretization — and if you take a fluid velocity that moves the points around a lot, then the mesh will just be a mess after a few time steps, and you will have to remesh. You do not want to do that; but you do want to follow the interface. So at the boundary you follow the interface, and inside the domain you can do whatever you want with the mesh: what you have to keep track of is the interface. So your point is there; I define a displacement associated to the interface — and this displacement is quite naturally defined, because it is the displacement of the structure — and inside the domain I define a mapping that does not have to match the fluid velocity. This is the analogue of the mapping I called phi_eta, which was equal to the identity plus a lifting of the displacement. Here, in fact, I choose the lifting; for instance, from the numerical point of view, I can choose to solve an elliptic problem in the reference configuration of the fluid domain. Like that I will be able to track every point of the mesh, and so, if I have a node x there, to define u(t_n) at x-bar, where x-bar is the image of x through this mapping, this ALE mapping.
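For later reference, the correction that this bookkeeping introduces can be written, at the continuous level, as follows (a sketch; w denotes the arbitrary mesh/domain velocity and x-hat the reference coordinate):

```latex
% ALE time derivative and ALE form of the momentum equation:
\partial_t u\big|_{\hat x}=\partial_t u+w\cdot\nabla u
\quad\Longrightarrow\quad
\partial_t u\big|_{\hat x}+\bigl((u-w)\cdot\nabla\bigr)u-\nu\Delta u+\nabla p=0,
\qquad \operatorname{div}u=0\ \ \text{in }\Omega_F(t),
```

so the discrete quotient (u^{n+1}(x) - u^n(x-bar))/Delta t, with x-bar the position of the same mesh node at the previous time, approximates the ALE derivative, and the convection is performed with the relative velocity u - w.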
OK, but if I do so, I am not approximating this quantity. Yes? No? Why? I have to remove the convection coming from the mesh velocity, OK? Because this is not true anymore, OK? I have a convection of the particle to take into account in the equation. Yes? So and I have a continuous formulation of this A formulation of the freed equation. So I define, so I define the mapping that maps the reference configuration into the configuration item t. And I define the velocity associated to this flow, OK? And I rewrite the Navier-Stokes equation, thanks to this mapping. And I make the velocity, the convection velocity, linked to the mesh motion appear in the, in the equation. And so now what I call here, this will be approximated by this quantity, really, OK? Here this is just a change of unknown in, you just have to rewrite the equation if you take the definition of du over dt taken in x hat is equal to the particle, the total derivative but associated to the flow phi. Then you rewrite the Navier-Stokes equation and you have that. I have done nothing there. This is just rewriting the Navier-Stokes equations. OK, but this is the ALE formulation and like that I can approximate this quantity which has a meaning because I follow the node of the mesh. Is it clear or no? Yes? So once again the coupling condition, so eta becomes d, d is the displacement of the structure. And here I have written the action-reaction principle but in a strong way. In the first part I define it in a weak way but I can define it in a strong way, thanks to this mapping, this ALE mapping. So sigma f is the free stress tensor and I map it in the reference configuration, thanks to the mapping phi of t and so it makes appearing the deformation gradient and Jacobian matrix and stuff like that. OK? OK. So now I would like to discretize in time the full-couple system. So here this is the continuous system that I have written in a, I have written variational formulation but I, in a way I decouple the freedom and the structure because, so you have your df. df is the displacement of the mesh. OK? df is an extension of the displacement of the interface but an arbitrary one, not linked to the freed velocity. You define the velocity associated to this displacement. OK? And you have that domain of the freed is the image of the reference domain through this mapping. The freed problem from the variational formulation, the variational formulation of the freed problem writes like that. Why? So here the space of test function vf is the space, maybe I made a mistake there. So I think that the vf is the space here. This is h10 of omega at f. So this is not vf that belongs to vf but the transported vf onto the reference configuration that belongs to vf. So you have to compose with the ELE mapping to, and you have that. So this is for all vf, I call it df, so this is vf in vf. OK. So compared to the strong formulation here, so what I do, I multiply by a test function vf like that. OK. And next, what I see here is that I have invert the time derivative, so the time derivative here is outside the integral. OK. And so by a simple calculation, you see that you have this, so you will have some additional terms, and this additional terms is the divergence of w there. OK. And here the main point is that d over dt of vf, so to write the weak formulation like that, we use that. OK, because v here composed with vf is in a space that doesn't depend on time, so the time derivative of this quantity is equal to 0. 
So in fact, what we have is that vf will solve this transport equation with, it's convicted with the velocity of the mesh. OK. And so by using this, you can obtain this weak formulation. This is a formulation which is written in a conservative way. OK. And this is the one I will use for the finite element discretization. Why? Because I will consider a finite element discretization of this space, which is a reference space, and I will map this finite element space onto the finite element space defined at each time step. OK. So this is the formulation. So I multiplied here the Frid equation by test functions that is 0 all over the boundary of the Frid. So I don't see the structure there, because this is 0. OK. Next, the variational formulation of the structure takes this form. So this is a mechanical energy. And you have a residual tear that represents the force applied by the Frid on the structure. And from the numerical point of view, I will write it through its variational form. So here. So this Vs is the test function of the structure, and L of Vs is the lifting of the test function of the structure into the Frid. So this is an admissible test function for the Frid equation. And thus this residual is in fact equal to, so I will write for instance, you will have some terms like that. So this is U L of Vs plus blah, blah, blah, blah, blah, and all the term coming from the variational formulation of the Frid. OK. So instead of writing Tf in a strong way, I write it in, thanks to its variational formulation. So you will have also DT, U, L of Vs, and stuff like that. OK. Coming from the Frid equations. Last time I showed you a variational formulation where everything was coupled. But if I consider that the Frid, I decompose the Frid test function in test functions that are equal to zero on the interface and test functions plus test functions that are lifting of structure's test function. Then I will recover those two equations. Or by adding those two equations, I will recover the variational formulation I showed you before. OK. So now time discretization. So I have, in fact, I have one unknown, one supplementary unknown, which is the mesh movement here. OK. And so first, I may choose to extend the displacement at time n, at time n plus 1, at an intermediate time, or some extrapolation. And I don't know. OK. So I defined my, we will see how, my deformation of the mesh. The velocity of the mesh is defined thanks to a backward Nolas scheme. And next, I have that u on the boundary is equal to the deformation of the velocity of the boundary, because the velocity of the boundary corresponds to the next tension of the displacement of the structure on the boundary. And next, I may choose this velocity at time n plus 1 to match the mesh velocity at time n plus 1. And here I used only, so here are the two first terms. The first integral is on omega f n plus 1, and the second integral is on omega n. OK. And I didn't put vf n plus 1 vf n. This is, here this is not in these, those two integrals, this is not exactly the same function vf. This is vf transported, you have some, some mapping. OK. Is it clear for everybody? Yes. So, I choose a backward layer implicit scheme. OK. That's for the fluid. And for the structure, I choose a leapfrog scheme. This is a real bad scheme for structure equations, because it has a lot of dissipation. Usually we use new mark scheme. OK. And the property of new mark scheme is that it conserves the energy. 
You have conservation of the kinetic energy plus the mechanical energy of the structure like that. OK. With this scheme, you don't have conservation and you have dissipation. But nevertheless, and some of the, of the numerical, terrestrial numerical studies are done with that and works only with this one, so, because it has dissipation in it. So in practice, we don't use this, this scheme. So here, this is mostly, so, the choice of d star. OK. If I choose d star equal to dn plus 1, then the scheme is fully implicit. Geometry, coupling and stuff like that. OK. If I choose d star to be equal to dn, then the geometry is treated explicitly. So how to choose d star? No, d star equal to dn plus 1 at the interface. The coupling is fully implicit, geometry and boundary condition. Whereas, if I choose some d star depending only on the previous time step, so first choice dn and other choice and extrapolation, the first order, second order extrapolation of the displacement, then the coupling is explicit. OK. But here, this is, if I put, maybe, yeah. So this is a fully implicit scheme. So this is highly nonlinear. So to solve this, you will have to do some fixed point, Newton, Meta and stuff like that. So, OK. So to design a solver dedicated to what we call a monolithic solver, dedicated to the, that solves the fluid and the structure together. So you will have a huge matrix with all the term coming from the fluid, all the term coming from the structure, and they are rather different problems. So you will have to design some algebraic methods in order to solve this huge linear problem. So and that's not the spirit I told you before. I would like to use a already code, a fluid code and the structure codes that exist already. OK. So this is a highly nonlinear couple problem, but it is stable. It's naturally stable because we keep all the, everything is implicit. So we will have energy balance at the, at the interface. And so we can prove that we have energy conservation of the whole system. So here you have the kinetic energy item s plus one minus the kinetic energy item s plus the kinetic energy of the structure at time s plus one minus the kinetic energy of the structure item s plus the, so the difference between the mechanical energy at time x plus one of the structure, mechanical energy item n of the structure and the dissipation. And so the, the energy item n plus one is lower than the energy at time n. OK. And in fact, to have this equality, we have to assume that we have what is called the discrete geometric conservation law. In fact, you have a new known, a new known in your, in your, in your system and if you discretize properly in time your system just by moving the mesh, you may lose some masses for instance. OK. So you, you, you have to discretize really properly this whole couple system. So you have three unknowns, the fluid velocity, four, freedom pressure velocity, displacement of the structure and the, the, the mesh displacement. So if you assume that you have this, in fact, I will say it's nearly a mass conservation of, of in the mesh. OK. Then you can prove that. Maybe I can do it. So you have an unconditional stable scheme in the energy norm. Is that what I'm doing? Maybe I will do it. So yes. So the idea is once again to take the fluid velocity as a test function and the structure velocity as a test function. Freed velocity for the three-day part and structure, structure velocity for the structure part. OK. 
So the, the, the, in the, in the variational formulation you have, so something like that. OK. The first term, one over delta t, this term plus, plus, plus. Yes. And what I would like to, so I take vf equal un plus one. Yeah. As, as in a, in the continuous case. So I have one over delta t and plus one square minus, no, this is un there, un plus one. In the case I don't have this time dependency of the domain, this is really easy because I apply Cauchy-Schwarz inequality and then Jung inequality and I recover one over two of the kinetic energy at time n plus one minus one over two of the, of the, and I compare it to one over two of the, to the kinetic energy of the, of the velocity at time n. But here I have some dependency with respect to the time of, of the domain. So what I have is that. This is un, un plus one is greater than omega n, qn plus one square minus. So this term is OK. Is it? So this term is OK. And this one, so this is un plus one transported in omega n, thanks to the ILA mapping. So this is an integral of, of a un plus, omega n. And here you have an integral of omega n plus one. So you have this mapping and thanks to the geometric conservation law there, I have that, this term will be equal to minus omega n plus one, un plus one square plus. So here I applied this geometric conservation law to q equal un plus one square. Yes? So here I have the, the right term, one minus one over two. So the kinetic energy of the, of the fluid at time n plus one. The kinetic energy of the fluid at time n plus a reminder, another term. But this other term will cancel with the terms here where you have divergence omega n plus one appearing and the convection term that appears here. OK? So this term will be exactly equal. So in fact, this one is, so if I take, I take vf equals zero, the convection term will give zero because you have the equality of the velocity of un plus one and omega n plus one at the interface. So when integrating by part system, it will cancel. The other one is exactly minus this one. OK? So, but we have to do something, we have to assume something and the, we cannot do any advancing in time scheme to take into account the mesh movement motion. Otherwise, we will produce some sparse energy and we may have some trouble. So the, the, the thing is, it's even more, it's really a problem when you consider compressible flow because then you, you have to be really, really, really precise on the, even more precise on the mass conservation. So the, the idea for me is that if you don't move the mesh correctly, you will have some additional mass and coming in and out of the, of the system. But you, you, you may have stability. OK? So the stability is not lost, but it's a stability that makes, in fact, that the, that makes divergence of omega n in W1 and infinity appear. Which measure the change of volume of, of your mesh? OK? And since Wn plus 1 is linked to the regularity of the structure and stuff like that, you may not have estimate on, on, on this. No, no. No, because the, the, in, in the, in the compressible case, you, you use, for instance, explicit scheme rather precise one. But the, the, the geometric conservation law is then really, really an issue. And this is for compressible solver that this geometric conservation law has been introduced. In particular, you can see the, the, the work of Charbel Farat. He did a lot of stuff on the, first they called it the geometric conservation law, and then he called it the discrete geometric conservation law. 
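The geometric conservation law invoked in this energy argument can be stated, at the continuous level, as the transport identity for the mesh motion; the discrete version below is one common way of writing it (my formulation — the precise discrete form depends on the time scheme and quadrature actually used).

\[
\frac{d}{dt}\int_{K(t)}dx\;=\;\int_{K(t)}\nabla\cdot w\,dx
\quad\text{for every cell }K(t)\text{ moved with the mesh velocity }w,
\]
\[
\text{(DGCL)}\qquad |K^{n+1}|-|K^{n}|\;=\;\Delta t\int_{K^{*}}\nabla\cdot w_h^{*}\,dx,
\]

where the configuration K* and discrete mesh velocity w_h^{*} on the right-hand side must be the same ones used in the discretized fluid equations, so that no artificial mass or energy is created by the mesh motion alone.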
So because, in fact, the, what you have is that if you take a cell in the finite volume, so, and you derived it, you will have one, something like that. And if you, if you want to really, really precise, you, you, you have to discretize, in fact, this equation also. Like, so you, you, you, you have to, to discretize in a proper way this equation. So it, it, in a way, this is the, the geometric conservation law, the continuous version, but you have to discretize it in order to be, to be consistent. Yes. Consistence with the discretization of the, of the free part. So this is an implicit co-splin scheme that is unconditionally stable. Yes. No, no big surprise, but it will be really costly. And so first, can we do an explicit treatment of the, of the domain motion? So instead of putting dn plus one, we put dn, but we have then some modification to make in the scheme. And so, un plus one will, will not be equal to wn plus one, but to the structure velocity, because now there is a mismatch between the velocity of the mesh at time n plus one and the velocity of the structure at time n plus one. Okay. One is explicit, the other is implicit. So you change the, the boundary condition for the free, and you also change here the convection term, because in order to have this term equal to zero when I take vf equal to un plus one, I have, I have to have un and double un plus one to have the same trace on the interface. And the trace of un plus one is equal to the trace of un. So the velocity at the previous time set, not un plus one. But so next, so the coupling condition are implicit, whereas the geometry is explicit, and you can perform exactly the same analysis, stability analysis, provided that the geometry conservation law is, is satisfying and you obtain also a stable scheme. Okay. So what are the implementation issues? So to, to obtain um, um, um, um, uh, implicit coupling, the first way to perform it is to have a monolithic approach. So to solve the whole system as a big system. Um, there is then no stability issue because the, the energy at the interface is, is conserved, but the resulting system is huge and it requires the development of a new server. And you, so, okay. You don't have stability issue, but you have all the trouble. Um, the other way, the partitioned approach where you use independent solver for the freedom of structure. So you use the state of the art for the freedom, the state of the art for the structure, but you may have some troubles to couple them. And the troubles come from the coupling at the interface and not, in fact, not the nonlinearity of the system in a way. Um, even if nonlinearity system are more difficult to solve than in our ones, but so, um, and we may have to develop some preconditioners and stuff like that and to accelerate the convergence and okay. But here we focused only on this coupling condition at the interface and the energy balance of the interface. And in the partitioned approach, we can either have, um, a strong coupling. So you can solve the Fridt, then the structure and submit, um, yes, submit a rate at each time step. So you have, you are at a given time step, you solve once the Fridt, once the structure, and then you iterate to achieve the equality of the velocities and the equality of the normal component of the stresses, the action reaction principle at each time step. But, and you may also have really weak coupling where you perform the Fridt equation at once time steps, the structure equation, and then move to the other time step. 
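As an illustration of the partitioned strong coupling with sub-iterations that was just described, here is a minimal Python sketch of a fixed-point (Dirichlet–Neumann) iteration between two black-box solvers. The function names, the fixed relaxation parameter and the convergence test are placeholders of mine, not pieces of an actual FSI code.

    import numpy as np

    def strongly_coupled_step(fluid_solve, structure_solve, d_guess,
                              omega=0.5, tol=1e-8, max_iter=50):
        """One time step of a partitioned FSI scheme with sub-iterations.

        fluid_solve(d)     : interface load exerted by the fluid when the
                             interface motion d is imposed (Dirichlet data).
        structure_solve(f) : interface displacement of the structure under
                             the interface load f (Neumann data).
        d_guess            : initial guess, e.g. the displacement at t_n.
        omega              : fixed relaxation parameter (Aitken-type dynamic
                             relaxation is a common improvement).
        """
        d = np.asarray(d_guess, dtype=float)
        for k in range(max_iter):
            load = fluid_solve(d)            # fluid step with imposed interface motion
            d_new = structure_solve(load)    # structure step with imposed fluid load
            residual = np.linalg.norm(d_new - d)
            d = (1.0 - omega) * d + omega * d_new   # relaxed fixed-point update
            if residual < tol * max(1.0, np.linalg.norm(d)):
                return d, k + 1              # interface conditions matched
        return d, max_iter                   # not converged within max_iter

The loosely coupled (weak) variant mentioned just above would call fluid_solve and structure_solve only once per time step, without the sub-iteration loop, which is cheaper but gives up the exact energy balance at the interface.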
So this is a really weak coupling and then you will lose the energy balance at the interface. But in, in both cases, you do some, you, you, you have the explicit and then in, in the strong coupling, you have to do some fixed point iteration and stuff like that. So you may, even in the strong coupling, in this sub iteration, have some trouble to converge due to, for instance, the added mass effect. Okay? The, the same, the same question will be, be still there. Okay. So, we will focus on the partitioned approach. We have a Fridt, we have a structure, and we would like to know how to couple them and to exchange the right information at the right time and stuff like that in order to have stable and so first question is stable and then accurate. Okay. So, concerning the strong implicit coupling, so strong implicit coupling means that you have the equality of the velocity i time n plus one and the equality of the normal component of the stress tensor. And so the terms coming from the, the boundary term coming from the coupling is there. This is this integral term and it will be equal to zero. Is it clear for everybody that this is this integral term that we would like to, to control at the interface where, when we integrate by parts, the Fridt, the viscous part of the, the gradient of the Fridt, we recover sigma f dot n times u n plus one and the structural part we have also the same, the same kind of term appearing because of the action reaction principle. So here, using the strong coupling, then I have that this energy at the interface is equal to zero and so the scheme, if the scheme satisfies those, these two relations, then I will have a stable scheme. So it can be achieved thanks to fixed point R M, new turn methods and stuff like that, but they can be really, really expensive. And so the explicit coupling, so the first, the first simpler explicit coupling is to solve the Navier-Stokes equation at time n plus one for a given velocity coming from the structure, the velocity of the structure at time n, recover Fridt force at time n plus one and then solve the structure equation at time n plus one with this given force. Okay, so Fridt, the Fridt equations plus the Riclet-Bondari condition coming from the previous time step and the structure equation plus Neumann-Bondari condition or external force coming from the just solve Fridt equations. So this is explicit and so this is cheap, but you lose the, you completely lose the energy balance because u n plus one is not equal to the structure velocity at time n plus one. So you have some spurious energy at the interface and you may lose stability property of the implicit scheme. So this is efficient for some kind of applications such as IOLST when you study the, the, the window over the, the wing of an airplane and stuff like that with some extrapolation and so to, to have accurate scheme and stuff like that, correction, prediction. But the idea may work, but here it's because you don't have uncompressible flow and here we are dealing with uncompressible flow. And so why, why, why, why is this scheme may fail? Because of, have you been following the added mass effect? Only the organizers follow. So it may be unconditionally unstable. That means that even if we make the time step and the mesh size go to zero, we will not recover stability. In the case of when the density of the structure is too close from the density of the, the free. So if you, if you have a, a, a light structure, then you may have this problem using explicit scheme. 
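To make the added-mass argument tangible, here is a deliberately crude toy model (mine, not the model analysed in the lecture): the fluid load on the structure is reduced to a pure added-mass reaction −m_a·(acceleration), evaluated with the previous acceleration (explicit coupling) or the current one (implicit coupling).

    import numpy as np

    def toy_added_mass(m_s, m_a, n_steps=10, explicit=True):
        """Acceleration history in a 0D added-mass toy model.

        Structure: m_s * a_new = f, with fluid load f = -m_a * a.
        Explicit coupling evaluates the load with the old acceleration,
        giving a_new = -(m_a/m_s) * a_old, which diverges whenever
        m_a > m_s, for *any* time step (the time step does not even appear).
        Implicit coupling gives (m_s + m_a) * a_new = 0, which is stable.
        """
        a, history = 1.0, [1.0]
        for _ in range(n_steps):
            a = -(m_a / m_s) * a if explicit else 0.0
            history.append(a)
        return np.array(history)

    # Light structure (m_a > m_s): explicit coupling blows up, implicit does not.
    print(toy_added_mass(m_s=0.5, m_a=1.0, explicit=True)[-1])
    print(toy_added_mass(m_s=0.5, m_a=1.0, explicit=False)[-1])

The explicit recurrence diverges as soon as m_a > m_s no matter how small the time step is, which mimics the unconditional instability for light structures discussed above, while the implicit treatment remains stable.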
When you have a heavy structure, you will not see that effect. So the choice of the scheme also depends on the application you are looking at. And not only on the application: you may have a given model, you change the physical parameters of the model just a little bit, and then everything fails. So this is the implicit coupling: everything works well. And with the explicit one, it already blows up. And so: the instability disappears when the solid density is artificially increased, the instabilities are independent of the time step, and the instability is sensitive to the length of the domain. Why the length of the domain? Because the added-mass operator depends on the shape of the domain, and its eigenvalues determine the range of stability, which depends on the length and on the radius of the tube. So maybe I will stop there. Thank you.
Many physical phenomena deal with a fluid interacting with a moving rigid or deformable structure. These kinds of problems have a lot of important applications, for instance, in aeroelasticity, biomechanics, hydroelasticity, sedimentation, etc. From the analytical point of view as well as from the numerical point of view they have been studied extensively over the past years. We will mainly focus on viscous fluid interacting with an elastic structure. The purpose of the present lecture is to present an overview of some of the mathematical and numerical difficulties that may be encountered when dealing with fluid–structure interaction problems such as the geometrical nonlinearities or the added mass effect and how one can deal with these difficulties.
10.5446/57375 (DOI)
Thank you, Emmanuel. I would like to thank the organizers for giving me the opportunity to give this lecture, and I would like to thank the SMF and the SMAI for letting us organize this CEMRACS, which is a very nice feature of the mathematical community in France, I guess. So I will talk about wind energy in my first talk, and I will give a second one related to what we have heard this week regarding optimization for bioreactors — I am talking about the talk of Amira and the one of Simon yesterday — so it will be related to what you heard this week. Both of my talks will be approximately 30 minutes; of course, don't hesitate to ask questions within the talks, it's quite informal. So this first one is a work done mainly with Mireille Bossy. Mireille is a research director at Inria Sophia Antipolis; she was actually my postdoc advisor ten years ago at Sophia, and then I moved to Grenoble to do some fluid mechanics, but I keep a strong collaboration with Mireille. And we work with Philippe Drobinski, who is a geophysicist at IPSL, École Polytechnique and CNRS, and Christian Paris, who is an engineer at Inria Chile, where we also work on this wind energy production theme. Most of the work that will be presented here has been funded by Inria, of course, and by ADEME. So, SDM stands for stochastic downscaling method; each time you see SDM, that's what it means. The challenge is to provide small-scale simulations of the wind at the scale of a wind farm, for example. Unfortunately, for the moment, the mesoscale simulations, the meteorological tools that we have today, are not able to provide a wind simulation on a finer mesh than maybe one kilometer in the horizontal and a few tens of meters in the vertical. So if you want to consider an area of a few square kilometers for a wind farm, you have to downscale your model to get finer information. For that, of course, in fluid mechanics and geophysics there are a lot of tools based on grid refinement, for example; I am not at all going to talk about that. What I want to introduce today is a quite new method based on stochastic algorithms and particles, which uses the mesoscale information provided by some deterministic simulations — it can also be provided by data or whatever you want — and the stochastic downscaling method then launches a small-scale simulation on the region of interest, such as a wind farm. So SDM, and WindPoS, which is the software related to this method, work on a smaller domain and a finer mesh, with boundary conditions, as I told you, such as mesoscale data provided by Météo-France or WRF, the traditional meteorological solver used in the US and in Europe; it can also be reconstructed boundary velocities from measurements, data, whatever you want. So the principle is the following: you throw a large number of particles inside your domain and you run a Lagrangian model, which I will present afterwards, for which we use a fluid-particle discretization, and these fluid particles follow some stochastic differential equations that I will introduce. The main point of that kind of simulation is that, thanks to the particle discretization that we use, we can of course average the particle properties to obtain mean velocities at some locations in the fluid.
But we can also obtain turbulence and second order and higher momentums of the velocity for example which means that together with the mean information that is required to obtain so for example the wind speed you can also have some error bars and confidence intervals. That's very important because the stochastic nature of the model allows to obtain both the mean quantities and the second moment. So again the computational domain is one or several cells of a large scale simulation. We want to downscale again. Boundary condition I told you about. And then inside our domain we use the Slagengine model which has no stability condition. Actually I will come back to that. And the statistical advantages that I just told you about which is that to obtain the error bars you don't need to run the code many times with a different boundary condition, different initial conditions, etc. and do some sensibility analysis. You can do this directly thanks to the stochastic nature of the model. So SDM is somehow a new method. Actually it has been developed in the 80s by Stephen Pope, a physician in the US. He only used this for reactive fluids at very small scales. And the main point of our work was to use that kind of method but at a much larger scale for geophysical fluids. And of course it raises new problems in modelling numerical discretization, well-posedness of stochastic systems, etc. Okay. So the outline is the following. I will do a brief introduction on turbulent fluids. Then I will talk about the SDM model. And then I will talk about validation because we have new results regarding this. So the Reynolds decomposition, maybe some of you know about that. So you just write your velocity, pressure and whatever parameter function that you have as the sum of its mean and its turbulent part or its deviation to the mean. And this deviation depends on the alia omega. And of course you recover the mean part thanks to an averaging and integral over dp which is the measure. And you plug this into the Navier-Stokes equation or Euler equations as usual. And if you take the brackets of this, of course, you have the linear parts provide the bracket terms like this one. And for the nonlinear parts, you have some kind of problem which means that if you call this bracket of Ui, if you call it Vi, for example, you don't have an equation for Vi. Because in the equation of Vi, you have Vi here, you have Vi here, you have P here, okay. But here you have quantities that depend on Ui, on Uij, okay, here. Thanks to the nonlinear term. So you cannot close the system. That's the traditional problem with this decomposition. So you can imagine that you can try to obtain the equation for the second moment. So you write the equation of the second order, you take the brackets of this. So if you call it Vi, again, you can try to have an equation for Vi. But if you do this because of the nonlinear terms, you obtain a third order terms, et cetera. So each time you want to have an equation for the nth moment of the velocity, you have the momentum number n plus 1 which comes into the equation and so on. So you cannot close the system. So you have, of course, different ways to close that. So the literature is tremendous on that point and, of course, the main thing is to give a parametrization of that kind of term. And if you do it in the first equation, you call it closure of order 1, order 2 if you take two equations, et cetera. So that's the way Rantz, which holds for Reynolds average Navier-Stokes equations, works. 
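Written out, the decomposition and the closure problem described above look like this (standard notation; ⟨·⟩ is the averaging operator):

\[
u_i=\langle u_i\rangle+u_i',\qquad \langle u_i'\rangle=0,\qquad
\langle u_i u_j\rangle=\langle u_i\rangle\langle u_j\rangle+\langle u_i' u_j'\rangle,
\]

so averaging the nonlinear term of the Navier–Stokes equations leaves the unknown Reynolds stresses ⟨u_i'u_j'⟩ in the equation for ⟨u_i⟩, the equations for ⟨u_i'u_j'⟩ involve third-order moments, and so on: the hierarchy never closes by itself and has to be modelled at some order (first-order, second-order closures, etc.).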
So in the Lagrangian approach, we will describe the fluid thanks to particles with a state vector which is position, velocity, and some other physical quantities such as temperature or salinity if you are in the ocean, humidity if you are in the atmosphere, et cetera. But I will focus on X and U. And this is typical of Landgeuvent type models. And the associated system of SdEs, of course, that we are going to write, have to be consistent in a way with Euler or Navier-Stokes. If you want to downscale Navier-Stokes or Euler in a small box, the model that you run in the small box has to be consistent in a way with the large-scale model. So this is how it works. So of course, dxt equals ut dt. That means that the derivative of the position is the velocity. No surprise with this. And this equation is given for the velocity of the particles. I will forget the third one because I won't use it afterwards. So here you will cover the pressure gradients. And some terms which I will talk about later on. And please notice that there is a Brownian motion w at the end of this equation that makes the system stochastic. So how do you compute brackets of U and brackets of Ui, Uj? So again, as I told you, we are going to run numerically some particles. And at the end, what you want to know is what is the wind velocity at this point. For example, if you put a wind farm and a windmill at some precise location, you want to have the wind velocity at this location. For that, we will average the velocities of the particles that are located around this point. And this will provide the wind velocity at this point. So we go from the Lagrangian formulation to an Eulerian quantity, thanks to Monte Carlo averages, actually. And this is what is written here, actually. The Eulerian density of the particles is computed thanks to an average of the Lagrangian one. In other words, and maybe you can just focus on this one. If you want to know the average of U at some point X, you just multiply Vi by the Lagrangian density, and you divide by the average, which exactly is the formula for conditional expectation in probability. So this means this is the expectation of UiT, knowing that XT equals X. That means that I take all the particles that have a position XT equals X, and I compute their average velocity. And this gives me exactly the Eulerian velocity at position X. Again, remember that a particle carries both its location and its velocity. So when you compute a conditional expectation knowing that X of the particle equals to an X that you chose in the domain, then you are considering the computation of Monte Carlo average of particles around the position X. Of course, this is from the continuous point of view, and XT equals X in the numerical context has to be defined with a nearest gate point or some approximation. Okay, I don't want to go too much into details with the Fokker-Planck equation, but we can discuss about that later on. From the density of the equation, from the equations I showed you here, you can compute the Fokker-Planck equation associated to the density fL, which is a PDE. And if you take this PDE and you multiply it by dv d phi and you integrate, then you recover exactly the mass conservation equation. If you do the same thing, but you multiply by v dv d phi, then you recover the first momentum equation of your Euler or Navier-Stokes equation, Euler here. And if you multiply by v i, v j, et cetera, you obtain exactly the other equations that I showed a few slides ago on the second order. 
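A minimal sketch of the Lagrangian-to-Eulerian step just described: estimate ⟨u⟩ and the turbulent kinetic energy in each cell by averaging the particles that fall in that cell. The uniform vertical binning and the array layout are illustrative choices of mine, not the discretization used in the actual SDM code.

    import numpy as np

    def eulerian_moments(x_p, u_p, z_min, z_max, n_cells):
        """Cell-averaged mean velocity and turbulent kinetic energy.

        x_p : (N, 3) particle positions, u_p : (N, 3) particle velocities.
        Returns the per-cell mean velocity (n_cells, 3) and
        k = 0.5 <|u - <u>|^2> (n_cells,), for a uniform vertical slicing.
        """
        z = x_p[:, 2]
        edges = np.linspace(z_min, z_max, n_cells + 1)
        idx = np.clip(np.digitize(z, edges) - 1, 0, n_cells - 1)

        u_mean = np.zeros((n_cells, 3))
        k = np.zeros(n_cells)
        for c in range(n_cells):
            in_cell = (idx == c)
            if not np.any(in_cell):
                continue                       # empty cell: leave zeros
            u_c = u_p[in_cell]
            u_mean[c] = u_c.mean(axis=0)       # Monte Carlo conditional expectation
            fluct = u_c - u_mean[c]
            k[c] = 0.5 * np.mean(np.sum(fluct**2, axis=1))
        return u_mean, k

This is exactly the conditional-expectation idea above: the mean at a point x is the average over the particles whose position falls in the control cell around x, and higher moments (hence error bars) come for free from the same sample.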
So in summary, and here I use quotes of course, but it's just to say that if you take brackets of our Lagrangian model with the particles, then you recover your Euler model that I showed at the beginning. So this is just to say that the Lagrangian model that I showed at the beginning was compatible in a way with the Navier-Stokes or Euler equations that were running at the large scale. And of course, this has been studied really seriously by Mireille and a former PhD student Jean-François Javier with very nice results of the existence and uniqueness of solutions. Okay. So now the specific SDM model, I just wrote more precisely the terms that were in red a few minutes ago. So this is what is running our code. So it's, okay, again, it's an equation very simple on the position. If you derive the position, you get the velocity. And for the velocity, we have this one where k is the turbulent kinetic energy. So it's actually the sum of the second order, second momentum of the velocity. And epsilon is the production of kinetic energy, and it has to be defined, and I will come to it afterwards. And from the numerical point of view, you will see that we use a kind of prediction-correction scheme as a chorion and temam. And so you recover the pressure thanks to Poisson problem, classically, as we do in fluid dynamics. So it means that the numerical scheme is done in the Lagrangian part for the particles. And when you correct the pressure, it's done in the Eulerian system. So you have to go back and forth from the Lagrange to the Euler coordinates every time in the code. But I will come back to it. So to this, you have to add some boundary conditions. And in the SD configuration, that's how it works. So the x is actually just the velocity that is imposed at the boundary of our domain. It's boundary condition. And that means that when the particle reaches the boundary, then these two terms make the particle get the correct velocity imposed by the large-scale simulation. This is written here. This ensures that the conditional expectation of view with respect to x t equals x, where x is on the boundary, is exactly vx. So just a few words on the boundary layer theory. So we are interested in a domain which is approximately 1,000 meters high. It corresponds to the size of the atmospheric boundary layer. And inside this, you have a sublayer, which is called the surface layer, where important things occur. And inside this, you have the roughness layer, which is, again, a smaller layer. But actually, this one is not taken into account in the... Well, it's taken into account, but it's not simulated by the code. It's just parameterized thanks to a roughness parameter. I will come back to it. So in our code, we simulate this. But with the parametrization of the blue part, which is very difficult to simulate at most scales, of course. So on the ground, the goal is to account for the log low in the surface layer, in the green one. The log low is, I guess, lots of you know about this. But the thing is that you want to reach a vertical profile in z of this kind with u star, which is given by the second momentum, and k is a constant. And z0, again, is the roughness length, which is a parameter which provides information regarding the configuration in the sublayer. So for example, on the ocean, z0 is almost zero. But if you have buildings or things like that, z0 is a few tens of meters or something like that. So it's a typical length that is used in the model. But it's a parameter. 
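For concreteness, here is one Euler–Maruyama step of a generic simplified Langevin model of the Pope family, which has the same structure as the particle velocity equation shown above (pressure-gradient drift, relaxation towards the Eulerian mean at rate ε/k, Brownian forcing). The coefficient C0 and the exact drift are the textbook ones and are assumptions on my part, not the precise SDM coefficients.

    import numpy as np

    C0 = 2.1  # Kolmogorov constant commonly used in the simplified Langevin model

    def langevin_step(x, u, u_mean, grad_p, k, eps, dt, rng):
        """One Euler-Maruyama step of a simplified Langevin (Pope-type) model.

        x, u     : (N, 3) particle positions and velocities.
        u_mean   : (N, 3) Eulerian mean velocity interpolated at the particles.
        grad_p   : (N, 3) mean pressure gradient divided by the density.
        k, eps   : (N,) turbulent kinetic energy and dissipation (k > 0 assumed).
        """
        drift = (-grad_p
                 - (0.5 + 0.75 * C0) * (eps / k)[:, None] * (u - u_mean))
        noise = np.sqrt(C0 * eps * dt)[:, None] * rng.standard_normal(u.shape)
        u_new = u + drift * dt + noise      # velocity update (SDE part)
        x_new = x + u_new * dt              # position update: dX = U dt
        return x_new, u_new

    rng = np.random.default_rng(0)
    # u_mean, grad_p, k, eps would be the Eulerian fields estimated from the
    # particles at the previous step (see the cell-averaging sketch above).

In SDM these Eulerian fields are themselves re-estimated from the particle cloud at every step, which is what couples the particles together.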
And in the whole domain, so on the whole atmospheric boundary layer, we use this formula for the pseudo dissipation. Remember that epsilon was coming into the equations a few minutes ago. And the LM is the mixing length, so it explains how the turbulence is created on the ground and in the layer. Again I don't want to talk too much about this because it could take a few hours, actually. But I have lots of documents regarding this. So if you have questions and queries about this, I can provide lots of information. So again, k, you can compute it thanks to your particles with this formula. It's again a conditional expectation. And epsilon is parametrized thanks to this formula. And then we can go to the numerics. So again, this is our domain, and as I told you, we drop a lot of particles inside the domain. We have an external force, an external velocity which is provided by the large-scale simulation. And if you want to compute f of u, whatever f you have, you do just a Monte Carlo average of the particle field, okay? So you compute f of u, k, k is an index for the particle. For every particle, you compute f of u, k, and you just make sure that this particle k is located in the close to the position x, okay? So it means that if x here is away from the position of x, this capital X, k, then this is equal to zero, okay? And on the contrary, if it's located in the control cell around x, then this is one, and this is what is written here, okay? So it's just an indicator of whether your particle is close to the position you're looking at or not. And if yes, okay, this is one, this is the sum of the j corresponding to the particles at this position, and that's it. So it's just an average, f of u is just an average of the f of u, k, where k is the index of other particle fields. So this is one time step of the algorithm. So you start by computing a new position, sorry, thanks to the former velocity, and you compute thanks to a Scheme for stochastic equations, these terms, you forget the pressure term for the moment. Then you make a correction of the particles because we rely on the hypothesis that the density is constant here, so which is a very strong hypothesis. We agree that it's not correct at this scale. I mean, in a few centimeters or within a meter of vertical size, it's reasonable, but over tens of meters it's not, so that's something that has to be improved. But for the moment, we rely on this approximation. So if we want to have a constant density, that means that from the particular point of view, we have to have a uniform repetition of the particles in the domain. And since we've just moved the particles, thanks to this equation, we have to do some optimal transport problem, actually, to remove the cloud of particles from the position it has to a new one, which is a uniform distribution. And why is this an optimal transport problem? Is that you don't want to move the particle too far from the place there because what we are doing here is not physical, okay? So you don't want to take some physical information from one part of the domain to the other part. You want to stay close to the place where you were. So that's why it's an optimal transport problem. And this step has been done in the Sembrax 2007. The code has been written and the method has been validated, et cetera. And the last step is to correct the particle velocity this time, not the particle position, but velocity, thanks to the classical Poisson equation for the pressure. 
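Two standard formulas help fix ideas here: the neutral surface-layer log law invoked for the wall model, and the pressure-projection (Chorin–Temam type) correction used at the end of the time step to enforce the constant-density constraint on the Eulerian grid. Both are written in their generic textbook form, not necessarily in the exact form implemented in SDM.

\[
\langle u\rangle(z)=\frac{u_*}{\kappa}\ln\frac{z}{z_0},\qquad
\varepsilon(z)\simeq\frac{u_*^{3}}{\kappa z}\quad\text{(neutral surface layer, }\kappa\approx 0.41\text{)},
\]
\[
\Delta p^{n+1}=\frac{\rho}{\Delta t}\,\nabla\cdot\langle u\rangle^{*},\qquad
\langle u\rangle^{n+1}=\langle u\rangle^{*}-\frac{\Delta t}{\rho}\,\nabla p^{n+1},
\]

where ⟨u⟩* is the Eulerian field assembled from the particles after the prediction and redistribution steps, and the correction is then passed back to the particle velocities.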
And here, what I want to stress on is that it's an Eulerian equation, okay? So after step one and step two, you compute from the Lagrangian information on the particles, you compute the Eulerian field. And this is only an equation on the Eulerian field, okay? So it's pretty fast, actually, because you can have many, many, many particles. But here in the box, this kind of thing can be done very easily thanks to fast Poisson solvers such as fish pack or that kind of thing. And of course, classically, you correct the particle velocity from the particle field. Okay. So validation now. We just, we compare the simulations with the LES method that is used by, classically, let's say by geophysicists and in particular, in particular, Philippe Drobinski. So the computational domain is three kilometers, one kilometer large in horizontal dimensions and almost 800 meters in the vertical. So it's approximately the size of the atmospheric boundary layer. Okay, and what you can see here is three couples of curves. So these are the variances of U, V and W. And each time you have computed with LES or with ARM model. And okay, they look very similar even if in the close to the ground, okay, sorry, this is the altitude and this is the variances in Y. So close to the ground, close to X equals zero, which is Z equals zero actually, we have a few discrepancies but afterwards it's pretty good. And here it's the same with the, with the covariances. Yes, covariances. So U, W, V, W, et cetera. Okay. So it can compare pretty well to LES and here are the main momentum. So that's the U velocity, U in the west-east direction and this is north-east. And okay, you can see that in the 100 first meters, you have this log low that I talked to you about. And again, this is a comparison between LES and the SDM, which is at several times and you know, even after a long time of simulation, it's really good. Again we have some discrepancies on the floor for the 4V. Okay and now towards wind farm simulation. So this was validated with no wind farm inside the domain. And Christian Paris put some actor disc model into the numerical code to simulate the presence of windmills. Okay, so I can show you what typical production of a kinetic, turbulent kinetic energy beyond the mill. So you have a constant velocity in U coming from the left and the mill turns and this is the production of a turbulent kinetic energy behind the mill. Okay. This is done with Numesis, the numerical platform of VINREA on anti-police. And finally, we just validated this model thanks to realistic wind data provided by Fernando Portear-Hel in a PFL. So he made up a small wind tunnel, one meter high, in which he implemented some windmills and many measures. He has many data on this. And okay, this is the configuration. You have a mill at maybe one meter from the entrance. You have a constant wind coming from the left and you want to see the profile of the velocity everywhere. So this, as you can see, the wall moves, it's just to see in 3D the velocity field from the numerical point of view. Okay, so that's just the configuration. And what you can see here is the velocity computed at several spots. So this is X. You have your mill here. The wind is coming this way. And the first simulation is computed here. So this is the vertical profile of the u-velocity just behind the mill. So as you can see, there is this kind of belly here, which is due to the fact that the mill pumps some energy from the wind to produce some electricity. Okay, of course. 
So this is why the u-velocity diminishes just after the mill. So in blue, you have the SDM simulation. In green, you can barely see it, but it's the measurements from Porte-Aguel. And in red, it's the Jensen wake effect model, which is classically used by wind solvers for EDF and the GDF suede, et cetera, in their... So what is interesting here is that SDM does better. And the more you go on the right, the further you go from the mill. And at the end, you almost don't see anymore the impact of the mill on the vertical profile. And if I drew a last plot, you would have that kind of thing, which would be the profile without any impact of the mill. So that was the validation we did with Fernando Porte-Aguel. And sorry. And I think that's it for this presentation. So again, a few conclusions and perspectives. Lagrangian formulation. So this is new for downscaling techniques at that scale. Theoretical problems in probability theory rates. So a few achievements have been done by Mireille Bossin and Jean-François Javier, as I told you. So the link with meteorology has been done thanks to a calibration and comparison, both with experimental data and LES methods that are classically used in that field. And as I told you at the very beginning, our numerical simulations also provide some variability on the fields, which is interesting. Of course, we have a lot of improvements to do. Non-flat domains, it's almost done. Actually, we are dealing with non-flat domains, which is not very easy, but it's almost done. But the non-neutral case means row non-constant. So it would imply probably an equation on the temperature, because you cannot take rid of the temperature evolution at that scale. In the wind tunnel, the reason why our simulations are quite good in comparison with the data is that it's one meter high, the temperature is constant, and everything is in a lab, clean, et cetera. But we don't want to test SDM now on a wind farm simulation. We are sure that the prediction would not be correct, because we are missing some physics still. Okay. And of course, we need to do some optimization and parallelization, and I wanted to end with this because one of the key arguments of using a stochastic and particular method is that it can be parallelized very easily, well, more easily than with the classical deterministic tools, because there is no interaction between the particles themselves. They actually interact through the Eulerian field. Okay. In the equation, you have an equation on the particle, and you have a link with the Eulerian field. So of course, they interact with each other from the continuous point of view, but in the numerical simulation, they interact through the Eulerian field. So if at every time step you compute the Eulerian field, then you can completely parallelize the evolution on the particles, which we think is interesting, but still we have to do it. Okay. And you can browse the SDM.inria.fr. We have some more explanations and simulations. I'm done for this first talk. Thank you for your attention. Okay. So for this first talk, we have to do some questions. Maybe we can take time for a few questions. Please. Thank you. So you said that you will talk about the fact that your method do not have to respect the CFL condition, but in fact you didn't say no. Yes, yes, you're right. Thank you. Thank you for that. Yes. Actually, there is a hidden CFL condition. Well, on stochastic particular algorithms, there is no. 
But here, when we do the optimal transport thing, for example, we don't want to move the particles too far from their previous position for physical reasons. And that is actually a hidden CFL because it means that you don't want the particle to cross too many cells in one time step. So it's exactly a CFL condition, actually. So it's not, well, it's not, there is no theory about this, but in numerical cases, we could absolutely observe that we could not take any time step. There was a condition on the time step, of course. And linked to the, of course, to the side of the mesh. Okay. Is there another question? I have a question. You said that your method would not work on a real wind farm. So for the time being, what is used to build or to make choice concerning wind farms? Yes. For the moment, it's, you have softwares like, again, what is used in the industry. It's mostly interpolation on the large scale. So of course, they use some meteorological data provided by WAF or whatever weather forecasting system. But at the end, when they want some information locally, it mostly works with interpolations. So there is no physics. What I mean is there is no physics in the downscaling systems that are used today in that kind of... But how do they take into account the presence of mill and so... Presence of? Of mill. For that, they use the same kind of thing with the actuator disk. It's quite, well, it's not difficult. I mean, it's a force, mechanic force that you add at the right-hand side of the equations. And it's not a big deal. I mean, the mill. But the most important point is to have the velocity at some specific locations, getting rid of the mills for the moment. Even getting rid of the mills, the way they downscale the information is done thanks to interpolations, which is completely false. But don't you think that your method, even at its level of development, wouldn't give better result than pure interpolations? I don't know. I don't know because the fact that the temperature is not taken into account is really a big deal, I think. Because the wind, the thermal effects near the ground are really, really important. So I wouldn't, I don't want to rank too bad methods. But these effects are taken into account in the interpolation method? Yes, yes, yes. Okay. Yes, because they interpolate the velocity and the temperature and everything provided by the large scale. So in a way, they are taken into account. Yes. Okay. So another comment or question. So thank you again. Thank you. So. Thank you.
In this talk, I will introduce the stochastic downscaling method (SDM) that borrows techniques from small scale turbulence (S.B. Pope) for the simulation of wind flows thanks to hybrid methods (deterministic-stochastic). I will present the downscaling method used to refine a wind forecast at a sufficiently small scale, and the way wind turbines are implemented in the model. Comparisons with traditional numerical methods (LES) and validation w.r.t. experimental data will also be provided.
10.5446/57355 (DOI)
So, my talk: we will start with an introduction, in which I will show you the kind of problems we have in reservoir simulation. I will also present the specifics of the linear systems we have in reservoir simulation. Then I will present some results of our work on algebraic multigrid, which is today the reference solver used as a preconditioner for the pressure part of our systems. This was more a matter of putting together different ingredients that we found in the AMG literature in order to improve the scalability of AMG for reservoir simulation. Then I will present another approach, which consists in starting from a very scalable preconditioner and trying to apply ideas from Laura on enlarged Krylov methods, which Usam will try to apply to GMRES for reservoir simulation. And then I will give some conclusions about this project. So, what we are trying to do in reservoir simulation is to model the flow of fluids in the subsurface. What we start from is a model of the properties of the rock and of the porous medium. So here, for example, with this kind of model you have production wells, where you hope to recover oil, depending on the field. And the main purpose of reservoir simulation is to try to predict what the production of oil and gas will be, in particular at the production wells. In particular, what we want to forecast is the amount of oil that can be recovered from the field; this is an important economic question for us. So we have production wells and production curves: the oil production and the water production. At the beginning we produce oil, that is the production plateau, and then we start to get water: there is a water breakthrough arriving at the production wells. And what we try to do is to match our numerical models with what we have observed in the field. We usually have a long history of data for each well, and over this history we try to match the results by adjusting the reservoir properties so that the simulation fits the real measurements. This is what we call history matching. This requires quite a few runs to match these results, and then, once we have done that, we can start to make forecasts based on the model we have obtained. Another typical problem we have is that, in some situations, we want to use enhanced oil recovery techniques, which means that we do not only inject water, we can do something more complex. For example, if you have a very viscous oil, you can put polymer in the water, for instance, in order to increase the viscosity of the water, and we want to model that. And this is quite complex, because the mixture of polymer and water is non-Newtonian, and the physics is quite complex to model. So, for example, here... Oh, sorry, I have to start a small video, a small movie, which shows the injection of water into a viscous oil.
And what you see is what we call viscous fingering, which is a complex phenomenon to capture numerically. For this kind of simulation we usually need a very large model, because you have to capture the perturbations at a very small scale to obtain these fingers. So now let us look at the equations we solve. It is fairly simple in mathematical terms, because what we have at the continuous level is a material balance. What it says is that we have this quantity here, which is the flux, the velocity of the fluid; here you have the density. Of course, the variation of material inside a cell is related to the fluxes, and you have a source term which comes from the wells. And of course the other equation we have is Darcy's law, which governs the motion of the fluid in the porous medium. As we know, for Darcy's law in the single-phase case you only have these terms, which vary only in space. But when you have multiphase flow — because you have oil, water and gas — you have this term, the relative permeability, which is very complex in some cases, because first of all it depends on the saturation, and it can also depend on the history in time, on the saturation history in the cells. So it can be quite complex, numerically, to get the right modelling of this term. And the Darcy fluxes involve the saturations between two cells. The discretization of these equations is done, in most industrial simulators, using a finite volume scheme. There is no deep reason to use this scheme, except that it is conservative — and we can also consider that the fact that we have always used it is, in fact, the most important reason why we keep using it. Once you have discretized this, you write for each cell a material balance where F is the flux entering the cell; here you have the variation of material inside the cell, and here you have the source term. At the nonlinear level, we have to drive this residual to zero at each time step, knowing that the unknowns are the pressure and saturation variables in each cell. So we have a very classical scheme in terms of simulation: we do a fully implicit simulation, where the implicit unknowns are the pressures and the saturations. In some situations it is possible to use an IMPES scheme (implicit pressure, explicit saturation), for example when we have a complex compositional model and we know that the time steps are compatible with the CFL condition. But in general, in reservoir simulation, we use a fully implicit scheme. So you have a Newton loop and, of course, inside this loop, once you have linearized the equations, the most CPU-consuming stage is the solution of the linear system. And this is what we are going to look at in the next slides. One of the big difficulties in reservoir simulation is that our simulations do not scale very well, and the main reason for that, as we will see, is the solver. As you can see here, this is a scalability curve for a reservoir simulation on a rather large model; it was a model with 13 million cells.
So at this point you have about 50,000 cells per core, and as you can see the scalability degrades quickly. What we are trying to do is to improve this situation because, as I said, there are a lot of runs to do, for history matching for example. So if we look at the parallel algorithm we use: outside the solver, the parallelism is very simple in reservoir simulation, because we have a finite volume scheme. It is a very classical scheme, because you have to compute derivatives for each cell, and you need the neighbouring cells of the reservoir to compute these derivatives. So what you have to do, to compute these derivatives that build the Jacobian matrix, is to use this simple scheme with ghost cells that you have to communicate at each iteration. For this part it is very simple; the complex part is, of course, the solver, where you have a lot of communication, and as you can see, in a parallel computation most of the time is spent in the solver — it amounts to 60% and even up to 80% of the computation. So now let us look at the linear systems we have to solve. We usually have two parts in the linear system. Here is the part related to the reservoir grid, that is, the part discretized in space. Here, for each cell, as I said, you have a mass balance equation for each phase, with the fluxes obtained from the neighbouring cells, and the last equation here is the sum of all the mass balance equations; usually we align the last column, in each cell, with the pressure unknown. So this part concerns the reservoir grid, and here you have the coupling with the well equations. What you have there generally depends on the complexity of the network, because you can have a surface network connecting the wells; if you do not have a surface network, it is just a diagonal. And a well, of course, goes through many cells, so it connects many cells of the reservoir grid. So the specificity of these reservoir systems is that they combine an elliptic part, which comes from the pressure unknowns, with a transport part for the saturation unknowns. And the idea of the reference method that has been used since the eighties in reservoir simulation, which is called CPR, for Constrained Pressure Residual, is to try to separate the elliptic part from the transport part. So the idea of the method is that here I have made a permutation where I put all the pressure unknowns at the end, so that you can see a sub-system that concerns only the pressure unknowns. And what we do is to obtain a good approximation of the Schur complement obtained by eliminating everything else in this system; you know that if we did this exactly, we would end up with a system that, of course, could not be solved in a reasonable amount of time.
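In matrix form, the construction just outlined can be sketched as follows (my notation: s gathers the saturation-type unknowns, p the pressures):

\[
\begin{pmatrix} A_{ss} & A_{sp}\\ A_{ps} & A_{pp}\end{pmatrix}
\begin{pmatrix}\delta s\\ \delta p\end{pmatrix}
=\begin{pmatrix} r_s\\ r_p\end{pmatrix},
\qquad
S \;=\; A_{pp}-A_{ps}A_{ss}^{-1}A_{sp},
\]

where S is the exact pressure Schur complement; eliminating the saturations exactly would fill in S, which is why the sparse approximation described next replaces A_{ss} by a (block-)diagonal matrix before the elimination.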
So the idea is, in fact, to obtain a good approximate Schur complement which is very close to the true one — or at least one that gives pressure solutions that are not bad compared to the true pressure solution — but which is still sparse. To do that, the idea is to replace this matrix here by a diagonal, because if you do that, this matrix has the same sparsity pattern as that one, and when you eliminate this equation into that one you do not change the sparsity pattern of this matrix. The diagonal matrix here is obtained by summing, in each cell block, the entries of the column into the diagonal, and this is linked to the physics, because all these terms are in fact derivatives with respect to the saturations. What we do, in fact, amounts to deriving a pressure system that is close to the one we would obtain with an IMPES-type scheme — implicit in pressure, explicit in saturation — and that is why this approximation is good for the pressure system. Once you have this system, you solve the pressure part, you get a tentative solution for the pressure unknowns, and then you re-inject it into the global system by a simple extension, and there you apply a second preconditioner which is usually very simple, like a block ILU(0) — one block per domain — on the global system. So what you have is, at the outer level, an FGMRES method with a preconditioner that consists of two stages. The first stage is to solve this pressure system, whose matrix is not symmetric but is close to a symmetric positive definite matrix; what we usually do here is use an algebraic multigrid method, which is the reference method for this kind of system. Then you apply the second part of the preconditioner, which is a simple ILU(0). So at each application of the preconditioner you improve the pressures first and then the other unknowns. And since we do not need a very accurate solution for the pressure, you usually only need one multigrid V-cycle for the pressure system, because what we want, with the pressure part of the preconditioner, is to capture the long-range mode of the error that is due only to the pressure part; to capture that mode we do not need an exact solve. And of course this is the critical part for scalability because, as you know, AMG is a very scalable solver in the weak sense — that is, if you increase the size of your model together with the number of processors you use, you generally get very good scalability, meaning the time stays roughly constant — but in strong scaling terms it is not very good, essentially because of the first levels of AMG. Let me recall the principle of AMG: the principle is to build an algebraic coarsening of the initial problem — you build a coarser version of your original system to get a smaller model — and you do this level after level.
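A minimal Python sketch of the two-stage CPR preconditioner just described, as it could be used inside a (F)GMRES iteration. `p_idx` (the pressure unknowns) and `S_p` (the approximate pressure Schur complement) are assumed given; `pyamg` stands in for the production AMG, and `spilu` with a unit fill factor stands in for the block ILU(0) — this is a sketch, not the speaker's implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import pyamg  # algebraic multigrid, used here for the pressure sub-system


def make_cpr_preconditioner(A, p_idx, S_p):
    """Two-stage CPR: (1) one AMG V-cycle on the approximate pressure
    Schur complement S_p, (2) an ILU sweep on the full system A."""
    ml = pyamg.smoothed_aggregation_solver(sp.csr_matrix(S_p))   # AMG setup phase
    ilu = spla.spilu(sp.csc_matrix(A), fill_factor=1.0)          # rough stand-in for ILU(0)

    def apply(r):
        # Stage 1: tentative pressure correction from a single V-cycle.
        dp = ml.solve(r[p_idx], maxiter=1, tol=1e-30)
        x = np.zeros_like(r)
        x[p_idx] = dp                                            # prolong pressure correction
        # Stage 2: ILU sweep on the updated residual.
        x += ilu.solve(r - A @ x)
        return x

    return spla.LinearOperator(A.shape, matvec=apply, dtype=A.dtype)


# Illustrative usage (scipy's gmres stands in for the flexible variant):
# M = make_cpr_preconditioner(A, p_idx, S_p)
# x, info = spla.gmres(A, b, M=M)
```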
At each level you apply a simple preconditioner, called a smoother, to capture the error frequencies that are visible at that level of the grid. Of course, the smoother at a given level captures the frequencies of that grid, and when you get down to the coarsest system you capture the lowest frequencies of your error. Algebraic multigrid, like the original geometric multigrid, has the property of being optimal, that is, its cost generally scales linearly with the dimension of the problem. But the difficulty, of course, is that the algorithm used to build this coarse hierarchy is quite complex, because you have to make these choices every time you have a new system: the coarse operators depend on the coefficients of the matrix. So, going back to the Newton loop: at each Newton iteration we have to recompute this hierarchy, and of course you need communication to do that — for the solve phase, but above all for the setup phase, where you build the whole hierarchy of grids, you have a lot of communication. Here you can see what happens when you use many cores: at the coarsest levels you end up doing mostly communication, which kills the scalability. So, typically, the time in a reservoir simulation is largely spent in the AMG solver. On this graph you can see the time for each number of MPI processes — we also use threads: for each MPI process we use a certain number of cores with threads; typically what we do is put one MPI process per CPU socket and use all its cores with threads, which is the best strategy. And here you can see the scalability obtained: for each number of processes we have multiplied the time by the number of processes, so perfect scalability means that each part of the algorithm stays at the same height on this graph. As you can see, as the number of cores grows, the AMG part of the time keeps increasing more than the rest; the part of the work we do in the CPR outside AMG is negligible compared to what we do in AMG, and what is done in the simulator itself is also reasonably scalable. So what we did was to try to improve the AMG solver. This work was done in collaboration with CERFACS: we hired Pavel, who spent a year at CERFACS, and he reviewed all the AMG algorithms that exist in the literature. There are mainly two classes of AMG. You have aggregation strategies, in which the coarse grid is built differently than in classical AMG: classical coarse grids are built by selecting coarse nodes among the fine-grid nodes, and then you build a restriction operator from the fine grid onto this coarse grid; aggregation is a more aggressive strategy to build the coarse grid, where you build a sort of small domains, aggregates, and then you have a very simple restriction — you assign one coarse node to each aggregate you have created — with different numerical criteria to choose the coarse nodes and to build these small aggregates. The difference between the two approaches is that with aggregation you end up with fewer levels in the AMG hierarchy, which means you have less communication to do, but the other side of the coin is that you generally end up with a convergence that is not as good as with the classical AMG scheme.
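A toy illustration of the aggregation idea contrasted with classical coarsening: aggregates are grown greedily from strong connections and a piecewise-constant tentative prolongation defines the Galerkin coarse operator. This is purely illustrative — not the heuristics of the package discussed here:

```python
import numpy as np
import scipy.sparse as sp


def aggregate(A, theta=0.25):
    """Greedy aggregation: each still-free node swallows its strongly
    connected, still-free neighbours. Returns aggregate ids, the
    piecewise-constant prolongation P and the Galerkin coarse operator."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    agg = -np.ones(n, dtype=int)
    diag = np.abs(A.diagonal())
    next_id = 0
    for i in range(n):
        if agg[i] != -1:
            continue
        agg[i] = next_id
        row = A.getrow(i)
        for j, aij in zip(row.indices, row.data):
            strong = abs(aij) >= theta * np.sqrt(diag[i] * diag[j])
            if j != i and agg[j] == -1 and strong:
                agg[j] = next_id
        next_id += 1
    # One coarse unknown per aggregate, constant over the aggregate.
    P = sp.csr_matrix((np.ones(n), (np.arange(n), agg)), shape=(n, next_id))
    A_coarse = P.T @ A @ P
    return agg, P, A_coarse
```

In a strongly anisotropic problem (the thin cells in the Z direction mentioned just below) such aggregates naturally align with the strong couplings, which is why the aggressive strategy shrinks the first levels so effectively.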
The result of evaluating these different AMG strategies was that the best setting for the reservoir pressure system is to start the grid hierarchy using aggressive aggregation for the first one or two levels, and then to switch to the classical strategy. The reason is that in reservoirs there is a very strong anisotropy in our models: the cells are very thin, in terms of physical dimension, in the Z direction compared to their areal extent, which means we have a very strong coupling in the Z direction, and the aggregation algorithms work very well in such a situation because you can decrease very quickly the number of nodes in your coarse grid. By combining these two ideas, and also by using strategies that consist in keeping the restriction and prolongation operators you have built — for example at the first Newton iteration — for the rest of the Newton iterations, you get a good improvement compared to what we could obtain, for example, with BoomerAMG, which is the reference package for algebraic multigrid. Here you have the time spent in the pressure resolution within the CPR algorithm; the dashed lines correspond to the multigrid package that Pavel wrote. This package combines the aggregation and classical ideas, but it can also — and this is the difference between the grey curve, for example, and the blue curve — reuse the restriction and prolongation operators. On a model like this one, of about a million cells, we get this speed-up compared to the best parameter tuning we could reach with BoomerAMG. It is not a very big improvement on a small model like that, because the improvements were made at the first levels of the algorithm, and here the model is not big enough for much time to be spent in those first levels. But if we look at much bigger models — this one is a multi-million-cell model — then, in the case where we recompute the setup at each iteration, we can reach a factor of 2 with 8,000 cores, and by re-using the grid hierarchy across the linear-system solves we can reach a speed-up of 5. But we still do not have the answer to the scalability problem because, as you can see here, the scalability remains rather poor with respect to the number of processors. So now the idea is to take another approach. In the first approach we started from the parallel AMG strategy and tried to simplify it to obtain something more scalable. Now the idea is to apply the work — the ideas — developed by Laura, which is to start from very scalable preconditioners but to combine them with Krylov methods that are more efficient than the traditional ones. I have used some of Laura's slides for this part. If we look at the GMRES algorithm, we know that communication occurs in the matrix-vector product, and that you also need,
for the orthogonalization in GMRES, global reductions — and that you need to compute norms in the algorithm. Of course, when you use more and more cores to parallelize this algorithm, these reductions become the main bottleneck of the GMRES method, and that is what we want to reduce. One thing you need to be aware of is that, in terms of parallelization, what matters is the number of messages you send, not really the volume of data; it is the number of messages in the algorithm that kills the scalability. The idea of the enlarged Krylov methods comes from block Krylov methods. You probably know that if you have a system with multiple right-hand sides — that is to say, several linear systems to solve, but with the same matrix — it is more efficient to combine the information from the Krylov spaces of all these systems than to solve them independently. The idea, in parallel, is to use a block Krylov method by artificially creating several right-hand sides out of a single system. Typically, what you have is this: you distribute the rows of your matrix using a graph partitioner. Here is a picture of the typical matrix you have in parallel: each processor gets the rows of the matrix associated with one domain, so on the diagonal you generally have most of the non-zero terms, and then you have the coupling between the domains here, which is quite sparse. The idea is to consider the multiple right-hand sides obtained by splitting: you have a system with one right-hand side, and you split this right-hand side into several right-hand sides, each of which has zero entries on all domains except one. These are the right-hand sides we are going to solve in parallel, and of course, once you have solved all these systems, you obtain the solution of your initial system by simply summing all the solutions. What is interesting is that you now have a much larger Krylov space: at each iteration you perform the same number of messages in your Krylov algorithm, but since you add more vectors to your Krylov space at each iteration, you improve your convergence, and so, ultimately, you have less communication in parallel. That is the main ingredient of this method. If we look at the GMRES algorithm, what changes is mostly what happens when you do the matrix-vector product, because now what we have is the product of the matrix with a block of t vectors at a time. Of course this also works with a preconditioner — you apply the preconditioner to the whole block as well. So what changes is this matrix-block product, and also the classical Gram-Schmidt orthogonalization. Here, what you do is a block version of the algorithm, and what changes compared to the original algorithm is that you need an additional step to orthogonalize, among themselves, all the new vectors that you have added at the current iteration. Here, what we use is a QR algorithm that performs a small number of communications — this version is a tall-skinny QR — and this is the basis of the first idea of enlarged GMRES.
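A small sketch of the "enlargement" step described above: one right-hand side is split into one column per subdomain (zero outside that subdomain), the block of columns is what the enlarged Krylov method iterates on, and summing the partial solutions recovers the solution of the original system. The index-set partition below is illustrative, not the output of a graph partitioner:

```python
import numpy as np


def enlarge_rhs(b, domains):
    """domains: list of index arrays forming a partition of 0..n-1.
    Returns B with one column per subdomain such that B.sum(axis=1) == b."""
    n, t = b.shape[0], len(domains)
    B = np.zeros((n, t))
    for k, idx in enumerate(domains):
        B[idx, k] = b[idx]
    return B


# Example: a 6-unknown system split over 2 subdomains.
b = np.arange(1.0, 7.0)
domains = [np.arange(0, 3), np.arange(3, 6)]
B = enlarge_rhs(b, domains)
assert np.allclose(B.sum(axis=1), b)   # summing the columns recovers b
```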
And of course you can go further than that. What is interesting is that, during the iterations, some of the sub-systems will have converged; for those, you do not need to enlarge the Krylov space any further. That is to say, you need to detect the converged part of your block system, which means that, as the iterations proceed, you add fewer and fewer vectors to the Krylov basis. Another interesting property is that, for the parts that have converged, you have a good estimate of the smallest eigenvalues and eigenvectors in the Krylov space, and you can use them to do deflation at the GMRES level. So what Sam tried to do is to combine these ideas in GMRES and to run tests on reservoir systems. For now he has done a lot of prototyping work in Matlab, on systems coming from reservoir simulation, and that is what we can see here, on these two graphs. On both graphs, the number of iterations is quite high for this kind of system, but that was done on purpose, to observe the whole convergence behaviour. What you have on the left is the number of vectors that you add at each iteration, for different numbers of domains: each colour represents the convergence of a system with a fixed number of domains in the enlarged Krylov space. That means that the more domains you have, the more vectors you add at each iteration. The first curve corresponds to a classical GMRES, and then you have the curves for 2, 4, 8, 16 and 32 domains. To compare all these variants, what he did was to fix the total number of Krylov vectors we allow, in the Krylov basis and in the deflation — here it was fixed to 400 vectors — so when you compare all these cases, the same amount of memory is used. You can see that the convergence obtained at the Krylov level gets better as you add more Krylov vectors at each iteration. You also have this kind of artefact, which comes from the fact that, since we have a limit of 400 vectors, sometimes when you reach it you can only add two vectors, for example. Anyway, here you can see the number of vectors you add, and the decrease of that number corresponds to the fact that many of the sub-systems have converged. The interesting part is that, in parallel, the more domains you have, the better the convergence can be with this algorithm, without increasing the number of messages. The preconditioner used here was block Jacobi but, to have a proper comparison, the number of domains used in the block Jacobi part of the method was kept fixed, at 128 domains. On this slide, what you see is meant to stress the effect of deflation. As I said before, another property is that we can extract approximate eigenvalues and eigenvectors from the information in the Krylov iterations.
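A sketch of the orthogonalization step that changes in the block/enlarged version: block classical Gram-Schmidt against the existing basis, followed by a QR of the new block (numpy's QR stands in for the communication-avoiding tall-skinny QR), with a crude rank test mimicking the dropping of directions once some sub-systems have converged. This is a simplification of the actual algorithm, under those stated assumptions:

```python
import numpy as np


def block_arnoldi_step(A, V, W, tol=1e-12):
    """V: (n, m) current orthonormal basis; W: (n, t) new block (e.g. A @ last block).
    Returns the orthonormalized new directions, possibly fewer than t columns."""
    H = V.T @ W                       # block classical Gram-Schmidt coefficients
    W = W - V @ H                     # remove components along the existing basis
    Q, R = np.linalg.qr(W)            # stand-in for TSQR (one reduction in parallel)
    # Drop nearly dependent columns: their directions carry (numerically) no new
    # information, which is the crude analogue of "that sub-system has converged".
    d = np.abs(np.diag(R))
    keep = d > tol * max(d.max(), 1.0)
    return Q[:, keep], H, R[np.ix_(keep, keep)]
```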
So what we did here is to compare the enlarged GMRES algorithm with a restart strategy and no deflation between cycles — that is to say, we restart the Krylov basis from scratch at each cycle — and then we compared, on these two curves, the convergence obtained at each restart: the method that performs a deflation at each restart, using the eigenvalue approximations from the previous cycle, and the method where we do not restart at all in enlarged GMRES. As you can see, the deflation in the algorithm is quite efficient, because we get almost the same convergence between these two curves. Another interesting point is that if you then solve a subsequent, nearby system and use the deflation space obtained from the eigenvalue approximations of the previous solve, you get better convergence. That is very interesting for reservoir simulation because, as I told you, within a Newton loop — and even over several time steps — the pressure sub-system evolves only slowly, so being able to keep the approximate eigenvectors obtained from the previous iterations is very attractive. This is something we want to look at in more detail with Sam and Laura. So, my conclusion. I have presented the main challenges of reservoir simulation in terms of HPC, which lie mostly in the linear solvers. We have tried to improve the reference method for the pressure system, which today is algebraic multigrid, by combining many of the recipes found in the literature to obtain improvements. And since last year we have started, with Sam, to work on the enlarged Krylov methods, where the approach is to start from a very scalable, simple preconditioner, but to have a more efficient strategy at the Krylov level, using information on the spectrum of our matrix and reusing what has been computed before; of course, if that is not sufficient, we will try to improve this preconditioner with a better one, but first we want to start from this very scalable approach. The work we now want to do with Sam and Laura, here at CEMRACS, is to test all this in a real situation, in the reservoir simulator, because up to now Sam has done a lot of theory to prove many of the things I have shown you; the work we want to do here is to define the interface to Sam's C++ code so as to integrate it into the reservoir simulator, and we will also look more closely at reusing the eigenvalue information between the successive sub-systems we solve in the reservoir simulator. Thank you for your attention.
In this presentation, we will first present the main goals and principles of reservoir simulation. Then we will focus on the linear systems that arise in such simulations. The main HPC challenge is to solve those systems efficiently on massively parallel computers. The specificity of those systems is that their convergence is mostly governed by the elliptic part of the equations, and the linear solver needs to take advantage of it to be efficient. The reference method in reservoir simulation is CPR-AMG, which usually relies on AMG to solve the quasi-elliptic part of the system. We will present some work on improving AMG scalability for the reservoir linear systems (work done in collaboration with CERFACS). We will then introduce an ongoing work with INRIA to take advantage of their enlarged Krylov method (EGMRES) in the CPR method.
10.5446/57361 (DOI)
So the first title is the more methodological of the two and the second is for those more interested in applications, and I'll discuss both in the course of the presentation. I'd like to acknowledge my industry partner Akselos; in particular my co-authors in this talk are Fung Ho and Loin Goen in the Vietnam office and also, in Boston, David Knezevic. I should emphasize that Akselos is a company that licenses technology that was developed in my research group over the past 10 years, but that I myself have no financial interest in Akselos — I do have a great deal of intellectual interest in seeing what the software can do given many years of research. My academic collaborators are on this list; for this particular talk, in particular Kathrin Smetana and Masa Yano. Again, neither of these two collaborators, nor any of these collaborators, has any financial interest in the Akselos software company. And my sponsors, in case they're here — or even if they're not. So first, parameterized partial differential equations. I will describe a general setting, but let me start with some specifics which will inform the applications and illustrations that I give today. Acoustics: in linear acoustics the pressure, a function of space and time, is given by the real part of the product of the frequency-domain pressure u and a complex exponential, where f is frequency and we will use units of Hertz. The frequency-domain pressure satisfies the well-known Helmholtz equation, and you can see here this is the nondissipative term and this is the slight dissipation; together they form a kappa which is essentially very close to one, and you have the term here which represents the time-harmonic behavior. This is the wave number squared and will play a central role — essentially a non-dimensional frequency — and 2 pi over k is the wavelength of the wave. And so this is the equation of departure for many of the applications that I will show. What is the difference between that equation and a parameterized PDE? Well, all we need to do is explicitly identify the parameters of interest in any particular PDE. So for Helmholtz acoustics I would choose the wave number k and parameters lambda related to the geometry. I can then introduce the map which is central to this talk, which is the map from input parameter mu — k and geometry — to the field, which depends on the parameter, as well as any quantities of interest, say scalars, which also depend on the parameter. More abstractly, we have a parameter and a compact parameter domain — a P-tuple if you like; the field u(mu) satisfies the weak form; the output then is a linear functional applied to the field; and of course the parameter makes its way from the weak form through the field and then through the linear functional and finally to the output of interest. So that's the highly non-linear map between parameter and output, even if of course the equation itself is linear in the field variables. So what do I mean by a model? A model is a particular problem definition: a particular parameterization, a spatial domain which may depend on the parameter, a physical discipline — in our parlance a PDE, if you like — and engineering outputs that may be relevant; and a model essentially maps the parameter to the field and output as I just described.
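A compact restatement of the acoustics setting and the parametrized input-output map just described; the precise placement of the near-unit dissipation factor κ is an assumption, since only its role is described in the talk:

\[
P(x,t)=\operatorname{Re}\bigl\{u(x)\,e^{\,i\,2\pi f t}\bigr\},\qquad
-\nabla\!\cdot\!\bigl(\kappa\,\nabla u\bigr)-k^{2}u=0 \ \text{in }\Omega(\mu),
\qquad \kappa\approx 1,
\]
\[
\mu=(k,\lambda)\in\mathcal{D}
\;\longmapsto\; u(\mu)
\;\longmapsto\; s(\mu)=\ell\bigl(u(\mu)\bigr),
\qquad \text{wavelength}=2\pi/k .
\]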
A family is a set of models which share a common discipline in an engineering context: so you could have a family of acoustic ducts, which could be anything from musical instruments to audio equipment to mufflers; you could have a family of elastic shafts; you could have a family of historic structures — and I'll give brief examples of the latter two and extensive examples of the first. What is a PDE app? The "app" here is originally "application", as Caroline indicated, but these days of course an app has another connotation: it's intended to be a very simple — at least on the surface — piece of software which provides instantaneous gratification, and that's the sense in which we use the term here. So a PDE app is software associated to a model which maps any parameter mu associated with that model to an approximate field and an approximate output — the tilde here indicates approximate — again both parameterized, subject to performance requirements befitting of an app, or at least the connotations associated with an app; and since this is also intended to be a scientific inquiry, they don't have to just be fast, they have to be also, to a certain extent, correct. So there are requirements on both, and here they are: there are four fives. A deployed PDE app should satisfy a five-second problem setup time — how long it takes to say what you'd like to solve — five-second problem solution time of the PDE, the field and the outputs, five percent solution error in specified metrics, and five-second field visualization time of the full three-dimensional field. The choice of five seconds — of course everything has to be non-dimensionalized, so where does five seconds come from? Five seconds comes from roughly the human attention span. It's because these are intended to be interactive; those of you that teach probably think I'm being a little generous with the five seconds, but in any event five seconds is a reasonable order of magnitude for how long you might be able to amuse somebody. All right, so what is the model, what is the paradigm behind the PDE app? Of course there are cases where the equation is simple enough that classical techniques, implemented in an efficient fashion, can meet those requirements — but very often not. So we will pursue a model reduction paradigm, and ultimately I will fill out this acronym, though I do not expect you to remember it. So there are actually two stages, offline and online — let me omit this for the time being. The offline stage is very slow, takes days, and for a given family we form an online data set D. Then in the online stage, which is very fast, seconds, given a PDE app we evaluate the input parameter to field and output map by virtue of this pre-stored online data set. Now of course you can only justify days in exchange for seconds if you're going to be using the same app many times — the many-query context — or if there's an imperative on interaction. So these are the two contexts in which you can justify the offline in terms of the online. There is actually a second offline stage which is more related to software: that is where we go from a family to a particular model and actually script the app, which is then on a server, which is essentially the software we appeal to in this online stage. So offline, online, and we will exploit model reduction. All right, computational methodology: first the perspective in terms of genetic lines — essentially the methods I'm describing here are not new; they combine effectively two streams.
The first stream is component mode synthesis, which dates back almost half a century, and from this stream we take the fundamental idea of component-to-system synthesis. The second stream is the reduced basis method, and from this stream we take the notions of model order reduction for parametric systems — so here's where the parameter enters in a central fashion. These two streams have been combined before, most notably in the early 2000s by Yvon Maday and Einar Rønquist, and so what I'm describing today is one particular variation on the general theme of reduced basis element methods, which combine components and model order reduction for parameterized systems; there are detailed references at the conclusion. So first, component-to-system synthesis. I will need to introduce a few bits of nomenclature which have very obvious interpretations, and as I'm describing these various pieces you can think of a set of virtual Lego blocks — and then I think most of these concepts will become rather clear. The first concept is a parameterized archetype component, from which we will manufacture other components, instantiated components, in the image of this archetype component. So I show here a bend — you see it in the background; this is for an acoustics collection — there is a reference spatial domain and a reference finite element mesh; the archetype component also has two ports, you see port one, port two is on the dark side of the moon, on the other side. So associated also with this bend, another attribute is the finite element mesh, details of the ports in terms of what type of port and mesh; critically, also, we associate to each archetype component a set of local parameters for that component, nu, which must reside in a parameter domain curly V. So for example the angle of sweep is a parameter, as is the ratio of radii, and also the wave number; and we have mapping functions that I'll introduce later; but also, of course, we must say what physics this archetype component is intended to simulate, and in this case it's the equations of acoustics. So this is an archetype component — the key points are spatial domain, reference mesh and ports, as well as local parameters. We then form a library of these archetype components: this is the library of acoustic duct components — in fact it's only a selection, there are about 20 or 25 components in all — and in each case here I show you the actual reference finite element mesh for the component, and I also indicate, either in red or yellow, the ports of this component; and it is through the ports that we will combine the components in order to form the systems. In an obvious fashion, admissible connections are by ports of common color, which refer to a common port type: so for example any of these red ports of these components can be connected, similarly these yellow ports can be connected, and there could be many different port types — in this particular collection there are two port types, or if you like fiducial ports. So how do we synthesize a model, or how do we synthesize a system which relates to a model? This is an exponential horn — don't be confused by this dome, that's not part of the horn, that's actually a hemisphere in which we apply radiation boundary conditions out to infinity. So the horn is characterized by an initial length, the exponent of the horn, the ratio of mouth to throat, and finally the wave number, and they must reside in this particular domain. So this is a model, a model parameter and a model parameter domain. So how do we synthesize that? Well, we take an inlet component from
our little set of Legos and we put it right here; we take three channels, which we put here, here and here; we then take about 12 exponential horn components, each of which has different local parameters, to form the horn; and finally we take this hemispherical radiation bubble. So in this first stage we have now instantiated the archetype components into a set of components which can create the necessary system. In the second stage the local port pairs, indicated here in red, coalesce into global ports — because they are compatible by construction — and we have now formed the model. I should emphasize that mu is the model parameter, which then induces in each component an associated local parameter value nu so as to create the system; so the local parameter values are different in each component, and the archetype can instantiate different shapes through the parametric variation. So that's how we form a typical model from components, and you can now actually define more precisely what we mean by a model: a model is essentially all possible component combinations, subject to the fact that you need to combine or connect only ports that derive from a similar port type, or fiducial port. There's actually a horn inside here — an instrument Loin created just as an illustration of flexibility; it's actually quite close to a prototype for a bass clarinet. The only thing I want to emphasize is that these actually are holes in the side of the instrument, and one of the parameters is the hole open or closed — so that's a topological variation that's actually captured by the parameter, and that corresponds to replacing one component with another component. So you can have highly nonlinear system definition associated with these different models, and we have actually, in fact, synthesized music — not for this particular instrument, but clarinets — based on the techniques I'll describe today. So the next step is finite element approximation, which is standard, except then I will make it more complicated. The first issue is geometry mappings, which is the foundation for the CEMRACS project this year in which I'm involved with Yvon and Jean Baptiste and also Rashida and Caroline; these mappings actually play a central role and tend to be a little bit neglected in terms of the mathematical and software foundations. So the idea is that any archetype component domain, for a value of the parameter nu which relates to geometry, say, is a mapping curly T nu of a reference spatial domain associated with a particular parameter value; and furthermore the archetype component ports — I will assume that all components have two ports, they could have more — these are the mappings from a fiducial port, which I indicate here. So in some sense we have for our particular library two fiducial ports, this fellow as well as the annulus that you saw earlier; for each reference spatial domain, for each archetype component, we map this port to the associated port on the archetype component. That ensures geometric compatibility, and then we can vary the shape of this system through this all-important mapping curly T nu. We then associate each archetype component to a reference finite element mesh, which you actually saw on the previous slide, and then an associated, say, P1 or P2 finite element space of dimension curly N-FE, for finite element; and a key point is that we require, for any v in this finite element space, on local port 1 or local port 2, that the restriction of the function v in X_h to the port is in the span of a set of fiducial port modes associated with each port type.
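In symbols, the two conformity requirements just described (notation reconstructed from the talk; Ω̂ is the reference component domain, γ̂_p its reference ports, χ_j the fiducial port modes):

\[
\Omega(\nu)=\mathcal{T}^{\nu}(\widehat\Omega),\qquad
\gamma_p(\nu)=\mathcal{T}^{\nu}_p(\widehat\gamma),\;\; p=1,2,
\qquad
v\in X_h(\widehat\Omega)\;\Longrightarrow\;
v|_{\widehat\gamma_p}\in\operatorname{span}\{\chi_1,\dots,\chi_{J_{\mathrm{FE}}}\},
\]

i.e. geometric conformity of the mapped ports plus functional conformity of the port traces, which is what lets instantiated components be glued port to port.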
These requirements assure functional conformity as well as geometric conformity when I connect two different components. So the key point here is that the chi are port modes that represent the finite element functions on the two ports of the component. The finite element space for a model is then just the direct sum of all the finite element spaces on all the little instantiated components, of course intersected with X — which is H10, say — which provides the necessary continuity; and then we proceed with standard Galerkin projection. We actually never form this finite element system, but that is the underlying truth approximation to which we then apply model reduction in order to accelerate the app response. Alright, so now we take that formulation and we apply static condensation, which again is roughly more than 50 years old; it's a very standard technique in structural analysis, as you can tell from that "static" adjective. It also has a very nice mathematical formulation — you've probably seen it before, even if you haven't seen that term — in terms of a Schur complement. So this is the procedure in our context, in our language. In a given instantiated component — so remember my horn had 20 instantiated components — I now look at each one of those components separately, and for each of its two ports (remember, a port on one side of the bend and on the other), and for each port mode j on each of those two ports — those are the finite element functions on the ports — we create a function psi which is the lifting of the port mode into the interior of the reference domain, and then a function phi which is this lifting plus a bubble function eta, such that phi actually satisfies the finite element equations in the interior of the component subject to the port boundary conditions; that is, this phi is actually a harmonic lifting of chi such that the equations are satisfied in the interior. So once I have these phi functions — but let me first actually provide a brief detail. These etas are bubble functions — because the liftings already satisfy the boundary conditions — and they satisfy the weak form of the PDE in the interior, as I've indicated here; so you can see it's very standard stuff: these are the finite element coefficients, k is the index associated with the finite element, say nodal, basis functions, this is the system of equations satisfied, these are the standard weak forms associated with the acoustic Helmholtz operator, and this is a system of size curly N-FE by curly N-FE. So if I have those bubble functions, then I can form these — sort of, if you like — Green's functions, or solutions of the PDE in the interior; that means that I can represent the solution in the interior by unknown coefficients times these phi functions, because these phi functions by construction satisfy the equation in the interior. And once I have the solution in the interior I can form a stiffness matrix which relates the normal velocities on the local ports and port modes to the pressure on the local ports and port modes. This is a standard next step in domain decomposition, and in our particular case the stiffness matrix, as you would expect, takes the form of the bilinear form applied to test function and trial function; and you see that it's rather special, because the bubble functions enter into this stiffness matrix, because they're ad hoc functions that actually satisfy the partial differential equation.
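A minimal numpy sketch of the component-level static condensation just described — eliminating interior degrees of freedom to obtain the port-to-port stiffness (Schur complement). The index sets `interior` and `port` and the load `f` are assumed given; this is generic linear algebra, not the Akselos implementation:

```python
import numpy as np


def condense(K, f, interior, port):
    """Eliminate interior DOFs: return the port Schur complement S and
    condensed load g such that S @ u_port = g reproduces the full solve."""
    Kii = K[np.ix_(interior, interior)]
    Kip = K[np.ix_(interior, port)]
    Kpi = K[np.ix_(port, interior)]
    Kpp = K[np.ix_(port, port)]
    X = np.linalg.solve(Kii, Kip)        # columns are the interior "bubble" responses
    S = Kpp - Kpi @ X                    # port-to-port stiffness (Schur complement)
    g = f[port] - Kpi @ np.linalg.solve(Kii, f[interior])
    return S, g


def recover_interior(K, f, interior, port, u_port):
    """Back-substitute the interior field once the port values are known."""
    Kii = K[np.ix_(interior, interior)]
    Kip = K[np.ix_(interior, port)]
    return np.linalg.solve(Kii, f[interior] - Kip @ u_port)
```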
So now I have a stiffness matrix at the level of the component, in terms of port degrees of freedom, and I now require continuity of pressure and weak continuity of normal velocity. That's a standard procedure, in the software context of direct stiffness: I take my little — elemental, I'm sorry, component — stiffness matrices, I stamp them into a global Schur complement, similarly for the force, and I find at the end a block-sparse Schur complement, solution of which will yield the full finite element solution in terms of the degrees of freedom on the ports. The two issues with this technique — which is why in its native form it's not used at present — are that its size will be large: this is the number of global ports, which may be small, say 10 or 20, but this is the number of degrees of freedom on each port, which could easily be in the hundreds or potentially even many hundreds, because these are two-dimensional surfaces for three-dimensional components. And furthermore, not only will the system be large and therefore costly to invert, it will be very costly to form. And why is that? Well, remember that this is assembled from these stiffness matrices associated with the components; the stiffness matrices of the components have bubble functions, and the bubble functions — for each component and for each port and for each port mode, in other words thousands of them — each satisfy an equation which corresponds to a large finite element system within the interior of the component. So it's very expensive to form and it's very expensive to solve. So the idea is to apply model order reduction, and it's a very simple idea that's largely explained by this slide, and then I'll fill it in with a few details. So remember, before, in a given instantiated component, and for each local port and port mode, I created some lifting and then some bubble functions; well, I do the same, except now I truncate the number of port modes — and M will be, in the examples I give, 11, whereas J-FE would be on the order of several hundred — and furthermore I replace the bubble functions with approximations derived from a reduced basis space with n degrees of freedom, where n will typically be on the order of 20 to 50, compared to curly N which can easily be on the order of 100,000. So there are two levels of model reduction here: the first is that I truncate the representation on the port of each of the components; the second is that I look for an approximation to the bubble functions in a low-dimensional space tailored to each component, each port and each port mode. But I should emphasize that this one space will serve all instantiations of a given archetype component. So let me just mention that these coefficients in front of our new bubble functions — these are the basis functions of my reduced basis space — satisfy a system of n by n equations, so much less expensive than before. So I can now assume that these are relatively inexpensive and that I have relatively few port modes: I can now represent the field in the interior of the instantiated component in terms of unknown coefficients times these approximate solutions to the PDE. I then form a stiffness matrix as before, but now notice that these are approximate bubble functions that will be much less expensive to compute, and I can then assemble them into an approximate Schur complement; and the Schur complement is much smaller, because I only retain, say, 10 port modes compared to several hundred, and
it's much more readily computed because these bubble functions which are solutions of the PDE are approximated with a an ad hoc in the positive sense reduced basis space tailored to that particular function parametric manifold as I'll describe shortly all right so why should this work this is this would work if m and n were small why should there be only a few port modes for a complicated acoustic system why should there only be say 20 or 30 modes needed to approximate these bubble functions the solution of the partial differential equation well let me treat first the port reduction and why should m the number of port modes that I actually need to represent the function be much less than the number of finite element degrees of freedom well I consider a simple waveguide p here is pressure I impose g and I require that I have a bounded radiating wave at infinity outgoing I'm sorry and I introduce the eigen system associated with the cross stream across plan form as epsilon and lambda eigen values and eigen functions and I can then write explicitly this is classical separation of variables the solution and you will see that for a typical wave number there are only very few propagating modes and all of the other modes in the system decay those are known as evanescent modes that's a classical result in acoustics and of course if you don't have acoustics there's no k and of course that's just the elliptic equation the decay of modes by separation of variables and what does that say so that says if I have a system and I look at one particular port all the information coming from nearby and afar is all filtered before it arrives at my port because I have most of the modes which are decaying and only a very few which propagate so that implies that in fact I should be able to find a low dimensional space on which all of the restrictions to the port reside right so how do you actually find that we have several methods I'm going to show the simplest most intuitive here we go through all pairs of archetype components and let's denote the left fellow left and the right fellow right so this is a side hole for a musical instrument this is a duct left and right and I have a left and a right port which is exposed and I have a port on which I am going to collect data how do I collect the data I solve the PDE of interest over this domain with a rich set of Dirichlet conditions left and right and a rich set of admissible parameters in these two domains I collect on this port of interest all of the restrictions of the solution in a set S and I then apply a POD so that's the sense in which the dimension is small and that's the sense in which we're able to find that low dimensional space. 
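A sketch of the port-data compression just described: gather the solution traces on the shared port as columns of a snapshot matrix and keep the leading POD modes. This uses a plain SVD with the Euclidean inner product for simplicity; the actual procedure may use a different norm and tolerance:

```python
import numpy as np


def pod_port_modes(S, tol=1e-4):
    """S: (n_port_dofs, n_snapshots) matrix of solution traces on the port.
    Returns the first M left singular vectors capturing 1 - tol of the energy."""
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(sigma**2) / np.sum(sigma**2)
    M = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :M], sigma
```

The rapid decay of the singular values is the algebraic counterpart of the physical argument above: only a few propagating modes survive at the port, the rest are evanescent.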
All right, how about the bubble reduction: why do I need a low-dimensional reduced basis space rather than a high-dimensional finite element trace space? Well, for any archetype component of the library, for any local port and for any local port mode, I remind you that eta is the harmonic lifting into the reference domain of the port mode, namely the solution to the PDE. Now that resides in a high-dimensional space, but it also resides in a low-dimensional smooth manifold, because we're only interested in parameters nu that lie in a compact domain curly V. And why is this a low-dimensional space? An important point to note, which relates to the CEMRACS project, is that all of these solutions, for any value of the parameter, for any instantiation of this component, are registered on a common reference domain; and that's what ensures that if you have a sharp domain — although the solution, say, at the corner may not be regular — if, as the corner moves, all the solutions are pulled back to a reference domain, then in fact the smoothness is embedded in the manifold. All right, so that's a key point for the approach. And I'd also like to point out that typically a component only has a few parameters, say 3 or 4, whereas the model may have hundreds of parameters; so in some sense a typical problem with reduced basis methods is too many parameters, and with components you basically divide and conquer. How do we find this space? For each archetype component, for each local port, for each port mode, we form this reduced basis space as a Lagrangian snapshot space: that is, we sample the solutions for different parameter values and take the span for quasi-optimal parameter values that correspond to different nu in the parametric domain associated with that archetype component. These are selected by a reduced basis, if you like weak greedy, procedure; the first reference to that is a paper with Karen Veroy and Christophe Prud'homme, in fact, and if I recall correctly I would say that probably the first person to propose it was indeed Christophe. That has subsequently become very popular; in fact it now rests on a very firm theory, thanks to a number of different groups who have shown that you get close to the Kolmogorov n-width with these kinds of techniques. Remarks: optimality — so I just foreshadowed that first result: under certain hypotheses the best fits associated with the port-reduced spaces and the bubble-reduced spaces converge at rates similar to the corresponding Kolmogorov m- or, respectively, n-width. So the spaces have good approximation properties; of course that's only half the battle — the other half of the battle is stability — and the Galerkin projections are of course optimal, but only to within a model- and discretization-dependent stability constant, and I'll come back to that point shortly. So that's the first remark. Second remark is verification and validation. We do exploit a posteriori error indicators in order to choose our snapshots optimally and efficiently, and also to choose the discretization cutoffs m and n; but the first point is that these are not rigorous a posteriori error bounds, they're error indicators, so they cannot necessarily serve in verification. So how do we verify? Well, each model is verified over, usually, a subset of the model parameter domain: we can confirm h, m and n in terms of refinement studies — decrease h, increase m, increase n — we refer to appropriate closed-form approximations, and we compare to third-party computations and experiments.
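A schematic weak-greedy loop of the kind referred to above. `solve_fem` and `error_indicator` are placeholders for a truth solve and an inexpensive a posteriori indicator — not the speaker's actual machinery — and the orthonormalization is a plain QR:

```python
import numpy as np


def weak_greedy(train_params, solve_fem, error_indicator, n_max, tol):
    """Pick snapshot parameters where the (inexpensive) indicator is largest."""
    mu0 = train_params[0]
    basis = [solve_fem(mu0)]                     # first snapshot
    chosen = [mu0]
    for _ in range(n_max - 1):
        errs = [error_indicator(mu, basis) for mu in train_params]
        worst = int(np.argmax(errs))
        if errs[worst] < tol:
            break                                # indicator small everywhere: stop
        chosen.append(train_params[worst])
        basis.append(solve_fem(train_params[worst]))
        # Re-orthonormalize the snapshot set (columns -> QR -> rows back to list).
        basis = list(np.linalg.qr(np.column_stack(basis))[0].T)
    return np.column_stack(basis), chosen
```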
I would also like to say that the library notion is, to a certain extent, socialist, in the sense that each model that you develop and validate and verify — those same components serve in other models, so in fact the library as a whole converges, because as you improve one model, all the others in fact are also improved. And we see that effect: in the first two weeks when we get a new library we find all sorts of problems, and slowly, over time, we find fewer and fewer problems as we increase m and n, or choose different optimal parameters and different functions. All right, computational procedure, and then I'll turn to the examples. So the offline stage: at present this offline stage, for the particular problems I will present, is performed in a factory in Ho Chi Minh City — not a real factory, a virtual Lego factory — and what is prepared in that offline stage? Well, we form the online data set: archetype component reference meshes are constructed, we identify the mappings which allow us to proceed from the reference domain to a wide family of parametric geometric variations, we find the optimal set of port modes, we find the optimal reduced basis spaces, and we identify parameter-independent inner products, which we store in D. Now one last detail — and that's the final one for the talk. I just wanted to point out that the stiffness matrix at the component level, for the reduced basis and port-reduced system, takes this form, and you'll notice this depends on the parameter, which would make it quite expensive to form; but I can express this in terms of parameter-dependent coefficients — real numbers or complex numbers — times parameter-independent matrices, so you see that we could pre-store this quantity offline and then form it online as a simple sum. But not quite: because you'll notice these geometric transformation factors depend on the geometric parameters nu, and for that reason we need to apply a second, EIM, expansion to represent these in affine form. So that's the final little bit needed to make all the pieces work quickly in the online stage. So that's the offline stage; note we never form a model in the offline stage, only pairs of components, so the system can be of size 10 million and we'll never see it, because we only get to models later; and all models that we subsequently form and evaluate can amortize this effort — it's not just one model, but literally hundreds of models. In the online stage, implemented in a cloud context, the user inputs the parameter; we then synthesize the model from a script, and that's done by a model server which is created in the offline-2 stage — for each model, in the offline-2 stage, we prepare, or script, the app, which is then uploaded as a server. Once the model server specifies the problem, this is sent to the compute server, which invokes the online data set, forms and solves a Schur complement, calculates the field and output, and downloads and displays the solution. So that's the entire process; all of this resides in the cloud, and all of that is through a web user interface which is effectively a browser. Alright, so on to the examples. I will say first that I've included a demo, just because it may fail and that adds some element of excitement, but I also have a self-contained set of results that are more scientific. So the first result is a flanged exponential horn: horns can be flanged or unflanged — a flanged horn is one like an audio speaker, where it comes out and there's a wall behind it; unflanged is the case of a musical instrument, where the sound goes everywhere.
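Before turning to the examples, here is the offline/online split just described, in schematic form: parameter-independent reduced matrices are precomputed and stored, and the online stage only evaluates a weighted sum. The `thetas`, coming from the geometric mapping factors after the EIM step, are placeholders here, not the actual expansion:

```python
import numpy as np


def online_assemble(thetas, A_q_list):
    """A(mu) = sum_q theta_q(mu) * A_q; the cost is independent of the FE dimension."""
    return sum(theta * A_q for theta, A_q in zip(thetas, A_q_list))


# Illustrative online stage: a small dense solve in the reduced space.
# thetas = evaluate_thetas(mu)          # hypothetical: mapping Jacobians / EIM weights
# u_rb   = np.linalg.solve(online_assemble(thetas, A_q_list), f_rb)
# output = l_rb @ u_rb
```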
Alright, so this first one is a flanged exponential horn — I introduced that earlier — and you can see the parameter domain is quite extensive. What I show here is just one result from the PDE app: this is the throat impedance, which is how you characterize horns and audio systems — or one of the ways. What I show here: the app is in red and previous boundary element calculations are in blue; these also agree with experiment, so this is therefore both verification and validation. You notice that the wave number here, relative to the mouth, goes up to 10 — that means there are many wavelengths per unit length in the axial direction, therefore a difficult calculation. This is actually what the pressure field looks like at the outlet, and this is the far field, calculated in terms of a spherical harmonic expansion; again these three lines are previous experiment, computation and our results, and this is measured in dB, which is a logarithmic scale, so each unit here is about 10% error. So you can see, even for very small pressures way out here, the accuracy is good to within 5 to 10%. Alright, the next example is an expansion chamber. This is a very crude form of a muffler: you have an inlet duct, you have an expansion chamber, then you have a contraction down to an exit duct, and we can vary many of the parameters. I show here a comparison for the transmission loss from the PDE app — transmission loss large means noise small, so ideally you want infinite transmission loss; that means that no sound is radiated out at the other end of the muffler, all of it is reactively reflected back. These are non-dissipative mufflers, not resistive mufflers. And what you see again is the PDE app in red, compared to previous boundary elements and also previous experiments. At this point the solution becomes non-planar, right after this resonance, and we still agree with the experiment; I don't know what happened to the boundary elements, but they didn't seem to make it through this transition — but in any event we still agree with experiment. And this is an example of what the pressure looks like inside the muffler: it is no longer a plane wave, it has radial dependence. Alright, the next example I will do as a demo, in principle. So the first thing that can fail is that the server has gone down. What I've done here — in order to save a little pre-processing time I've cheated: I actually already launched the PDE app server, which is the server that knows how to specify this particular app. And this particular app — that was not good, sorry, let me — well, actually let me go back, I'll show you down here actually. So this particular app is an acoustic bend: I have velocity at one end, impose zero velocity at the other end, and the parameters that I can vary with this app are indicated here: the pre-length before the bend, the length after the bend, the ratio of the bend radius to the duct radius, the bend angle from 30 to 180 degrees, the wave number — which can go up to the first appearance of three-dimensional modes at the inlet — and velocity boundary conditions. So I now ask the system to solve this particular case. But I'm not going to do that, because Feng, who writes this software, has lots of tricks: he caches things, so if you happen to ask for one of the parameters that you've asked for, say, in the past day, he stored them somewhere in the cloud, so it looks like the software is much faster than it actually is. So in order to actually gauge time —
my day do you want to give me a bend angle between 30 and 180 it can't do 72 okay alright 72 alright so I say update model it says solving and it's solved in the time you just saw alright so it solves 200 partial differential equations in 3d for 200 different wave numbers and then calculates the inlet impedance and then downloads the pressure field for one of those wave numbers alright so that happens all in the time indicated and if you go to the bottom that looks like that looks roughly like 72 degrees or so from the other and you see the bend and what you also see is that in fact the flow not the flow the acoustic field becomes three-dimensional because you excite three-dimensional duct modes in the process of taking that turn and at the bottom you also have the results of the 200 wave numbers I can ask to see for example the inlet impedance and this is the imaginary part in green which corresponds to the resonances in that system and so then you can query for different values of the parameter and and there's also a variety of ways to visualize the solution on different planes and all of it is intended to give you on the order of five second turnaround right for each of these results and so that demo worked and I did not even cheat because I did not take advantage of the caching alright so I think that's reasonably I wouldn't say completely honest but reasonably honest alright so is the answer right so again let me show a result for this same problem so this is the geometry these are the parameters that we can vary sweep angle ratio of radii pre-length post-length and wave number you just saw this little demo by the way the first page I did not show is where you pick the PD app you wish to solve and that invokes the right server and that brings you to the next page which is what you saw and that brings you to the solutions okay this is a comparison with experiment and prior theory and in fact prior computation this is the inlet impedance as a function of frequency and what you see is the PD app in red and it falls very close to the blue dots which are the theory but it falls even closer to the to this red line I'm sorry the green and the red and that's actually experiment and you see these lines that look like they're unrelated this is if I take that that bend in the duct and I straighten it out and take an effective length straight duct you can see that you miss completely these resonances and you even in fact miss a low resonance which is one of the reasons why people know that bends and musical instruments have a non-trivial effect on the inlet impedance not necessarily on the reflection coefficient but it's the inlet impedance which is what's felt by the reed or the lips and it's the interaction of that nonlinear system that generates the sound that you hear and so this is the crucial function that serves in that capacity and if you get it wrong of course you'll get the wrong sounds alright and there's the visualization with a fully three-dimensional pressure field this corresponds to roughly half a million finite element degrees of freedom the response time including the download from the net and and phone uses a number of parallel download tricks as well as parallel processing in the cloud there's a feature by which you can send different sets of wave numbers to different machines on the cloud and therefore get the result that much quicker in this particular case the response time is eight point four we were shooting for five on the other hand we got two hundred PDEs so perhaps 
you'll cut us a little slack. Alright, so this last example has a slide which is probably the most interesting of the talk, which you may think doesn't have much competition, but nevertheless it's potentially the slide which merits most attention. So this is an extended-tube expansion chamber. This is the structure that most, even simple, mufflers have: you can see that the tubes come into the chamber left and right, and the lengths can be tuned in order to provide much better reactive muffler reflection of waves. And of course it's interesting numerically because there are a lot more parameters. So we've also created an app for this. The black dots are experiment, the red is the PDE app, and the dashed blue here is boundary elements. You'll notice that the PDE app actually does better than the boundary elements, and the reason is that we have ten to the minus five dissipation, the boundary element has zero, and the real physical system is somewhere in between the two. But you'll notice that the depth and shift of some of these dips are characteristic of a slight bit of dissipation, which in this case is almost, or more, realistic than no dissipation, if the dissipation is sufficiently small. Alright, so this is effectively the end of the talk; I'll show you one example and then a few pictures. But what I wanted to comment on here was that this is a result we had, we compared it with experiment, and it compares beautifully. And I was idly trying another case with the PDE app for this same configuration, you can't see the tubes inside, and this was the result that I obtained. Right, so this is a two-dimensional, axisymmetric inlet, axisymmetric outlet, axisymmetric geometry, and a three-dimensional pressure field. So the first thought, the more benign thought, is that this is a physical instability. There actually are duct modes in 3D which have resonances earlier than axisymmetric duct modes; their cutoff frequency is in fact lower. So what could have happened, if this was a physical instability, is that there actually is a resonant mode present, but it's 3D, and a little bit of numerical noise then stimulates the result that you see here. So this would not be a physical result, but it would be a result which is physically relevant, because it's indicating that there is in fact a nearby resonance. Alright, so next what I did was I actually went in by hand and asked for a different number of components in this gap. So this is the same problem on the right, same physical parameters but a different discretization, and you see that the 3D effect has disappeared. So now the question is why that is, and there are two possible reasons. One is that it could still be true that there's a resonance nearby, but there's less perturbation in the third direction, if you like, due to the solution algorithm, and in this case we don't see it. Or it could be that this is more of a numerical instability than a physical instability. So how do we distinguish between those two? Well, a physical instability just means that the inf-sup constant gets very close to zero, except perhaps for a small dissipation term, and that could certainly give rise to a system like this. What do we mean by numerical instability? We would mean that the inf-sup constant of our discretization is degraded relative to the inf-sup constant of the finite element discretization. So it would be possible that I could have this instability because the numerical inf-sup constant, if you like, goes to zero, in which case this could be entirely spurious; or it could simply be that the inf-sup constant is
relatively small and I'm seeing the manifestation of that instability. So the obvious thing to do, and we have not done it yet because this is a large commercial code, is to calculate the inf-sup constant and compare it to that of the finite element discretization. If the finite element inf-sup constant is very small, then we're seeing a stimulated 3D effect which is in some sense real; if the finite element inf-sup constant is much larger than the actual computational one, then we need to work on new training and projection techniques. Okay, alright, so that is really the point that requires more subtle analysis; otherwise I think the apps I've shown you are in pretty good shape. You can also treat problems in linear elasticity. This is a family of shafts, and you can make shafts with notches, with fillets, with grooves, with holes, through holes for various reasons, for cotter pins say. These are typical solutions; you see stress concentration factors, which we've confirmed are accurate typically to within a few percent. These are the components from which you can make these shafts; in each we solve the equations of linear elasticity with ports as indicated in blue and red. We then have these archetype components which we can form into a system in essentially the same fashion as in acoustics; we then coalesce the ports to get our system, and then the underlying stitched-together finite element mesh. So you can see that now this is a real shaft from a design handbook, this is the model we can form, these are the breakdowns into components, and then these are the axial displacements and various stresses. So now you have a design tool which takes a matter of seconds to analyze three-dimensional linear elasticity for these kinds of shaft systems, which share a common set of engineering components. And the last example are what architects call lintels, but which we think of more commonly as arches. So here we can think of having historic structures that either have Roman arches or Gothic arches; we could have beams, or we could also have, for the modern era, I-beams, and we could then form systems from these, gluing them together as before. An example of a component is the I-beam which I've shown here; here are the components with all the different colored ports, and that allows me to form, say, this Roman arch, or colonnade as the architects call it, from these three components combined in the fashion shown. So that's an example of application. I showed you three different physical domains, I tried to illustrate how they're all put together in an app context, and finally I gave you some sense of the numerics under the hood that allows us to get this five-second turnaround for finite element problems that sometimes have many million degrees of freedom. Thank you. Thank you. Thank you.
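For a feel for the transmission-loss curves discussed above for the expansion-chamber mufflers, the sketch below evaluates the classical one-dimensional plane-wave estimate for a simple expansion chamber, which is the kind of textbook baseline such 3D computations are usually compared against. It is not the PDE app itself, and the chamber length and area ratio are invented illustrative values.

```python
import numpy as np

def expansion_chamber_tl(k, L, m):
    """Classical plane-wave transmission loss of a simple expansion chamber.

    k : axial wave number (rad/m), L : chamber length (m),
    m : area ratio S_chamber / S_duct (dimensionless).
    Valid only below the first non-planar duct mode, unlike the 3D computation.
    """
    return 10.0 * np.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2 * np.sin(k * L) ** 2)

# Hypothetical dimensions, not taken from the talk:
k = np.linspace(0.01, 20.0, 400)          # wave numbers
tl = expansion_chamber_tl(k, L=0.3, m=9.0)
print(tl.max())   # peaks where sin(kL) = 1, zeros where kL is a multiple of pi
```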
Parametrized PDE (Partial Differential Equation) Apps are PDE solvers which satisfy stringent per-query performance requirements: ≲ 5-second problem specification time; ≲ 5-second problem solution time (field and outputs); ≲ 5% solution error (specified metrics); ≲ 5-second solution visualization time. Parametrized PDE apps are relevant in many-query, real-time, and interactive contexts such as design, parameter estimation, monitoring, and education. In this talk we describe and demonstrate a PDE App computational methodology. The numerical approach comprises three ingredients: component => system synthesis, formulated as a static-condensation procedure; model order reduction, informed by evanescence arguments at component interfaces (port reduction) and low-dimensional parametric manifolds in component interiors (reduced basis techniques); and parallel computation, implemented in a cloud environment. We provide examples in acoustics and also linear elasticity.
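As a rough illustration of the static-condensation procedure named in the abstract above (not the app's actual implementation), the following sketch eliminates the interior degrees of freedom of one component with a Schur complement, leaving only its port degrees of freedom so that components can be assembled port-to-port; the stiffness matrix is a random symmetric positive definite stand-in.

```python
import numpy as np

def condense(K, f, interior, port):
    """Static condensation: eliminate interior DOFs of one component.

    Returns the Schur complement acting on the port DOFs and the condensed
    right-hand side, so components can be assembled into a small port system.
    """
    Kii = K[np.ix_(interior, interior)]
    Kip = K[np.ix_(interior, port)]
    Kpi = K[np.ix_(port, interior)]
    Kpp = K[np.ix_(port, port)]
    X = np.linalg.solve(Kii, np.hstack([Kip, f[interior, None]]))
    S = Kpp - Kpi @ X[:, :-1]          # Schur complement on the ports
    g = f[port] - Kpi @ X[:, -1]       # condensed load
    return S, g

# Stand-in component stiffness (symmetric positive definite), 8 DOFs:
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); K = A @ A.T + 8 * np.eye(8)
f = rng.standard_normal(8)
S, g = condense(K, f, interior=list(range(6)), port=[6, 7])
print(S.shape, g.shape)   # (2, 2) (2,)
```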
10.5446/57364 (DOI)
Thank you for the introduction. I will speak about modeling and data assimilation in cardiac electrophysiology. It's a common work with many people, and in particular with Philippe. Cardiac electrophysiology is the study of the electrical wave which propagates over the heart and which triggers the muscle contraction. As you can see on this movie, no? We cannot see. Ah, yes. The heart becomes depolarized, and the contraction occurs. In cardiac electrophysiology, we are interested in three potentials: the intracellular potential, the extracellular potential, and the transmembrane potential, which is the difference between the two. We are interested in different modeling scales. First, we are interested in the evolution of the transmembrane potential over time in a cell. Here you have the depolarization, after that there is a plateau phase, and after that there is the repolarization. We are also interested in the heart scale, I mean the diffusion of the signal over the heart, as you can see here. The most common model in cardiac electrophysiology is the bidomain model, which is a nonlinear reaction-diffusion system. In this part, you have the ionic terms, which depend on many variables. These variables satisfy some ODEs, and this allows us to represent the cell scale. Then you have the different diffusion tensors. It is called the bidomain model because you can write it with the unknowns being the extracellular potential and the intracellular potential, or the transmembrane potential. There is existence and uniqueness of solutions for this model, under some assumptions, in particular some assumptions on the ionic terms, because the ionic term is nonlinear. There is also a simplified version, which is called the monodomain model. It is obtained when the intracellular tensor and the extracellular tensor are collinear. With this equation, you just solve for Vm, the transmembrane potential. There are many issues in cardiac electrophysiology, and I want to present some of them during my talk. First, concerning the modeling, we can look at the effect of the mechanical deformation on the electrical activity. In fact, the bidomain model comes from a microscopic bidomain model; in order to obtain the macroscopic 3D model, you need a homogenization process. If you want to integrate a new effect and improve your model, you need to restart the whole process with the homogenization or with the mixture theory. I won't speak about that during my talk, and for the sake of simplicity, we will neglect this effect in what follows. Another issue in modeling is the particularity of the atria. Here you have the heart; the inferior parts are the ventricles, and the superior parts the atria. The atria are very thin, and that's why we want to derive a bidomain surface model. For that, we use an asymptotic analysis, and it will be the first part of my talk. After that, I will speak about data assimilation, because when you have a model, you want to adapt this model to each patient, and for that, you need to use the data which are available; it will be the second part of my talk. The atria are very thin and only appear as a surface in medical imaging, so that's why we want to derive a surface electrophysiological model using some asymptotic analysis. The second objective is to decrease the computing time. For that, I will recall the bidomain model, and the main difficulty comes from the fact that the heart has a fiber architecture, and these fibers are preferred directions for the diffusion of the signal.
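As a minimal illustration of the kind of reaction-diffusion system being described (not the bidomain model, and not the ionic model used in the talk), the sketch below integrates a 1-D monodomain-style equation with a simple FitzHugh-Nagumo-type ionic term; the parameters and the explicit discretization are purely illustrative.

```python
import numpy as np

# Minimal 1-D monodomain-style model with a FitzHugh-Nagumo-type ionic term:
#   dv/dt = D d2v/dx2 + v (v - a)(1 - v) - w,    dw/dt = eps (gamma v - w)
# Explicit Euler in time, centered differences in space, zero-flux boundaries.
N, L = 200, 1.0
dx = L / (N - 1)
D, a, eps, gamma = 1e-3, 0.13, 0.01, 0.5      # illustrative values only
dt = 0.2 * dx**2 / D                          # crude diffusion stability bound

v, w = np.zeros(N), np.zeros(N)
v[:10] = 1.0                                  # stimulate the left end

for _ in range(2000):
    lap = np.empty_like(v)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2 * (v[1] - v[0]) / dx**2        # zero-flux (Neumann) ends
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    ionic = v * (v - a) * (1.0 - v) - w       # the "cell scale" part
    v = v + dt * (D * lap + ionic)
    w = w + dt * eps * (gamma * v - w)

idx = np.where(v > 0.5)[0]                    # depolarized region
print(dx * idx.max() if idx.size else "already repolarized")
```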
This means that the intracellular tensor and the extracellular tensor are decomposed into two parts. The first part is the homogenous part. You have a propagation in the whole direction, and you have a propagation along the fiber, and here, too, is a unique vector, which is parallel to the local fiber direction. As I already said, the atria are very thin, but the fiber across the thickness of the atria can vary rapidly. This means that we want a surface model able to consider some 3D aspects with the variation of the fiber. It is the main difficulty of this work. That's why I will focus on the asymptotic analysis of the diffusion problem. First, I consider a diffusion problem, and here you have the weak form. I need some assumptions on the variation of the fiber across the thickness. I will consider here that tau, so tau is the local direction of the fiber. I will call tau zero, which is the mean direction of the fiber, and I assume that I have linear variations of the fiber inside across the thickness. This means that a vector in the fiber can be decomposed into two parts. The first part is where you have the decomposition on T0, so the mid-surface direction, and along the orthogonal of tau zero. You can see that if tau equals to zero, this means that there are no variations across the thickness, so everything is on T0. More tau is important, more you have to consider the direction on the orthogonal of T0. We did an asymptotic analysis inspired from Shell Theory, and for a while we considered an asymptotic decomposition of U and V, as you can see here. There is existence and uniqueness of a solution, and this solution depends on the epsilon, where the epsilon here is the undimensional small parameter. In fact, it is the thickness on the diameters of the surface. When you want to derive a bidomain surface model, the first step is identifying a limit problem. For that, we just keep the first terms in the asymptotic development, and you can see that you have two terms. The first term is the homogenous term, and you have a propagation in the whole direction. The second term, which is here, is the isotropic term. As you can see, there is a part, it is a decomposition along the basis of the orthogonal of tau zero, and you have some parameter which depends on the angle theta. If you look, when theta equals to zero equals to one, so this means that everything is along T0, and more theta is important, more this second part will be important. The second step consists in proving some convergence theorem, so I don't want to do the demonstration today, but if you are interested, everything is here. After, we can propose bidomain model by applying the result of the derivation of the diffusion terms. Here you can see the classical bidomain model, and the diffusion tensor decomposes into two parts, where here you can see the interest of the asymptotic analysis, because you know that the tensor will be... You have the propagation in the basis of tau zero and the orthogonal of tau zero. All the simulations that I will show today are obtained with the finite elements library called Felice, which is developed at INRIA. It is a finite element library, and the objective of this library is to contain all the tools, and to perform simulations of complex cardiovascular models, I mean that electrophysiology, fluids, and solid mechanics, and also all the coupling phenomena between this phenomena. It is in C++, and it is based on the bedsylibri. Here is the first simulation that I want to show you. 
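Before that first simulation, here is a small numpy sketch of the transversely isotropic conductivity tensor just described, with one value across the fibers and a larger one along them; the conductivity values are made up, not taken from the talk.

```python
import numpy as np

def fiber_diffusion_tensor(tau, sigma_l, sigma_t):
    """Transversely isotropic conductivity tensor for a local fiber direction.

    sigma = sigma_t * I + (sigma_l - sigma_t) * tau tau^T :
    sigma_t in every direction, plus sigma_l - sigma_t extra along the fiber.
    """
    tau = np.asarray(tau, dtype=float)
    tau = tau / np.linalg.norm(tau)
    return sigma_t * np.eye(len(tau)) + (sigma_l - sigma_t) * np.outer(tau, tau)

# Illustrative values: fast along the fiber, slow across it.
sigma = fiber_diffusion_tensor([1.0, 1.0, 0.0], sigma_l=3.0, sigma_t=1.0)
tau0 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
print(tau0 @ sigma @ tau0)                                     # 3.0 along the fiber
print(np.array([0, 0, 1.0]) @ sigma @ np.array([0, 0, 1.0]))   # 1.0 across the fiber
```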
On the left, you have the 3D bidomain model, and on the right, the asymptotic bidomain model. You have three snapshots. We cannot see a difference between both, and if we look at the error in L2, the error is in favour of two persons. This means that it is a very good result. In terms of computing times, the ratio is very important. This means that these first days validate this strategy. I want to show you another first state in order to show you how the asymptotic analysis is important. Here, it is a very classical benchmark in cardiac electrophysiology, when the spiral wave is happier. The idea is that you have a signal, so here is the depolarization. After you have the repolarization, you can imagine that there is a pathological area here, which triggers a new signal. Repolarization is a refractory phase, so this means that if the heart is not repolarized, you cannot imply a new stimulation. That's why here, this new stimulation will go on the left, as you can see, and after, you can have some spiral waves. I will show you on a video, and you have the bidomain surface model on the left, the 3D model on the right. On the middle, sorry, and on the right, you have the 2D-naive model. This model is when you do not know the asymptotic analysis, and you say, okay, I have a linear variation of the fiber. Maybe I can just consider the fiber, which is in the mid-surface, and apply it across the thickness. You see here that this model is not able to follow the 3D bidomain model, so this means that we need the asymptotic analysis if we want to integrate some variation across the thickness in the model. After, we can do some realistic stimulation. Here, I have a mesh of the atria. Here, you have the fiber at the endocardium, so the inner surface, and here, the fiber at the epicardium. You have here a very realistic simulation of the atria, and this simulation is in very good adequacy with 3D modeling studies where everything is assimilated. I will conclude the first part of my talk. We have a bidomain surface model, which is adapted to thin cardiac structure, as for example, the atria. With that, we can obtain a full electrophysiological model, and by coupling the atria with the ventricles. We can also, for example, apply some electrocardiograms, so I don't speak about that today, but this is a way to validate also all the strategies. There are many perspectives, as for example, obtain an asymptotic electro-mechanical model, and this is also very... It is a difficult subject because the fibers are also preferred direction for the contraction, so this means that you have to develop surface mechanical model able to consider the variation of the fiber because of the thickness. And the last, and also, for example, the effect of the mechanical deformation on the electrocardiograms. The last point, which is very important, is that when you have very realistic simulations, as you can see here, we won't have to adapt this simulation to each patient. So it will be the second part of my talk. So the objective is to personalize a model, typically for a patient, and the idea is to give this information to the doctor to help him for the prediction to provide a diagnostics and prognostic assistance. So the starting point of this second part is the very realistic simulation of the full heart, as you can see here. And the data... Okay, so when you want to do some data assimilation, you have to choose the data that you want to assimilate. So here we will consider the depolarization map. 
So this means that in fact we know where is the font at different instants. So we don't know... Okay, so here you have depolarization time, so this means that here, for example, you know that it was the font at 5 milliseconds, for example. But you don't have any information of the value of the transmoment potential. Here, you just know that the font is here. So here it's the depolarization maps of the atria and depolarization maps of the ventricle. And the objective is to propose an effective strategy for performing estimations with data which are in form of level sets. Okay, so first, all are obtained the font level set data. You have some image, which are CT or MRI, and you can... With the image, you can build the heart and the torso geometries. Okay, as you can see here. And you have also some electrode vest, which in fact records the multiple surface potential that you have on the torso. And with that, you can build a body surface potential, as you can see here. Okay, so with the geometry and with the body surface potential, you can use computing inverse problems in order to find the map of the electrical activation. So this is also a very difficult inverse problem, but I don't want to speak about that today. I will focus on this part. I mean that I consider that someone already did these inverse problems and I have the depolarization map, which are on the atria and on the ventricle. Okay, so this means that I consider that I have the depolarization map as you can see here. So you can see here like that, but also this means that in fact you have the information where is the control over time. And you have your model, which is the Bidomin model. And we want with that, provide some patient-specific simulation. So there are many work on that in the literature, but in many cases, people don't use the Bidomin model and they simplify the Bidomin model in order to have, for example, a level set equation. And in this case, of course, the solution U is the same as your data, so it's easier to assimilate it. So the idea is really to use a sequential estimation on the complete Bidomin model. It's our objective. So that's why I will try to explain first what is a sequential method. So the idea is that you have a target model. So R here is your model. It can be nonlinear. And U0 is your initial condition. And theta are your parameters. And you assume that you have an apriori on your initial condition and on your parameters. Okay? But you have also an another part. And if you do the simulation just with the apriori part, you know that it's not the same simulation and you did the mistake. And you don't know the solution of the target model, okay? But you have some information which are the observation. In your case, for example, we do not know the solution of the electrophysiological model with the full initial condition and theta, but we know where is the font at different time, okay? And we want to use this observation in order to adapt, correct this model. So the idea consists in adding a terms, like that, again terms, where D is a discrepancy. So the idea of D is to compare the observation that you have here and the solution of your observer model. Okay? So this can be something very simple or something very complicated. In your case, it's something which is complicated because the observation is the position of the font and U is the solution of the reaction diffusion model. Okay? And after, you add a gain operator here on the decoupancy and the objective is to decrease the decoupancy over time. 
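The prediction-correction structure just described can be illustrated on a toy scalar model with a Luenberger-type (nudging) observer: the observer runs the same model from a wrong a priori initial condition and adds a gain times the discrepancy with the observation. This is only the generic skeleton; the talk's discrepancy is built from front positions rather than from the full state, and the model, gain, and time step here are made up.

```python
import numpy as np

# Luenberger-type (nudging) observer on a toy scalar model du/dt = -u + sin(t).
def f(u, t):
    return -u + np.sin(t)

dt, T, gain = 0.01, 10.0, 5.0
n = int(T / dt)
u = 2.0          # target (true) state, initial condition unknown to the observer
u_hat = -1.0     # observer starts from the wrong a priori

for i in range(n):
    t = i * dt
    z = u                                                # observation of the target
    u += dt * f(u, t)                                    # target model
    u_hat += dt * (f(u_hat, t) + gain * (z - u_hat))     # model + correction term

print(abs(u - u_hat))   # the discrepancy has been driven close to zero
```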
Okay? And, okay, so this will correct the state of your system. And if you want also to estimate some parameters, and this is the objective because if you estimate the parameters for a patient after you can do some prediction, the idea is to add a dynamic on the parameter. I mean that here the parameters are constants and you will add a dynamic of them and to the model part of that will be a gain operator, which is different from the gain state in this case, apply also to a discrepancy. And the objective is to have U hat and theta hat, which tends to U and theta over time, over simulation time. Okay? So this strategy is decomposed into two parts. So first find a state observer for the reaction diffusion model and after extend to parameter observer. So the second part is something which is classical, which is presented in this article and as you can see, if you have a state estimator, you can theoretically extend it to a parameter observer. So the first point is a difficult point for us. We need to build a state observer. Okay? That's why in that's for all for the moment, I just will consider that I have a narrow in the state. So for example, in my initial condition. Okay. So I have a reaction diffusion model, as you can see here. So it could be the predominant model, the monodomain model or over reaction diffusion model. And I can define C th, which is the threshold of U. I mean that if you is superior to this constant, this means that it is depolarized. And if it is inferior, this means that it is not depolarized. And we can define with that, a omega U hat, as you can see here. And this means that we can define also a level set, which will be called phi U hat, which is associated with omega U hat. And there is an economic equation, which is verified by phi U hat. So this means that when you have a reaction diffusion model, you can find a level set, an economic equation or a level set equation, verify by the level set, which is defined with the solution of your reaction diffusion model. Okay? So this is something which is very classical. The idea consists in doing an asymptotic analysis along the direction of the front. So here, C1. And you obtain this economic equation, which is presented here. Okay? So there is an asymptotic relation between your reaction diffusion model and the economic equation, which is here. And so first, why? Why we decide to do that? Because if we look what happens in image processing when you want to segment an object, you have a level set, so here in red, and an object in black, and you want to segment it. And this means that for that, you need to compare two fonts. And in fact, we have the same problem. I mean that we have the font, which is the font of your target. So for example, here it is in blue. Okay? And then you have the solution of your observer model, which here is in red. And the idea in image processing consists in minimizing an energy. Okay? So this energy is decomposed into two parts. So first, the regularization terms. For example, the minimization of the perimeter, the surface, or the volume. Okay? And some data terms. The data terms, hello, to compare these two fonts. Okay? The idea is that with the blue font, you can define an object with a value inside and a value outside. Okay? So this is ZU, the observation. And you can compute C1, which is the average of ZU inside the red font. Okay? And you can define also C2, which is the average of ZU outside the red font. Okay? 
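A minimal sketch of the Chan-Vese-style data term being borrowed here from image segmentation: given the field z built from the data front and the level set phi of the observer front, compute the averages c1 and c2 inside and outside the observer front and the pointwise term whose sign says whether a point currently looks like "inside" or "outside". The 1-D example, the threshold, and the front positions are illustrative, not the talk's.

```python
import numpy as np

def chan_vese_data_term(z, phi):
    """Chan-Vese-like data term used to compare two fronts.

    z   : field built from the data front (one value inside, another outside)
    phi : level set of the observer front (phi < 0 inside, phi > 0 outside)
    Returns c1, c2 and the pointwise term (z - c1)^2 - (z - c2)^2, which is
    negative where z looks like the 'inside' value and positive where it
    looks like the 'outside' value.
    """
    inside = phi < 0
    c1 = z[inside].mean()      # average of z inside the observer front
    c2 = z[~inside].mean()     # average of z outside the observer front
    return c1, c2, (z - c1) ** 2 - (z - c2) ** 2

# Toy 1-D example: data front at x = 0.6, observer front (wrongly) at x = 0.4.
x = np.linspace(0.0, 1.0, 101)
z = (x < 0.6).astype(float)    # object defined by the data front
phi = x - 0.4                  # observer level set
c1, c2, term = chan_vese_data_term(z, phi)
print(c1, c2)                  # c1 = 1.0, c2 noticeably smaller
print(term[45], term[80])      # negative between the fronts, positive beyond the data front
```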
And this is a way to, if you minimize this, you will minimize the difference between your two fonts, which are here. After, in image segmentation, so they have an energy, they want to minimize this energy. And they just use a gradient projection method, as you can see here. And this means that you have here your level set, which represents the green font in the previous movie. And you see that here you have the regularization terms. I don't want to speak about these terms today. And here you have the data terms. Okay? So our idea was to use the economic equation that we have with the Reaction Diffusion Model. So, which is here. And how do the data terms, which come from the, we come from image segmentation, as you can see here. And this gives us a state observer for the economic equation. Okay? So here you have your model. And here you have your data terms will correct your model over time using the observation. Okay? So in order to propose a state observer for the Reaction Diffusion Model, we do the inverse asymptotic analysis in order to find the terms that we have to put here. If we want to find the terms, which is here, with the same asymptotic analysis, but for the model. Okay? And we obtain this state observer. And we can simplify it by saying that a level set, which is adapted to you, it just you minus the threshold. Okay? And if you look with terms, so here we have the Reaction Term, here you have the Diffusion Terms, and here you have your Hub Server Terms. So first you have a gained parameters after you have a direct here, so this means that you will just apply your corrections on your font of the, of you, you had. And this will give you the direction. I mean that are you in late or are you in advance? Okay? So we, we prove that these terms is the observer terms is established in terms. I mean that, okay, so we have you, which is the solution of the target model, and you have you had, which is the solution of the observer model. And you can compute the error model, which is the difference between both. You look and you look the energy of this error model, and you want that these new terms, the observer term, decrease the energy of your error model. So we prove it and we just have to verify a contract, a contract condition, which is something which is very known in image segmentation. I mean that here, for example, you cannot segment the banana because the contract condition is not, is not verified. So, okay? So this means that we are in the same case. I mean that when we have the font, we have to define an object with really a value inside and a value outside, which are different. Okay. So first, so now I will show you an illustrative example in one day. So in red here, you have the target font. Okay. And here in blue, you have the observer font. So this means that here I did an error in the initial condition. So there are both initial conditions and I made a mistake. I switch the initial condition and I delayed it. Okay. And we can see that the observer model is able to track the target's model and correct your error over time. Okay. And if we look, for example, in a surface case for the atria. So I consider that I have a mesh geometry of the patient using CT or MRI. I have the fiber of the patient. Okay. And I have the data. I mean a depolarization map as you can see here. And for example, if I did a small error in the initial condition, this means that I move the position of this node, okay, which is the natural pacemaker of the heart. Just like that. 
I can correct this error using the observer. So you can see that here the solution of the reaction diffusion model will trigger your font, which is in green. Okay. If we just look what happened for the font, so I have three font here. First is the no observer. I mean that I consider that I have the wrong position of this new node and I don't correct this model. And you can see that the blue font, which is the observer font will quit the green font in order to go to the white font. Okay. So this is a very good result. And after, if you want to estimate some parameters, you can use the automatic strategy. I mean that when you have a state parameter, you can directly extend it to a joint state and a joint state and parameter observer. So the procedure is you have an observer, a state observer, which is here and you will hide a dynamic, sorry, on your parameters like that. Okay. So in this simulation, we use some error-acquire algorithm. So the idea is that you have your solution at the time TN. And you will do a sampling, which means that you will launch P-particles in parallel here. And so first you will correct the state. So no, sorry. So first you will have your prediction. So using your model, so you will obtain a state and parameter prediction. After that, you will do state correction using your observer terms. And after, you can use all the sampling that you are here in order to build a parametric correction. Okay. And after, you can restart. And the advantage of this strategy is that all the particles can run in parallel and they are just to communicate at the end here in order to build your new state and your new parameter. Okay. So here it's also an illustrative example in 1D. So in this case, I will consider that I do not have an error in the initial condition. Okay. So, but I have an error in my parameter, which is sigma m here, which is the diffusion parameters. And as you can see in green here, you have the solution if you do not correct the parameter. And here you have the evolution of the parameter. The objective is to find the parameter which is equal to the red line here. And you can see that it works perfectly. Okay. So no. If I have an error in my initial condition and an error in my parameter and I use the same strategy, it doesn't work. Of course, because here you have, I switch the initial conditions. So this means that here it is in advance and here it is in late. So, and this, in this case, the observer is not able to know if you have to decrease the diffusion parameter or if you have to increase the diffusion parameter. So one strategy if you want to circumvent this limitation is to say, okay, so first I will just correct the state. I mean that I will correct the error that I did in my initial condition. And after that, I will, sorry. And after that, I will estimate my parameters as you can see here. And this strategy works. We can also imagine over strategy where, for example, during the first cardiac cycle, you estimate the initial condition. Okay. And after with your estimated initial condition, you can estimate your parameters. Okay. So I just want now to show you a limitation of this observer is when you have two phones, for example, for the target. So you have two red phones here and just one, only one phone for the observer. And as you can see, the observer is able to correct the left phone, but you cannot track the second phone. Okay. So this is the limitation of this first observer. And it could be a problem when you have some fribrillatorial fibrillation. 
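Before turning to the fibrillation case: the parallel prediction-correction loop described earlier in this passage (predict each particle with the model, correct the state, then build a parameter correction from the ensemble) can be mimicked on a toy problem with a simple ensemble Kalman-style update. This is a generic stand-in, not the algorithm used in the talk, and the model, noise levels, and ensemble size are made up.

```python
import numpy as np

# Joint state-and-parameter estimation with an ensemble prediction-correction loop.
# Toy model: du/dt = -theta * u, with theta unknown to the observer.
rng = np.random.default_rng(1)
dt, steps, R = 0.05, 100, 1e-4             # time step, iterations, observation noise variance
theta_true, u_true = 2.0, 1.0

P = 50                                      # particles; in practice they run in parallel
u = np.full(P, 1.0)                         # state particles (initial condition assumed known)
theta = rng.normal(1.0, 0.5, P)             # a priori spread around a wrong parameter value

for _ in range(steps):
    u_true += dt * (-theta_true * u_true)               # "reality"
    z = u_true + rng.normal(0.0, np.sqrt(R))            # observation of the state
    u += dt * (-theta * u)                              # prediction for each particle
    innov = z + rng.normal(0.0, np.sqrt(R), P) - u      # perturbed-observation innovation
    cov_uu = u.var(ddof=1)
    cov_tu = np.cov(theta, u)[0, 1]
    denom = cov_uu + R
    u = u + (cov_uu / denom) * innov                    # state correction
    theta = theta + (cov_tu / denom) * innov            # parameter correction from the ensemble

print(theta.mean())    # should have drifted from about 1.0 toward the true value 2.0
```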
So I mean that here you have a pathological area which trigger a new stimulation and you have the spiral waves. Using the same problems, using the refractory phase of the polarisation. And this is a limitation because if you are not able to track this new front, you cannot imagine to have a realistic simulation. So the idea consists in saying, in the first observer, we have this energy and we compute, in fact, we have computed a shape derivative of this energy. And we have the observer term which is defined like that. So you have your energy which comes from the image processing. You can define a shape derivative and you had this shape derivative as an observer. Okay. So this was the first strategy. So now the idea is to circumvent the limitation of this shape observer is to say, okay, I want to have a new phone. So this means that I want to have a change in the topology of my phone. So the idea is just to use the topological derivative. So I have my energy here and I can define a topological derivative and say, okay, here I have the shape-based observer and I can have the topological observer. Okay. And now, so here is the simulation that I showed you previously. And now with the topological observer, I can detect a new front which will appear. So this will be very useful when we have some atrial fibrillation. So here, for example, it's a case where a new front will appear here. So I will put a delay before applying the observer in order to show you how it works. So first you have your new front which appear and the topological observer is able to detect it, as you can see in this movie. And if we look the front, you see that at the moment you see the queen and the, sorry, the red and the blue font are insane. Sorry. Okay. So now to conclude, I want to speak about the two CEMAC projects where I am involved. So the first one is to use, okay. So we have did some, we did some realistic case for the atria and now we want to prove that it's work also with real data. And we decided also to use another model just in order to show that it's not depend on the model that you use. If you have a front, if you have a reaction diffusion model, it will work. And here it's a billier model. So the idea is that you don't have a bidomain model, just a monodomain model, but you will consider two meshes and two layers of fiber. One is the inner surface and one is the outer surface. Okay. And you have a coupling terms between your solution at the outer surface and your solution at the inner surface. And we will use some real data which comes from the Lyric Institute at Bordeaux. So this is the first CEMAC project. And the second one is to say, okay, it works in fact, our observer works when you have a reaction diffusion model and when you have data which gives you the position of the front. And if I have propagation, they do not really have a reaction diffusion model, but they have a level set equation which is here. And in fact, it's work also because you have a font and you want to target the font of your target. To track the font of your target. And so this is the objective of this second CEMAC project. And we want to apply it with real data. So here you can see the temperature which define a font, okay, so an hydrogen font. But we can use some strategy to define a font in this case. And here is the first simulation that we have obtained with real data in fire propagation. So in blue, you have the solution of the observer model, resolve the observer term, okay. 
In red, you have the font that you will correct over time. And in yellow, you have the observation. And you can see that we are able to follow to track the yellow font. So it's a very promising result. So thank you for your attention. And she'll have some questions. Okay, I have just one quick question. Do you have any other ideas of application for this kind of observer or other modeling applications? Yes, for example, it's work also with some model which are transport model. Because if you have, when you have a font and when you have some data which are position of the font. So for example, you can use it with a tumor model where you have the evolution of the tumor as a, which is done by a biotransport model. And you have the position of the tumor using CT or MRI and you can assimilate it. So, yes. Okay, thank you very much.
In this talk we overview some of the challenges of cardiac modeling and simulation of the electrical depolarization of the heart. In particular, we will present a strategy allowing to avoid the 3D simulation of the thin atria depolarization but only solve an asymptotic consistent model on the mid-surface. In a second part, we present a strategy for estimating a cardiac electrophysiology model from front data measurements using sequential parallel data assimilation strategy.
10.5446/57365 (DOI)
The work I will present is the result of several years of work, including a former PhD student Marine Torx, Sarvesh Dubey, who is now in the US, Yanmer Dessoif from LSE, another lab from the Institut Pierre Simon la Place Federation, and Eva Gellos who will be here in two weeks. And I will talk about climate modeling with an ion high performance. I will explain how we designed an atmospheric flow solver based on mimetic finite differences. And I will sketch ideas that we have to possibly go beyond these finite differences with similar ideas but based on finite elements. So first I will give some background on climate modeling and atmospheric modeling before going into the details of the model that we have been developing the last few years and finish with these finite element ideas. So I think last week you had a presentation by my colleague Fabrice Voitus who is working at Meteor France and doing weather forecasting. So forecasting is really you model the atmospheric flow but it is an initial value problem. You try, you do your best to know the flow tomorrow given an initial condition today. So it's a short term in our time scales. It's a very short term prediction. And I think Fabrice has emphasized the very tight time constraints they have to do this exercise. So if you do the calculation typically they have one hour to give their forecast which means the model must run about 100 times faster than reality to be useful. So for climate we are not looking at the next few days. We are interested in several decades or several centuries. And it's not an initial value problem. It's more a boundary value problem where we look for the statistical equilibrium of the system given some external conditions like solar forcing or the composition of the atmosphere things like this. So we look for the statistics of the model but we do run an initial condition problem, initial value problem for a long time. But the initial value is not an important daytime in this exercise. And because we are interested in these long time scales we need an even faster model. So for modern climate if you have a patient, a PhD student who waits for one month until the simulation gets out of the machine you need to run a thousand times faster than reality. So modern climate is very interesting and important but you may also be interested in paleoclimate. So typically even so-called modern paleoclimate it's how climate changes over a few thousands of years under say changing insulation conditions. And even if you're very patient if you want to have 10,000 years of simulation you need an even faster model. So we will aim at high performance to reach these kind of time scales but we don't want to do this at the expense of the quality of the simulation. So I will explain what we focus on later on. And also in order to reach these high throughput rates we can't really achieve the kind of resolution that you will have in a weather forecasting model which resolves a few kilometers now. In a typical climate model today, sorry, it will be more in maybe 100 kilometers maybe now it's a little bit below that depending on which group is running its model. It used to be 500 kilometers 30, 40 years ago. That means also that the fluid solver is only a small part of the model because very important things happen at subgrid scale. So what we really call a climate model now it's actually what better described as an earth system model. 
So it's a modeling platform which couples the atmosphere, the ocean, sea ice, possibly biogeochemistry, plankton in the ocean, possibly usually you need for climate continental surfaces that's subsurface hydrology and vegetation. All of these different physics coupled together. So what I will talk about is only the atmospheric component and in fact what I talk about is just the fluid solver inside this atmospheric component. And 90% of an atmospheric model is actually devoted to everything that's subgrid which I won't talk at all about. And this kind of model can be used for terrestrial climate but also for planetary climates like Mars, Venus which are also very interesting scientific objects. Okay, so now more to the specifics. So I'll talk now about the flow solver. So the atmosphere, it's just, you know, the loads of physics are valid for the atmosphere as well. So it's a compressible flow and it obeys Navier-Stokes equations for compressible flow. I will talk only about the inviscid part of the equations for the talk. One first difference compared to standard CFD is that we work in a rotating frame. So we work in a frame attached to the Earth which is rotating. And this implies that in addition to the usual pressure gradient and say gravity, so that would be if the Earth was not rotating, you would have here the gradient of the gravitational potential of the planet. So because of rotation, we have this Coriolis term here. So r is the velocity of the planet itself, the velocity field, a solid body rotation. And the curl of r is twice the rotation rate of the planet. So that's the Coriolis term, 2 omega cross velocity. That's the fluid velocity. And in addition to this Coriolis term, we have a centrifugal term which also derives from a potential. So you can put it together with the gravitational potential to form what's called the geopotential. So even if the Earth were perfectly spherical, and even if V had a spherical symmetry, because of that centrifugal term, the isosurfaces of geopotential would be slightly elliptical. And in fact, V itself isn't perfectly spherical because the Earth is slightly flat. Okay, so that's what we want to solve. And we also have the mass budget here and some thermodynamic equations. So s is entropy. And for what I'd be interested in, this heating term will be zero. So that's the second principle of thermodynamics. Okay. But we don't solve it, the atmosphere is not just a random standard flow. It's a flow in a very specific regime. And to understand that, it's good to have some scales in mind. So there are some velocity scales given by the speed of sound and wind. And some time scales. So the buoyancy oscillation is the oscillation of a piece of air. That imagine that you have some piece of air here. And the air above is slightly warmer. And if I bring this up, this cold air will be in a warmer environment and be heavier and it will sink back. And this creates oscillations at a certain frequency, which is this one. And we have another time scale, which is given by the rotation rate of the planet. So the velocity together with the compressibility of the gas measured by the speed of sound gives rise to a very important scale, which is the high scale, which is essentially the thickness of the atmosphere. So if you go above by a number of these scales, essentially you're in outer space. You can construct this other scale, which is given by the speed of sound on the Coriolis force, which is about 1,000 kilometers on Earth. 
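As a back-of-the-envelope check of these scales, using rounded Earth values (my numbers, not the speaker's): the scale height built from the speed of sound comes out at roughly a dozen kilometres, and the rotation-related scale comes out at the order of a thousand kilometres, depending on whether one uses the speed of sound or the buoyancy frequency times the scale height.

```python
import numpy as np

g, c_s, N = 9.81, 340.0, 1.0e-2            # gravity, speed of sound, buoyancy frequency
Omega = 2 * np.pi / 86164                  # Earth's rotation rate, rad s^-1
f = 2 * Omega * np.sin(np.deg2rad(45.0))   # Coriolis parameter at mid-latitudes

H = c_s**2 / g        # scale height from compressibility: ~12 km, "a few tens of km"
L_ext = c_s / f       # speed of sound over Coriolis: a few thousand km
L_int = N * H / f     # buoyancy-based deformation radius: close to the ~1000 km quoted
print(H / 1e3, L_ext / 1e3, L_int / 1e3)
```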
And it's essentially the size of the cyclones and anti-cyclones. So this is from Jupiter, actually. But that would be the cyclones and anti-cyclones that you find typically at the mid-latitudes. So on Earth, there are some small numbers. Like the Mach number is quite small, the centrifugal acceleration compared to the gravity of the Earth is small, the atmosphere is quite shallow, a few tens of kilometers compared to thousands of kilometers. So there's also a scale separation between the scale height and the Rosby radius. So that means that typically a climate model will capture all these large scales, but in current conditions it will not resolve these smaller scales here, which are widely separated. So this means that there will be room for approximations in order to simplify a little bit the equations we're solving. And the most important one probably is the hydrostatic approximation. So if I take the vertical momentum budget, typically this term here, the vertical acceleration, is negligible at the scales at which a climate model is operating. So you can do some standard estimates based on physics. That's the physical way of doing estimates. And you end up figuring out that this acceleration term is negligible compared to the other ones if the horizontal scale of your flow is large enough compared to the scale height. So for a climate model, this term is small and we typically will neglect vertical acceleration, which has profound consequences on the structure of the equations we saw. But for weather forecasting, they don't do that. Also because the atmosphere is a very thin shell, we don't work in, say, Cartesian coordinating in outer space. We use coordinates that are fitted to the atmosphere. And especially what has been used for a while is to use coordinates which follow the geopotential surfaces. So that means that if you stay on this surface over which phi is a constant, you don't feel gravity because you're on an outer surface of the total potential, the geopotential. And gravity by definition, the vertical by definition is the direction along the gradient of phi. So by using this kind of coordinates, you separate the gravity, which is the dominant force, from the other forces. That also means we need to choose some coordinate system in the horizontal, which cannot be done everywhere on a sphere. So either you use, say, lat-long coordinates, which have a singularity at the pole, or you use more this kind of coordinates, but you need to connect together several patches, like as is shown here. That's the so-called cube sphere. OK, so typically in the next slides, I will be working in curvilinear coordinates, but it doesn't mean that everything becomes more complicated, especially for transport. If you, so when you change from Cartesian to curvilinear coordinates, and if you work with the right quantities, you don't even see that you're working in curvilinear coordinates. So first, your velocity has to be, I'm not sure you see that very well, your velocity is, so the motion of the fluid has to be described by the so-called contravariant velocity component, which are just the Lagrangian derivative of a fluid parcel in your coordinate system. And then you have to take into account the fact that the volume is not given by just dx dy, the volume is given by dx dy dz, but because you have this curvilinear coordinate, there is a Jacobian here. 
But if you take, if you look at the so-called pseudo density, which is the density of the fluid times the Jacobian, which is essentially the amount of mass per unit psi coordinates, then your transport equations look exactly the same in curvilinear coordinates as in Cartesian coordinates. You don't see very well here. So this would be lambda dot. So lambda is longitude, lambda dot, phi is latitude, phi dot, and r is the distance from the center of the planet. So here I've just taken as an example, latitude, longitude, radius, so three coordinates. So for transport, this looks exactly like Udxs plus, so the three components of Cartesian velocities times the three components of the Cartesian gradient. So if you do things right, you don't really see the coordinate system. So where you do see the coordinate system is in the dynamics. In the momentum equation, here you have to take into account this metric tensor. The metric tensor tells you the distances between neighboring points, and only the dynamics really need to know about that. So you have nice Christopher symbols all over the place, and the rest doesn't change too much. That also means that once you've done this, once you've changed to these curvilinear coordinates, you don't really have to be, well, you can decide how accurate you want to be on these metric terms. And typically what is done in atmospheric models is that because these geopotential surfaces are very close to perfect spheres, we just pretend they are perfect spheres. So in the end, the metric factor we have here are just the metric factors of the spherical coordinates. So you don't even see that the earth is not completely flat. When you forget about this idea that you need to incorporate together gravitational and centrifugal potentials. So that's called the spherical geoid approximation. You can also take into account the fact that the atmospheric is shallow to further simplify the metric terms. In the end, so the real equations of motion are up there, the compressible Euler equation in 3D space. And you can simplify the metric like this as I explained, spherical geoid. You can further simplify by pretending basically that any air parcel is at a fixed distance from the center of the earth. And you can also do this hydrostatic approximation I talked about. So in the end, climate models actually do solve these equations here, which look a lot like standard Euler equations, but they are slightly different. And an important consequence of the hydrostatic approximation is that because it introduces a constraint in the system, because the balance, so the vertical momentum balance, which was an equation used to predict vertical velocity, is now changed into purely balanced equations. That means the system has fewer degrees of freedom when you do the hydrostatic approximation. And this suppresses acoustic waves, which is good because we are not, we don't care about acoustic waves. And from a numerical point of view, they cause additional difficulties because they are fast oscillations. So it's easier to treat these systems without these fast oscillations rather than the full system that Fabrice has been talking about. So that's basically what all climate models do regarding the flow solver. So what do we do to solve it numerically? So what we did before is we had a formulation of these equations in lat-long coordinates, which is nice because it has a Cartesian structure. 
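Written out, the transport statement at the start of this passage reads as follows (summation over i implied; J is the Jacobian of the coordinate change and psi^i are the curvilinear coordinates, loosely following the talk's notation):

```latex
% Transport looks Cartesian in curvilinear coordinates once it is written with the
% pseudo-density \tilde\rho = J\rho and the contravariant velocities \dot\psi^i:
\partial_t \tilde\rho + \partial_{\psi^i}\!\left(\tilde\rho\,\dot\psi^i\right) = 0,
\qquad
\partial_t q + \dot\psi^i\,\partial_{\psi^i} q = 0,
\qquad
\tilde\rho \equiv J\rho, \quad \dot\psi^i \equiv \frac{D\psi^i}{Dt}.
```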
So you can do centered finite differences and have easily second order numerics with some conservation properties. And for a while, we've had finite volume transport. So as I explained, the transport part doesn't really see anything special. So you can use standard methods like finite volumes or your preferred transport method. So we've had positive definite finite volume transport for a while, basically van Lier schemes, and some more ancient finite difference schemes, but which had some interesting conservation properties, which I will develop now. The only problem with this is that we have this singularity at the poles, which is a feature of the spherical topology. And so from a computational point of view, you can't really distribute the work. So we need to put filters at the pole for stability to remove the fastest oscillations there to have a decent Courant stability condition. So that means we need to apply some Fourier transforms back and forth there, which prevents parallelization along the zonal direction. So you can't really distribute the work in parallel as much as you would like with this kind of model. Essentially, you distribute along the latitude but not longitude. So it creates a bottleneck in scalability for higher resolutions. So the solution, in a way, is well known. You want to go away from this singular mesh and have a more uniform mesh. Then what you lose is the regularity. So essentially, you have a mesh where every cell is different from its neighbors, and it's harder to get second order or even first order accuracy there. So what you get easily is scalability, because then it's just straightforward to distribute the work. What you need to work a little bit for is to preserve this consistency, so properties, essentially halting discrete conservation laws at the discrete level. And also, that's another thing that we wanted. We didn't want to be locked in forever with this hydrostatic primitive equation set. So we wanted to have an approach which would be adaptable to different sets of equations of motion. So especially, we will conserve energy. So just what I call discrete conservation law is the idea that after discretization, you have some quantity that is exactly preserved in the numerical model. So for instance, it's pretty easy to conserve total mass. If you use a finite-volume method, mass is going from one cell to the neighboring cell. And so if you do your communications properly, everything that leaves one cell goes into another cell. And since we don't have real boundaries or we have no flux boundary conditions, total mass is exactly preserved in the discrete model. So that's pretty easy to do this for, say, total mass, total mass of water, total entropy, because you can cast them as flux form. d dt plus divergence of something is zero. What's more difficult is energy, because energy is typically a complicated expression of the variables that you're involving in time. You have internal energy, which is very nonlinear, kinetic energy, which is quadratic, et cetera. So what I will discuss is essentially how to conserve energy at the discrete level in the model. So first, maybe we want a good reason to make that effort, maybe, because if it's not a worthy effort, we're not going to do it. So there are physical reasons to do it. If we do climate, climate modeling is essentially looking at how energy flows in the system. 
The atmosphere receives energy from the sun, and then this creates motion, and this motion will transport energy here and there from the equator to the pole, for instance. And this will be sent back to space by radiation. So we want to have a good energy budget in the model. So that's a physical motivation to have an energy-conserving numeric. But also from a purely numerical point of view, I think it's interesting to stress that if you conserve energy, which is a convex function of the prognostic variables, then you gain some stability guarantees. So that's energy, right? Kinetic internal potential. And it's typically will prognose rho, u, and s, and this is all convex. So you have a convex functional, and it's conserved. So if you are close to a minimum, you should stay close to that minimum. The thing is, this thing doesn't really have an interesting minimum, but if you have other conserved quantities like total mass or total entropy, you can form what is called an absurdo energy, which now has an interesting minimum. So if you preserve all these three quantities, you can demonstrate that any isothermal state of rest is stable. So physically it's stable, but numerically it will be stable. So it gives you some stability guarantee to your model, which means that conservation, real conservation laws will create limits to what the model can do. Even if it makes errors, these errors will remain within certain physically realizable states. So if now we're convinced we should conserve energy, we can reflect on where this conservation of energy comes from. And it comes from the fact that our equations of motion have a variational structure. So they come from a least action principle. And if you have equations of motion deriving from a least action principle with Lagrangian being time independent, then you conserve energy. So to see this, I rewrote these equations of motion in Cartesian vector form. And you see that if I write Lagrangian, which is kinetic energy plus this Coriolis term, plus internal and potential energy, then I can write the equations exactly in this form. So dL dx dot is just the acceleration, the only term here, which depends on x dot is this one. So if I derive L with respect to x dot, I get x dot and so on. So this internal energy will produce your pressure term, potential energy will produce your gravity term and so on. So you can show that all the equations that I've shown derive from an least action principle. And you can rewrite this in curvilinear coordinates too, but it's just a rewriting of the same thing. So we have a good start. We know that all our equations of motion can be written from a least action principle. So we know why they have the conservation of energy. So that's more a physical interesting thing. Numerically, you will not discretize the least action principle. Some people do that, but it's difficult. Another approach is to use the Hamiltonian formulation. So if you have in classical mechanics, if you have a system which derives from a least action principle, you can derive a Hamiltonian formulation for it. So there is a very elegant algebraic way to do it. I will use a much lower level presentation. So what does it mean that we have a Hamiltonian formulation? So the Hamiltonian formulation says that instead of looking at individual evolution equations for each variable of the system, we will derive a single equation which is valid for any quantity depending on the current state of the system. 
So if we do have those equations of motion, of the kind I've already shown, and we have some expression F, which depends on velocity, density, etc., typically some integral, then we know how to compute this time derivative. We just do the derivation under the integral and we can compute everything. So what that means is, in order to know how F evolves in time, we only need to know how velocity, density, and entropy evolve in time, and how F depends on all these quantities. That's just a chain rule. There's nothing here but the chain rule. So now once we've written this down, we can replace these by their expressions from the equations of motion. And a system is Hamiltonian if the expression we have is something that is antisymmetric between F and H, the Hamiltonian. So H is the total energy. So that's the general form. So what does that look like for us? So first we will write down the Hamiltonian. So I've written it already. So this is kinetic energy. And when you do the least action principle, it turns out that the momentum, which is the variable conjugate to position, is actually the absolute velocity. That is the velocity relative to Earth plus the rotation of the Earth. So to get kinetic energy, I must first subtract the rotation of the Earth, and then apply my metric tensor. And everything else is standard. And then when I look at how my energy evolves in time, here I find the mass flux, density times contravariant component. Here I find something which looks like a Bernoulli function. So that's kinetic energy plus geopotential plus Gibbs function. That's a thermodynamic potential. And here I get temperature, right? Because when I derive internal energy with respect to entropy, I get temperature. So if I'm able to cast the equations of motion only in terms of these quantities, then I will find my Hamiltonian formulation. And that's actually well known. So I've just written down all the steps here. Unfortunately, it looks pretty ugly on the screen. So this is the standard way that we usually write the Euler equation. So d_t u plus u grad u plus Coriolis plus gravity plus pressure gradient is 0. And then you can do this standard transformation. u grad u equals this, right? That's a very well known trick. You can also use the definition of the Gibbs potential to do this transformation here. And in the end, you end up with exactly what you wanted, which is that the other terms can be identified as functional derivatives of the Hamiltonian. So now we get another, but still completely general, form of the equations of motion. And now the conservation of energy only depends on the ability to do integration by parts. So when you do the total budget, the only operation you need to get zero at the end is an integration by parts. That's something that's relatively easy to get in a numerical method. So here I've shown how it works for Eulerian coordinates. In practice, what we use are non-Eulerian vertical coordinates. So I have extra slides if you want to know more about this. But basically, that's the idea. So we get a slightly more complicated system. But the basic idea is the same. If you can do integration by parts, then energy doesn't change in time. So now we know from what form to start in order to conserve energy. And we just need to discretize. Because everything I showed was still continuous. At some point, we need to go discrete. So we're working in this thin shell.
And I will separate the horizontal coordinates, the two, the psi 1 and psi 2, which essentially are coordinates on the sphere. So I will actually write this now in a more intrinsic way, with n being some point on the unit sphere. And I have my other coordinate, which is an abstract vertical coordinate, going from 0 to 1. And I will discretize separately each of these. So the idea is that this is what you always want in atmospheric or ocean models: as far as I know, everybody who has tried not to have the model mesh aligned with gravity has had terrible problems. So we didn't try to do it differently. So doing this guarantees that you're working with columns, which are perfectly aligned with gravity. So in the horizontal, we will use, so in the atmospheric nomenclature, it's called a C grid. I know in other contexts, it's called MAC. So the idea is that you have, these are control volumes for mass. And we will have mass fluxes across cell edges. And we will describe velocity, we will have one degree of freedom of velocity per point here, per intersection point. So that's a staggered mesh. In the vertical, we also have some staggering. So we have model layers. We will put the mass and thermodynamic quantities here. But we will, so in time, the altitude of these layers evolves. And we need to keep track of them. And so the altitude is defined at interfaces. So we have this staggering in the vertical. So if you put this together, you have these nice columns. And then you need to decide how to represent your fields on this discrete mesh. And the choice that worked well for us is very simple in a way. So we will associate, we will attach quantities to the geometric object that they are naturally coupled to. So for instance, mass wants to be integrated over a cell volume so that you can do a finite volume formulation. That's what we do here. That's an M. So we just integrate this over some cell. And that is the degree of freedom. That's all we know about the mass field: how much mass there is in each cell. To do our finite volume mass budget, we will look at mass fluxes across surfaces. So some other quantities are integrated over 2D surfaces. And the momentum, because it's a conjugate variable, it works well to integrate it along 1D lines. So what we do is we define our degree of freedom for velocity as the integral of momentum along these lines here. And the immediate benefit of this is that you can very easily do the curl. So you do the circulation around that triangle and you get the curl of momentum, which is involved in the equation of motion. So this idea, you can find it in the literature. It's actually very old, but the idea to cast it as a discrete exterior calculus is relatively recent. So the idea is that all these operations I've described, they are actually exterior derivatives. So they are derivatives that don't depend on the metric. And that's why in this context, they are exact. So when you do, if I give you exactly the mass here and the fluxes, then you can do an exact mass budget. That's because the geometry fits together. And another thing is, because you compute the curl as a circulation like this, and the grad is just the difference between two points here, then the curl of grad is zero, which is nice. That means that your gradient term will not produce vorticity when it shouldn't. So even if you have errors in the term inside the gradient, the vorticity will not be affected. So that's important because the large scale circulation is essentially a vortical circulation.
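As an illustration (not from the talk; the grid and data are made up, and the real model uses an unstructured spherical mesh rather than this doubly periodic quad grid), here is a tiny sketch of the "curl of grad is zero" property: the discrete gradient lives on edges as differences of nodal values, the discrete curl of an edge field is the circulation around each cell, and for a gradient field the circulation telescopes to exactly zero, independently of any metric. Integer-valued data is used so the cancellation is exact in floating point.

```python
import numpy as np

# Discrete "curl(grad(phi)) = 0" identity on a small doubly periodic quad grid.
n = 16
rng = np.random.default_rng(0)
phi = rng.integers(-100, 100, size=(n, n)).astype(float)  # arbitrary nodal scalar field

gx = np.roll(phi, -1, axis=1) - phi   # gradient on x-edges: node (i,j) -> (i,j+1)
gy = np.roll(phi, -1, axis=0) - phi   # gradient on y-edges: node (i,j) -> (i+1,j)

# counter-clockwise circulation around the cell whose lower-left node is (i,j)
circ = gx + np.roll(gy, -1, axis=1) - np.roll(gx, -1, axis=0) - gy
print(np.abs(circ).max())             # 0.0: a gradient field carries no circulation
```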
And finally, you can show that you have this integration by parts property. So it's straightforward to derive an energy-conserving 3D solver. Okay, I have like five minutes left. Three, no, five, because I... Okay. So this is what we've been working on in the last few years. And now we are in a project where we want to apply this model to different applications. So I talked about paleoclimate at the beginning. So we work with paleoclimate colleagues to integrate the atmosphere solver into the Earth system model. But the first application, which was more straightforward to obtain, was with our planetary climate colleagues, who now work on giant planets. So this showed the vorticity initially, and now it is showing the wind magnitude. And this is a preliminary attempt to model the circulation of Saturn. It would maybe remind you of Jupiter, because it's less well known that Saturn has this banded structure too, but it's actually Saturn; it's supposed to be Saturn. So the model is missing some physics, so it can't really do the equatorial jet right, but it has some reasonable looking features. I just show it again. So this was a pretty big simulation. It's typically... We have more mesh points than what we expect to use in climate in the foreseeable future. I think you see the mesh at some point. I think this one, I think, was one-eighth of a degree, which would mean on Earth something like 12 or 15 kilometers. And for climate we rather want to be able to do something like 50 kilometers or 25. But this was... So of course, this is driven by heating from the sun. So the colleagues put in some, well, they say it's simple, so that's a simple radiative transfer scheme. So on Earth, it's much more complicated than this. And this radiative transfer created the heating, which created the density differences, which created the pressure gradient, which put the whole atmosphere in motion. So it was a pretty big simulation. We were also using, oops, sorry, parallel input/output in order to let the system also write out the computation that it was doing. So that's leveraging on work by Yann Meurdesoif that has been going on for several years to actually offload the work of writing to disk to separate processes. And that's... Yeah, what you don't see... So that's a scaling curve. That's pretty horrible. But... Oh yeah, so you can see the number of cores here. So it's... So what you don't see is how big the problem was. So it's not strong scaling. So this is for... Blue is for, I think, one half of a degree. And I don't really see very well. So that's, say, a small problem. So half of a degree would be 50 kilometers on Earth. That's what we want to do for climate. That's a quarter of a degree and that's an eighth of a degree. So we can scale to, say, a few tens of thousands of processors on the current machines. Okay, so I will finish with a few words about really what our project here at CEMRACS is about. So at this point, we are relatively happy with this model. So we want to make it operational, do climate and so on. But we still think about the future. And yes, and so the point is that because the mesh is irregular, because every cell is different from its neighbor, you don't have the cancellation of errors that you typically have on a regular mesh. So on a regular mesh, if you do a centered finite difference it's second order, but on an irregular mesh, it's first order. So typically, the model we have now is probably less accurate than the previous one in terms of order of convergence. So that's a bit of a concern.
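As an illustration (not from the talk; a toy 1D check with made-up numbers), here is the accuracy point just made: the same centered difference that is second order on a uniform mesh drops to roughly first order once the mesh is randomly perturbed, because the truncation errors of the two unequal neighbouring intervals no longer cancel.

```python
import numpy as np

def max_error(n, perturb):
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 2.0 * np.pi, n + 1)
    h = x[1] - x[0]
    x[1:-1] += perturb * h * (rng.random(n - 1) - 0.5)   # jiggle the interior nodes
    f, df = np.sin(x), np.cos(x)
    approx = (f[2:] - f[:-2]) / (x[2:] - x[:-2])         # "centered" difference
    return np.abs(approx - df[1:-1]).max()

for perturb in (0.0, 0.4):
    e1, e2 = max_error(200, perturb), max_error(400, perturb)
    print(f"perturbation {perturb}: observed order ~ {np.log2(e1 / e2):.2f}")
# prints roughly 2.0 on the uniform mesh and roughly 1.0 on the perturbed one
```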
That's not a major concern, but we would prefer, if possible, to have a slightly more accurate model. And probably finite element methods can do the job. So of course, I think it's well known now that you can do high order finite elements like spectral elements. So the question of feasibility is not really there. So the main question is whether you can still do this energy conservation exercise. So I will just try to illustrate that on a much simpler set of equations, which is just a nonlinear wave equation. So it's a completely manufactured set, but it looks a little bit like the Euler equations I've shown. So here, this is just mass, a mass budget with some mass flux here. And this is some F of H. So it looks a little bit like the gravitational potential, for instance. So you can show that there is a conserved energy in this system, where G is the potential for F. And this is a kind of kinetic energy here. So you can recast this nonlinear wave equation in something that looks more or less like a Hamiltonian formulation. And again, the conservation of energy is just the result of an integration by parts. So now with this starting point, can we still conserve energy with finite elements? So first, of course, with a finite element method, you start with picking spaces. So in the most general setting, we have four kinds of quantities, the time-evolving variables H and U, and the functional derivatives of which we take the gradient. So we have four, potentially four spaces. And if you look at the definition of the functional derivatives from the energy, this tells you that these should be obtained by projection. I'm sorry, this really looks ugly on screen. So these terms here, first, you should obtain them as a projection. So that means that these two spaces, H and B, and U and F, they should be in duality, right? Because you need this, the mass matrix here, to be well conditioned, to be invertible and well conditioned. And maybe the easiest is to take these spaces identical. So you have only two spaces to choose. And then, since you take gradients, you need at least one of these spaces to have square integrable gradients, to be H1. So let's say it's the flux space, right? So we can take the gradient of the flux here, which actually plays the role of a divergence. So now, once you have decided that this space here is H1, this equation will be tested precisely against this space. So since your test function has a gradient, you can do integration by parts and use a weak gradient here. So you don't need this space to have square integrable gradients. And by using a weak derivative, you actually enforce the discrete integration by parts property. Essentially, it's contained in your definition of the gradient. So it's actually pretty straightforward to have discrete integration by parts in a finite element context, too, to conserve energy. So the conservation of energy is not really a big issue. What is a bigger issue is to pick good spaces, especially to have stability and good conditioning. So that's what we... In principle, it should work and it really should give a very nice numerical method with all the qualitative properties that we are looking for. And the only downside is that typically, a FEM method will be more expensive to compute, right? In a finite difference method, it's very simple. If you look at the code, for a gradient, you just take the difference between two points and you're done. A divergence is just a sum over neighboring cells. So it's very simple.
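As an illustration (not from the slides), here is one possible instance of the kind of nonlinear wave system just described; the speaker's exact choice of F may differ (shallow water corresponds to F(h) = g h, G(h) = g h^2 / 2). It shows what the conserved energy and its functional derivatives look like, and why a single integration by parts gives conservation.

```latex
% A model nonlinear wave system and its energy (one possible instance).
\partial_t h + \partial_x (h u) = 0, \qquad
\partial_t u + \partial_x\!\Big( \tfrac12 u^2 + F(h) \Big) = 0,
\qquad
\mathcal{H}[h,u] = \int \Big( \tfrac12 h u^2 + G(h) \Big)\,\mathrm{d}x,
\quad G' = F .
% Functional derivatives: the mass flux and a Bernoulli-like function.
\frac{\delta \mathcal{H}}{\delta u} = h u , \qquad
\frac{\delta \mathcal{H}}{\delta h} = \tfrac12 u^2 + F(h) =: B ,
\qquad\text{so}\qquad
\partial_t h = -\partial_x \frac{\delta \mathcal{H}}{\delta u}, \quad
\partial_t u = -\partial_x \frac{\delta \mathcal{H}}{\delta h}.
% Energy budget: a single integration by parts (periodic or no-flux boundaries).
\frac{\mathrm{d}\mathcal{H}}{\mathrm{d}t}
 = \int \Big( B\,\partial_t h + h u\,\partial_t u \Big)\,\mathrm{d}x
 = -\int \Big( B\,\partial_x (h u) + h u\,\partial_x B \Big)\,\mathrm{d}x
 = -\int \partial_x \big( B\, h u \big)\,\mathrm{d}x = 0 .
```

Testing the velocity equation against functions with square-integrable gradients, as described above, is what lets the discrete finite element version reproduce this integration by parts exactly.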
The model we have now, sorry, the finite difference model, is very cheap, it's very simple, but it's low order. The question is how much more expensive... So we know our FEM method will be more expensive. The question is whether it's twice or 10 times or 100 times. And that can depend also on how you implement all these costly operations like assembling your matrices or computing right-hand sides. And so that's... So Chris, who's in the A-HA project with Manel, they are there in the back. I saw them. Yes, I saw Chris. I don't see... Manel is there. So they are looking at a technique proposed by Kirby to rearrange the computation and use dense matrix-matrix multiplication to make full use of externally optimized libraries. And I hope they will know, and you will know, in two weeks how much that can help in making this kind of FEM method efficient enough. So that's my summary. So I hope I've given you some better knowledge of what we actually solve in a climate model, the kind of equations we actually solve. In our group, we have put a strong emphasis on conservation properties and some work to really go to the source of these conservation laws. And what we've tried to be is to be systematic. So everything I've shown, we can change, say, the equation of state or the geometry, and we know we can still have all the qualitative properties we want. So we're not stuck with a trick that would tie us to a specific set of equations. And that's thanks to the Hamiltonian formulation. So I haven't really shown much detail on the mimetic finite difference method, but I have extra slides if you want. So it works very well with these mimetic finite difference methods, and also probably with finite element discretizations. But the end performance is an important issue. Thank you very much. So I'm rephrasing for the recording point of view. So given the size of the systems that you're considering, I would expect in finite elements that the assembling process, that is, of course, the stage when you are computing the matrices, should be negligible compared to the solution process, so the inversion of the system. Do we have any estimate of the respective times that it would require? We have only very preliminary experience, and the right person to ask about this is Chris. But I think assembly is still quite, well, these are two issues. But I think for high order, you do spend some time, maybe not assembling the matrix, but at least computing the right-hand sides, the nonlinear terms. Maybe Chris, you can comment on that. I think that's very architecture- and implementation-dependent. We think that we had kind of differences within a factor of, say, three to four. But I think it's going to be very implementation and architecture dependent. With respect to the number of degrees of freedom, right? Yeah. With respect to the number of elements, obviously, so it's directly proportional to the number of degrees of freedom. And then, of course, assembling is embarrassingly parallel, because you can distribute your assembling process over as many processes as you like. So it should be very easy to lower the cost of this phase, I think. I don't want to take too much. It should be very scalable. But we also think in terms of the overall resource that we are using. Other question? I have just one quick one. Can you give us a little bit more insight about the lack of physics for the Jupiter case? Who was the... Ah, yeah. So, Jupiter... It's Saturn. It's actually Saturn.
So on Earth, we have some heating from the surface, right? Because we have a solid surface and the radiation heats the surface and this causes convection. So on Saturn, you don't have a solid surface. It's a gaseous planet. But you still do have convection. And that's because there is some heat coming from the interior. So actually, Saturn has a positive energy budget. It still radiates more energy than it receives from the Sun. So it's a belief. There are some arguments that in order to get the zonal... The jet at the equator in the correct direction, you need to have some input of energy from the inside of the planet. So here it's not there. And I don't think I've shown... So maybe I've shown that. The jet... No, I haven't. The jet is in the wrong direction at the equator. And I didn't get the number of degrees of freedom that you have in the geopotential direction for your... So typically, for this one-eighth-degree simulation, I think we are about one million cells in the horizontal and a few tens in the vertical. When we discussed at CANUM, we talked about parallelization in time. Is it still something you could imagine? Well, I think that's something very interesting. It will take a long time until it gets operational. But that's something that several people are looking at. Because probably we will reach the point where we have used all the available parallelism in space at some point in the next decade or so. I don't know exactly when. But so if it's feasible, and there are many obstacles, maybe not in the fluid solver, but in everything else, if it's feasible, then it would be a very interesting direction to use. So personally, I'm interested in thinking at least about it. Maybe the last question from me. When we do energy-conserving time discretization, it's most of the time implicit. Are you also... Yeah, so I didn't talk about time discretization at all. So in time, we use standard explicit Runge-Kutta time stepping, which means that even if the spatial discretization is energy conserving, you lose that once you discretize in time. But you still get some stability properties. Like, you can show that linear stability is still there. And in time, we are fully explicit. So we are limited by a Courant condition based on horizontal sound waves. So the time step is relatively small. So with a, say, third or fourth order Runge-Kutta, you have very small time discretization errors. Okay. Thank you very much. Other questions? Okay. So thanks again, Thomas. Thank you.
Climate models simulate atmospheric flows interacting with many physical processes. Because they address long time scales, from centuries to millennia, they need to be efficient, but not at the expense of certain desirable properties, especially conservation of total mass and energy. Most of my talk will explain the design principles behind DYNAMICO, a highly scalable unstructured-mesh energy-conserving finite volume/mimetic finite difference atmospheric flow solver and potential successor of LMD-Z, a structured-mesh (longitude-latitude) solver currently operational as part of IPSL-CM, the Earth System Model developed by Institut Pierre Simon Laplace (IPSL). Specifically, the design exploits the variational structure of the equations of motion and their Hamiltonian formulation, so that the conservation of energy requires only that the discrete grad and div operators be compatible, i.e. that a discrete integration by parts formula holds. I will finish my talk by sketching how the desirable properties of DYNAMICO may be obtained with a different approach based on mixed finite elements (FEM). Indeed while DYNAMICO is very fast and scalable, it is low-order and higher-order accuracy may be desirable. While FEM methods can provide higher-order accuracy, they are computationally more expensive. They offer a viable path only if the performance gap compared to finite differences is not too large. The aim of the CEMRACS project A-HA is to evaluate how wide this gap may be, and whether it can be narrowed by using a recently proposed duality-based approach to assemble the various matrices involved in a FEM method.
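As an illustration of the assembly question raised in the talk and in the abstract (the shapes, names and sizes here are hypothetical, and this is not the A-HA implementation, only the general idea), the kind of rearrangement attributed to Kirby groups per-element finite element evaluations into one dense matrix-matrix product, so that optimized BLAS-3 libraries do the work instead of many small operations.

```python
import numpy as np

# Evaluate a finite element field at quadrature points for many elements:
# a loop of small mat-vecs versus one dense mat-mat product.
n_elem, n_basis, n_quad = 50_000, 10, 14      # hypothetical sizes
rng = np.random.default_rng(0)
tab = rng.random((n_quad, n_basis))           # tab[q, b] = value of basis b at quadrature point q
coeff = rng.random((n_basis, n_elem))         # one column of local coefficients per element

vals_loop = np.empty((n_quad, n_elem))
for e in range(n_elem):                       # element-by-element evaluation
    vals_loop[:, e] = tab @ coeff[:, e]

vals_blas = tab @ coeff                       # same result, one BLAS-3 call
print(np.allclose(vals_loop, vals_blas))      # True
```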
10.5446/57366 (DOI)
So, on Monday I assume that you heard about cardiac modeling already indeed, but essentially regarding the electrophysiology of the heart, that is the electrochemical phenomena. Today I will talk about biomechanical modeling, both direct and inverse, and you will see this will cover quite a wide area of subjects, going from multi-scale modeling to experimental validation and clinical applications, with a specific subject that will be patient-specific modeling that I will tell you about soon. So this is really teamwork that I'm going to present, from M3DISIM, that is an Inria team now joint with LMS, the Laboratoire de Mécanique des Solides, École Polytechnique and CNRS. The outline of my talk. So again this will be biomechanical modeling, and vastly multi-scale as you will see in a minute. So the focus is on mechanical modeling, and the electrochemical phenomena will be seen as an input to the mechanics. We'll cover some validation using experimental data, you will see, and of course we need to not only talk about material modeling but also organ and system modeling, and this will be illustrated with a clinical application that's called cardiac resynchronization therapy, CRT. And the last part of my talk will be dedicated to specific considerations regarding inverse problems aimed at estimating the model for the purpose of patient-specific modeling. So adapting the model to a given situation, and again with a detailed validation. Modeling of the heart starts of course from the myocardium, that is the material itself, the muscle that we need to model, and as I mentioned this is hugely multi-scale because it starts actually at this scale and below. This is a single myocyte, a cardiac cell, that is beating live so to speak under a microscope. This is typically 100 microns long here, and below this scale, what the banding structure that you see here translates, reflects, is the basic entity of muscle contraction that lies within the cells and that's called the sarcomere that you see here. So two bands denote the endings of the sarcomere, and the sarcomere is made of a specific arrangement of two types of filaments, so-called actin and myosin filaments, that are able to create bonds with each other. So there are head-like structures on the myosin filaments that can attach on the other, actin, filaments and that creates forces that tend to shorten the whole structure. The basic type of description that we use is actually inspired from Huxley, who was a great man in muscle modeling. So we revisited a model that he proposed in 1957 to describe, statistically speaking, the attachment of such heads, so-called cross bridges between the two types of filaments, via the n quantity that's here, and n denotes, for the population of heads that are located at the distance s from the nearest actin site at a given time t, the ratio of actually created bridges. So this is really accounting for these myosin heads, and that is described by this equation. So what we have here, e_c dot is the strain rate, the extension rate of the whole sarcomere, so the filaments slide along each other, this is the so-called sliding filament theory. And in the right hand side what you have is creation and destruction rates, f and g. And n_0 would be another function to model.
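As an illustration (an editorial reconstruction; the exact notation on the slide may differ), the Huxley-type equation being described has the form of a transport equation for the attached-bridge fraction, with creation and destruction terms on the right-hand side.

```latex
% Huxley-type cross-bridge population equation (schematic reconstruction):
% n(s,t) = fraction of attached bridges at distance s from the nearest actin site.
\frac{\partial n}{\partial t} \;+\; \dot e_c \,\frac{\partial n}{\partial s}
  \;=\; \big(n_0(s) - n\big)\, f(s,t) \;-\; n\, g(s,t),
% with \dot e_c the sarcomere extension rate (sliding of the filaments),
% f and g the creation and destruction rates, and n_0 the available fraction.
```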
So essentially here, s again denotes, and it's important as a variable in this equation other than the time, s is the location, the distance to the nearest actin site, so that would be the extension of the myosin head, if you represent it like this, like a spring-like structure, when the head is actually attached. So here what you need to model to, in a way, close the whole problem are the creation and destruction rates in the right hand side and the energy that would be created in a cross bridge once the bond is actually created. And once you have modeled the energy, in particular with the n variable you can average over the whole population to obtain, so the derivative of the energy with respect to extension would be of course the force created in a single bridge, and if you average over the whole population using n you obtain the total force, the average force created in a sarcomere. So this is how we will travel over the scales using the n variable. Roughly speaking, the type of equation that we can look at pertains to what is called the moments of n, so moments are obtained by weighting powers of s with n over all possible values of s, and if you multiply the Huxley equation by s to the p and integrate you obtain a dynamical equation like this on the moment of order p that relates the variation rate of mu_p to the previous moments and of course quantities that are inherited from the rates. Now what we frequently consider in our models are the simplest modeling choices for the three quantities that I pointed out on the previous slides, so creation and destruction rates, essentially two different chemical reaction rates depending on a switch that represents the calcium concentration above or below a certain level, so if it's above you create, if it's below you destroy the bonds, and for the energy of bridges the simplest you can think of is of course a quadratic energy with a little shift, which means that when you attach you're already under tension, so this is so-called symmetry breaking in physics. This is described in more detail in various papers but again these are the simplest choices that you can think of. Now at the macroscopic level, so if you consider the average, the integral of n, you will obtain something that's proportional to the total number of actually created bridges at a given time t, so the k_c here only depends on t of course, it's averaged over s, and if you multiply by the individual stiffness of one single bridge you obtain something that denotes, that represents, the stiffness of the whole sarcomere, so this is the moment of order zero on the previous slide, it's the integral of n multiplied by s to the zero. Now the stress created in the sarcomere, again I showed this on the first slide, would be obtained by differentiating the energy of a bridge, so this quantity here, and you can see that it's a combination of two moments, the zero order and first order moments.
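As an illustration (an editorial addition; symbols follow the schematic reconstruction above, not the slides), here is how the moment equations come out when one multiplies the Huxley-type equation by s^p and integrates, assuming n vanishes for large |s|, and how the stiffness and stress are read off the zeroth and first moments when the bridge energy is quadratic with a shift.

```latex
\mu_p(t) = \int s^p\, n(s,t)\,\mathrm{d}s, \qquad
\frac{\mathrm{d}\mu_p}{\mathrm{d}t}
  = p\,\dot e_c\, \mu_{p-1}
  + \int s^p \big[ (n_0 - n)\, f - n\, g \big]\,\mathrm{d}s ,
% and with a quadratic bridge energy W(s) = k (s + s_0)^2 / 2:
k_c \propto \mu_0 , \qquad
\tau_c \propto \int \frac{\partial W}{\partial s}\, n \,\mathrm{d}s
        = k\,(\mu_1 + s_0\,\mu_0).
```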
Now if you write the equations obtained from the moment equation on the previous slide, you obtain these differential equations here, this ODE system here that describes tau_c, the stress in the sarcomere, and k_c, the equivalent stiffness of the sarcomere, and this you can think of as some kind of complex constitutive equation that relates the extension strain to the stress via of course some time-dependent equations, so the rates are appearing here and the system also needs this equivalent stiffness as an intermediate variable, and u here is directly something that is inherited from the rates, the chemical rates that appeared on the previous slide, so it's a summary of f and g if you like. So u is one constant if the concentration is above the threshold and minus the other one if it's below. And likewise you can also monitor the microscopic energy, that is the energy that is stored within the chemical bonds, and this would use the second order moment in addition to the first two. So you can obtain an equation that describes the energy as well. Now this was for the active behavior, so the active forces that are created within the sarcomere; of course it's not the whole story because there's a lot of passive behavior also in such a system, in particular for the endings of the sarcomere and then for the whole cellular envelope and beyond for the extracellular matrix, so all this needs to be taken into account. And this we do by a rheological modeling in which various types of behaviors are aggregated within this type of spring combination and damper combination analogy. So here the sarcomere would be at this location over here and it's in series with some elastic components in parallel with viscosity, this is quite typical, and then so this would be the endings of the sarcomere typically, and then in parallel with again viscoelastic behavior for the cell envelope and the extracellular matrix. What's important is that the sarcomere essentially creates forces and stresses in the direction of the muscle fiber, so what you have in the top branch is essentially 1D. And the rest is of course 3D. And this is one way of modeling but of course you can make it as sophisticated as you like, starting actually with the description of the sarcomeres themselves, so we could take additional chemical states, here you had only two for the myosin heads, and additional internal variables. And now this schematic here, I mentioned it was an analogy with springs and dampers, it needs to be interpreted in a consistently nonlinear manner because the strains that we're going to look at are large, so not only displacements but also strains are large, so this needs to be revisited in this framework if you want to be energy consistent, which is essential here, like we heard yesterday in climate modeling. And once you do this you can obtain a complete energy balance that describes the variation of the total energy, so kinetic, elastic stored here and there, and then the elastic energy stored in the sarcomere, with the power of external forces. This would be the energy input due to, so the positive input due to, chemistry, and these are dissipations. Some more details quickly regarding the whole behavior, so the fundamental equation would be the principle of dynamics, the so-called principle of virtual work, that's written here in a total Lagrangian formulation, that is, you use a reference configuration to describe the displacements, y.
So y double dot would be the acceleration, here you have the inertia effects, here you have the external forces, represented here essentially by pressure against which the heart's walls actually work, so external loading, and here this is the internal work, so here this is the internal stress tensor that you have in capital sigma, and capital sigma aggregates the various components that I described on the previous slide, that is, we'll take hyperelastic parts, viscous parts and the part that was in the 1D sarcomere branch. For the hyperelastic term we use something that's very standard in biomechanical modeling, that is exponential laws, plus something that allows us to model incompressibility, because biological tissues are frequently very nearly incompressible. For the viscous term that's here we take the simplest that we can think of, so something that's essentially the square of the strain rate, and again the active parts that will govern this 1D stress tensor along, this is the fiber direction, and so this gives you a second tensor, and this stress tensor directed along the fiber direction is the result of the sarcomere modeling. So this is a very fast overview of the type of modeling that you can perform, but now we have something that's complete, that's quite general because this is one choice but you could use other constitutive choices, and that's again energy consistent. Now let's look at some experimental validations that we performed for this model. So this is, I could say, at the local scale: you want to check that the actual material behaves as modeled in the equations, and then we'll talk about the heart scale, that is the global scale. But here for the experimental validation at the local scale we used some data, in collaboration with a physiologist, data obtained on this type of little samples taken from lab rats, so these are called papillary muscles, they are made of the same myocardium material as the walls of the heart but they are much easier to deal with when isolated from the heart, they're still active when you preserve them, and then you submit these samples to a cycle of loading that's designed to mimic the cardiac cycle. So first you extend them passively, so you have a load on them, and then you will make them contract by electrical activation, and you prevent the fiber from shortening until they're able to reach a higher force, and then they shorten and they come back. So here this would represent the passive filling of the heart, the first stage of the cardiac cycle, and then this would represent here the, well actually this one would represent the ejection phase of the heart against a higher pressure, that is the pressure prevailing in the arterial system, so this would be the pressure prevailing in the venous system and that's in the arterial system. What you measure, well forget about this figure here, but what you measure in these experiments are recordings over time of the force measured at the tip of the sample, so it's limited by this M1 plus M2 value that you have seen, and then the shortening of the sample over the cycle, and this is what we aim at reproducing with the model, which you see here.
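As an illustration (not from the talk; the constants and the precise form are made up, so this is only a toy version of the kind of activation ODE described earlier, not the actual model), here is an isometric "twitch": a calcium-driven switch u(t) builds up and then destroys the active stress, which is the qualitative behaviour being compared against the papillary muscle recordings.

```python
import numpy as np

# Toy isometric twitch: simplified tau_c / k_c dynamics driven by an
# activation switch u(t) (all constants are hypothetical).
#   dk_c/dt   = -|u| k_c   + k0   * max(u, 0)
#   dtau_c/dt = -|u| tau_c + sig0 * max(u, 0)     (extension rate = 0, isometric case)
k0, sig0, alpha, beta = 1.0e5, 5.0e4, 30.0, 30.0  # made-up constants
dt = 1e-4
t = np.arange(0.0, 0.8, dt)
u = np.where((t > 0.05) & (t < 0.35), alpha, -beta)   # calcium above / below threshold

kc, tau, trace = 0.0, 0.0, []
for ui in u:
    kc += dt * (-abs(ui) * kc + k0 * max(ui, 0.0))    # equivalent stiffness (unused here since the
    tau += dt * (-abs(ui) * tau + sig0 * max(ui, 0.0))  # extension-rate coupling term vanishes)
    trace.append(tau)

print(f"peak active tension ~ {max(trace):.3g} (rises during activation, then relaxes)")
```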
So this is a comparison over time of force and extension, so shortening in this case, against two initial loads, two levels of preloads, so the M1 and then several levels of M1 plus M2 and it's a comparison again over time of these two quantities between simulated in dashed and experimentally in solid lines, so as you can see over the various types of loading it's pretty reasonable, reasonably accurate. Now again if you want to go to the organ scale it presents some additional challenges. In particular in the muscle sample case the fiber direction was quite straightforward but in general for the heart the structure is complicated and the fiber directions are not something that you can measure in vivo, so in a patient it's not something that you could actually measure. So what we do is so far because well hopefully in the future there will be some imaging modalities that will be able to give you fiber directions on a patient specific basis but so far what we do is prescribe the fiber direction based on standard anatomical knowledge, so typically based on the position across the thickness of the muscle that we compute we have an angle of elevation of the fibers that you can see here. This is the fiber direction used in this particular patient specific model. Now boundary conditions are also something quite complex for this type of model because there are complex interactions with the surrounding structures for which we use a combination of viscoelastic support and sliding contact, so it's very demanding in terms of modeling and CPU time. And also we have the truncation of the model because you have to truncate your system somewhere so here you need to substitute some artificial boundary conditions for the actual structures that you have taken out of the system. And then you also need to close the system in terms of hemodynamics, so blood fluid mechanics which for many cases we represent in a very simplified manner in each cavity of the heart as a single pressure volume system, so this means that you can from the point of view of the tissue for many applications you can consider that the pressure inside the cavity is homogeneous and then it's simplified you don't need to actually consider the flow equations for all the types of applications we actually can consider these equations. But then if you take the cavities as single pressure volume systems then you can also close the system by using simplified modeling of the outlets, so the arterial network by equations that are obtained as analogies to electrical circuits, so the combinations of resistances and capacitances outside of the heart, so this would close the system in terms of outgoing flow and pressure sitting at the outlet of the heart. 
These are called Windkessel models in this type of application, so a set of 0D models, that is ODEs. And now we also have some difficulties with the reference configuration, because in practice you never see the reference configuration in an actual heart, it's always moving, say; in particular the one that's the most still, at the end of filling, is of course not stress free because it's inflated due to venous pressure, so if you want to find the reference configuration, that is again the stress-free configuration, you need to solve an inverse problem to find the configuration in which stresses vanish, and this is difficult, well not only because it's an inverse problem but also because it's ill-posed in this case due to what you see here: this curve is the curve that represents the passive behavior of the heart, so force against extension, this is zero stress, and you see that it's extremely soft near the reference configuration, so of course it means that it's very difficult to get here by an inverse problem, it's very ill-posed. Okay, now let me show you the type of validation that we can obtain at the end of this path of modeling, and these are some trials that we made regarding what I mentioned before as cardiac resynchronization therapy, in collaboration with clinicians from King's College London. Okay, this is the detailed protocol but let me explain it in a few words. What happens is that for certain types of patients the activation of the contraction is quite heavily desynchronized due to some pathologies, and then of course as you can imagine the contraction is much less effective than with a healthy heart, because it's desynchronized in the way that active stresses are built. So these patients suffer from severe symptoms, heart failure typically, and the way you can try and cure them is by using this type of therapy, CRT, cardiac resynchronization therapy, to resynchronize the contraction, and the way you do this is to use a pacemaker with several electrodes to activate the heart so that the contraction happens in a more synchronized manner. The problem is that it is estimated that for at least 50% if not two thirds of these patients there is actually no response, no benefit from the therapy. So it's a heavy and expensive and risky therapy, to actually go and place these electrodes in the heart in some cases, and for at least 50% of these patients there is no response, no benefit. So here what's at stake is the possibility, the perspective, of using modeling, patient-specific modeling, so models of a given patient, to at least detect people who will not respond to therapy and of course also hopefully optimize the therapy for each specific patient. So decrease the rate of non-responders by using the model to tailor the therapy. At this stage of course what we have done is preliminary, so it's a validation stage, so what we looked at was how the model could be used to reproduce what was actually measured in the patients. So this is what we focused on, and the main indicator that's used to assess the success of the therapy is obtained by looking at the pressure inside the left ventricle, so the main ventricle, the increase rate of this pressure, of which you take the maximum, the maximum pressure increase rate, dP/dt max. And of course the more effective the contraction is, the higher this indicator should be, and this is the indicator that is used clinically and this is what we will use as an indicator. Now for one patient this is the patient-specific model that was constructed.
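As an illustration (not from the talk; all numbers are rough orders of magnitude, not patient data), here is a minimal two-element Windkessel of the kind of 0D outlet model named above, driven by a prescribed ejection flow, together with the way a dP/dt max value is read off a sampled pressure trace. In the clinical discussion the indicator is computed from the ventricular pressure; here the point is only how such an indicator is extracted.

```python
import numpy as np

# Two-element Windkessel:  C dP/dt = Q_in(t) - P / R
R, C = 1.3e8, 1.0e-8          # resistance (Pa s/m^3) and compliance (m^3/Pa), rough values
dt = 1e-4
t = np.arange(0.0, 1.6, dt)   # two cardiac cycles of 0.8 s
period, t_ej = 0.8, 0.3
phase = t % period
Q = np.where(phase < t_ej, 4e-4 * np.sin(np.pi * phase / t_ej), 0.0)  # ejection flow (m^3/s)

P = np.empty_like(t)
P[0] = 1.0e4                  # about 75 mmHg initial pressure
for i in range(1, len(t)):
    P[i] = P[i - 1] + dt * (Q[i - 1] - P[i - 1] / R) / C

dpdt_max = np.max(np.gradient(P, dt))
print(f"dP/dt max ~ {dpdt_max:.3g} Pa/s over this trace")
```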
So here you have two cross sections of the heart, the so-called long axis and short axis, left ventricle, right ventricle, left, right, and what you can see, so the electrical activation is in color, and you can see visibly, so of course it's in slow motion but even so it's very visible, that the activation starts in the right ventricle and then propagates to the left ventricle. This should be much more synchronized in order to have a healthy heart. This patient is a typical indication for CRT. This is the model, so complete heart cycles, at the end of the modeling path that I summarized in the first part of my talk. How can this, so this is the patient before therapy, how can this be validated? Well, we have lots of measurements, we have the pressures inside the ventricles. We also have medical imaging, so this is the first time that you will see it but you will see more. This is an MR sequence, in this case in three sections, and it's compared to the boundaries of the model in blue. So you can see that in these two cross sections, the main ones again, the model follows quite accurately what's measured in the imaging. Of course, as you could say, this is the result of calibration. So we tweaked all the model parameters, boundary conditions, everything I mentioned, to obtain this good agreement, including for the pressure. So this is the pressure inside the left ventricle. It's not perfect but the peak of dP/dt that's here is close to the data, it's the simulation in dashed blue and the data in solid red. So this is the part that we will look at, this peak of dP/dt. So again, everything was calibrated, but now what's extremely important is that we will not change the calibration before we apply the therapy in silico. So we will try and prescribe the same electrical activation as corresponds to the therapy in the various strategies that were actually tried on this patient. The first cases here are the therapies that proved to be ineffective for this patient. So this is the patient before therapy, baseline. We had this level, and for these two therapies, the dP/dt max was not increased. And again, what the model shows is accurate, it is close to what was measured. This is extremely important because here it means that the model is able to detect non-responding behavior to the therapy. And now these two resynchronization schemes here would be strategies that were actually effective for this patient. So here you see the corresponding models with the activation again in colors. And here the dP/dt max that was obtained for these two cases. Again it's the same patient with several strategies that were tried. This is a summary of dP/dt max measured versus simulated for this patient. So this is very nice in this case, but it's a single patient. So of course it requires much more validation. That is actually taking place right now within a European project that's called VP2HF in which we take part. So in this project, approximately 100 such cases were actually considered, with the data collected specifically for the project. And the same type of validation is actually happening right now. Okay, so this is for the direct modeling all the way to clinical application. Now as I mentioned in the beginning, I want to talk about something else in the remaining 15 minutes or so. I want to cover inverse problems pertaining to patient-specific types of motivations. That is, the estimation problems that you need to solve in order to adapt the model to a patient.
So the types of models that we have, as you have seen, are dynamical systems that we can summarize in a very compact form like here. So, a dynamical system. X would be the state variable, so typically the displacements at each of the points in the system. Theta would be a set of parameters that are unknown or uncertain and that you need to adjust in order to have the proper dynamical behavior. The equations that would be summarized here are typically of PDE type. So they would in particular come from the variational formulation that I showed you before. Sets of ODEs, for example, for the simplified fluid models that I mentioned in my modeling part. So it's a combination of equations that you have here. But again, it's heavily PDE based. And what you need to estimate in order to simulate a system that will be close to the actual system is the initial condition, always, in a dynamical system, a natural system like this, you never know what the initial condition is, and you need to also adjust a set of parameters, again the theta vector, in order to obtain the trajectory. Now the assumption that we have, and it's quite valid in practice, is that we have a much higher dimension in the state. So typically what you need to estimate here would be from a few thousand to a few million degrees of freedom. Whereas in theta, well, essentially it's kind of reasonable to estimate from, say, a few tens to a few hundred parameters. So very different dimensions in the two vectors over there. Now to cope with this uncertainty that we mentioned, you have actual measurements on the system, measurements coming from various images and signals that you can model based on the state variable. So Z would be the observation, the measurement vector. And you can formalize the measurement process by this observation operator H. And of course you always have some error in the measurement process, that's denoted by chi. This operator would represent the measurements either in a raw, but most probably in some kind of processed, form. And this would summarize, here we had a lot of equations summarized here, and here again we have a lot of measurements summarized, in particular medical imaging, as you can imagine, not straightforward to formalize like this. But I cannot dwell on this in this talk. And it's important to recognize that this again includes some modeling, whatever happens in the processing, this is a modeling equation that models the measurements. So in a way we have models on the two sides. Now estimation, we look at this from a sequential data assimilation point of view. This is also known as filtering. So data assimilation means that we're going to use some data available on the system to reduce the uncertainty and actually track the system in time. Sequential means that, well, this is the basic principle. So you're looking at a dynamical system in which you at least don't know the initial condition. So you have an uncertainty in the initial condition, but you have measurements. And now the principle of sequential data assimilation is that you are going to simulate a system that mimics the actual system that you're looking at. But you correct the system equations by a term that takes into account the discrepancy between the measurements and the simulated system via the observation operator. And what you want is to design this gain operator K here, the gain of the filter, in order to, so this would be the actual system. The system that you simulate will start from the a priori initial condition. So you have an error here.
But you hope that with this correction in the dynamics, you can converge to the actual target system. So how can you achieve this? Well, this was formulated by Kalman for the canonical case of a linear system. And he formulated this type of filter in an optimal setup. And these would be the continuous-time Kalman equations, which can also be extended to nonlinear systems by various means. But the major drawback is that the computation of this filter that sits here, okay, would be this here, uses the so-called covariance matrix P of Kalman. And the covariance operator or matrix has the size of the state. And it's full. It has no reason to be sparse like a usual physical operator. So P, in a system like we're considering here, is not something that you can compute or even store, actually, given the sizes of the systems. Now, an alternative to this type of optimal filtering is given by the theory of Luenberger observers. So the idea here is to design K as something much easier to compute, but nevertheless effective. And what you want is to have convergence of what's called the observer, X hat, so the virtual, in-silico, simulated system, to the actual system. The way you can look at it, so if you take, on the previous slide, you take the real system and the observer system with X hat and you subtract them, you obtain the system satisfied by the error, X tilde. And what you obtain is this equation. And for those who have some background in automatic control, what you will recognize is the equation of a closed-loop system that's built on the original one with a measurement H and a feedback K. So it means that if you have an effective feedback operator for your original system, you can use this to build a filter that will stabilize the error X tilde to zero, which is what you're trying to achieve. Bringing X hat to X means bringing the error to zero. Well, it's a typical problem in automatic control. So what you want is to bring the poles of the system as much as you can to the left-hand side of the complex plane and so on. And then the advantage of this strategy compared to Kalman is that for a wide range of systems, you have many feedbacks that work, actually. Most of them are physics-based, so they're easy to implement, including in the simulation software. Frequently, actually, the operators are already available in the software, at a reasonable cost and in a rather robust manner. Robustness, in this case, for a real system, is much preferable to the optimality that would be theoretically provided by Kalman filtering. Last but not least, the control, the feedback, is applied not on a real system, but on a virtual one. So it means that you can think of a much wider variety of feedbacks. You can extend the dissipative feedbacks that are known for the actual systems. Now what do you do? This was for state estimation, so this typically is to cope with the uncertainty in the initial condition. Now, in general, you want to also estimate parameters. This is the basic principle. What you do is, in the same type of setup, you consider what's called the augmented dynamical system. So you have the first equation. We suppose that the parameters here enter in a linear manner in the dynamical system. Of course, there are extensions to this. And then you introduce the additional equation that describes the parameters, that is, zero dynamics. Okay, parameters don't move, at least on the time scale considered here. And you have uncertainty on initial conditions.
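As an illustration (not from the talk; the matrices and gain are made up), here is a toy Luenberger observer on a two-dimensional linear system: the observer runs a copy of the model from a wrong initial condition, measures only the first component, and adds a gain times the measurement discrepancy; with a gain that places the poles of A - KH in the left half-plane, the error decays.

```python
import numpy as np

# Luenberger observer for a damped oscillator, observed through position only.
dt, nsteps = 1e-3, 10_000
A = np.array([[0.0, 1.0], [-4.0, -0.1]])   # state x = (position, velocity)
H = np.array([[1.0, 0.0]])                 # we measure position only
K = np.array([[2.0], [3.0]])               # hand-tuned gain: A - K H is stable

x = np.array([1.0, 0.0])                   # true (reference) system
xh = np.array([-1.0, 2.0])                 # observer, wrong initial condition
for _ in range(nsteps):
    z = H @ x                              # (noise-free) measurement
    x = x + dt * (A @ x)
    xh = xh + dt * (A @ xh + (K @ (z - H @ xh)).ravel())

print(np.linalg.norm(x - xh))              # small: the observer has converged toward the truth
```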
So the uncertain parameters turn into uncertainty in initial conditions here. And you can apply filtering strategies as before. So for example, Kalman, but Kalman again is not tractable for the state part, even though it could be for the parameters. So what we would like, so you have in this case two, the Kalman gain could be split into two parts. But what you want is not to use Kalman for the state, but to substitute the Luenberger type that we have discussed. The difficulty is that for the parameters, you don't know of Luenberger strategies that work, because the dynamics is not physical. So what you want here is in essence to retain Kalman filtering while changing to a Luenberger observer here. Well, we showed that this is possible. And the way you do it, so you use the Luenberger observer that works on the state equation, you use the Kalman filter for the parameters, and then you need to correct the first observer equation by the sensitivity of the state with respect to the parameters, with virtually no additional computational cost here. And we showed that this is actually effective in a series of papers. Now quickly, in the remaining time, I will show you some application of this using medical imaging to characterize an infarct by estimation. So this is again a collaboration with a hospital, in this case in Créteil. We used again some animal data obtained on a pig on which an infarct was actually created. So the idea here is, by occluding a coronary artery of the pig, to control the extent and location of the infarct. And this will be used for validation purposes of the whole estimation process. We have, of course, as data, we have a lot of imaging, pressures and so on. And the idea will be to use this modeling and estimation chain to characterize, that is locate and estimate the parameters regarding, the infarcted tissue. So what we want in this case is to estimate the contractility parameters, that are these quantities in the active modeling parts of the equation. Here is a calibration, again, of the model against MR sequences at baseline, so that is before the infarct was actually created. So again a nice agreement between the model and the images. And now this shows you the type of data that will be used in the estimation framework to perform this whole estimation strategy that I outlined in the previous slides. So what we used are segmented surfaces of one single ventricle, in this case the left, the main ventricle. What you can vaguely see in orangeish over here are the parts that correspond to the infarct, that, again, are known both from the controlled protocol and here also from one specific modality of imaging. So we have a control. Now what you see here is the healthy model, so the model that was calibrated with respect to the baseline data, against the data here in the infarcted heart, 38 days after the infarct was created. Of course, the healthy model doesn't know about the infarct and it contracts much more strongly than the actual heart at this stage. So you have this pretty big discrepancy and this is what's going to be used in the sequential data estimation strategy to correct the dynamics and estimate parameters at the same time. So here what you have is the corrected observer system in magenta. As you can see, it's quite close to the data. It doesn't have to coincide exactly with the data, it's just a correction. But it's very close to the actual walls. And then the orange was the heart before infarct. So you can see that's a big difference.
And at the same time, we estimate one quantity in various regions of the heart. So this is a view from the top of the ventricle. This would sit on this part. And dark means low contractility, light, high contractility. As you can see, the dark, low-contractility region really represents the part where the infarct was actually created. So of course, we don't have any ground truth in this case, other than this control information. But it's quite a nice validation of this estimation setup in a real-data clinical setup. So, concluding remarks. I summarized the type of multi-scale modeling that can be performed to represent the myocardium based on actual physical and physiological considerations at all scales, starting with the nanoscale of the actin-myosin bridges. We carried fundamental physical requirements, in particular energy balance, all the way throughout the scales. And this is actually also true for the numerical procedures that we design. So these numerical procedures satisfy the energy balances. This of course is well adapted to multi-physics coupling, because when you want to couple to other phenomena, chemistry, fluid flows within the muscles, that is so-called blood perfusion, all this will mean that you will be able to formulate the coupling in an energy-consistent manner if you start with components that satisfy these balances. We have already some substantial experimental and clinical validation, as I showed. And I also covered the inverse problems that you need to solve in this case, showing that in our team, we focused on some novel types of estimation methods that we designed. And this provides, of course, key information for diagnosis, because once you estimate some actual parameter values of interest for medical purposes, for example, this contractility value here has a meaning for a cardiologist. And this parameter cannot be measured in any other way in practice. So here, you provide something that's very important for diagnosis. And then once you have adapted your model to the patient, you can carry out the program of patient-specific modeling. That is, you can use the model to predict the future of the system, that is typically the outcome of therapeutic strategies, as I showed in the first place. Again, if you want to have a predictive model, you need to adapt it to a given case. So here, the inverse problems have two benefits: diagnosis when you estimate, and then prognosis when you use the predictive nature of the models. Okay, this is the end of my talk. Thank you very much for your attention. It's just a very naive question concerning the last part of your talk. Do you have some issues of observability of the state when you estimate the parameters and the initial state? In practice, I mean. Of course, this is a very complex problem. It's a major issue in estimation and inverse problems. So it's not something that you can answer in a unique manner in general. So as you mentioned, it needs to be addressed not only in practice, but by looking at a given type of setup, that is, a model that needs to be estimated versus the data that you have at hand to estimate the model. In our case, the problem of observability is mainly an issue for identification purposes. That is, it's more an issue for identifying, estimating parameters than for estimating the state. Clinical images are very rich, as you can see.
So it means that in practice the problem is well observable with respect to the state, whereas what you can get in terms of parameters depends not only on the kinematics but also on the richness of the description in the model that you have. If you want to estimate a large number of parameters, which means, for example, complex pathologies with multiple components of the system affected, then you may get into trouble in terms of estimation. The good news with the type of strategies that we have, sequential strategies, is that in the end you not only have an estimate of the quantity itself, you also have an estimate of the error that you make. This is hidden in the covariance operator that you are actually computing as well: you have an answer, but you also have a level of confidence in the answer, in the estimation. And then there are lots of additional considerations that you can include a priori to assess the identifiability of the system; for this, the expert is right here, so you can ask him afterwards. Okay. Questions? Maybe a short one about the HPC challenges that you have to deal with for this kind of big simulation? Well, it's a tricky question, and he's asking it deliberately. In our team we are not HPC-focused. We know that HPC is highly technical, and we cannot cover everything; you have seen that we already cover quite a large spectrum of topics. Of course these are highly nonlinear, multi-physics models, so we are always limited by the computing time. I don't want to spend too long on this, but to give you typical figures: a single heartbeat of our main biomechanical model takes, if you don't use sophisticated boundary conditions, a couple of hours on a regular workstation. If you use sliding contact to accurately represent the interaction with the external structures, it takes a day for a heartbeat. So it takes a lot of time for a number of degrees of freedom that is 20,000, maybe 30,000. As you can see, we are limited. We would like to go higher in terms of degrees of freedom to have finer meshes, we would like to couple with other types of physics and so on, so we have various paths to try and achieve this. And we have a new code that we are building right now, with Sébastien Gilles in particular, which is PETSc-based; in this case we use some HPC libraries to achieve higher efficiency in the code. But, well, you all know this: it is a never-ending race, you always want to push the modeling further and you are always limited by computing capabilities. Thank you very much. Other questions? Okay, so thanks again, Dominique. Thank you.
The heart undergoes some highly complex multi-scale multi-physics phenomena that must be accounted for in order to adequately model the biomechanical behavior of the complete organ. In this respect, a major focus of our work has been on formulating modeling ingredients that satisfy the most crucial thermomechanical requirements - in particular as regards energy balances - throughout the various forms of physical and scale-related couplings. This has led to a "beating heart" model for which some experimental and clinical validations have already been obtained. Concurrently, with the objective of building "patient-specific" heart models, we have investigated some original approaches inspired from data assimilation concepts to benefit from the available clinical data, with a particular concern for medical imaging. By combining the two fundamental sources of information represented by the model and the data, we are able to extract some most valuable quantitative knowledge on a given heart, e.g. as regards some uncertain constitutive parameter values characterizing a possible pathology, with important perspectives in diagnosis assistance. In addition, once the overall uncertainty has been adequately controlled via this adjustment process, the model can be expected to become "predictive", hence should provide clinically-relevant quantitative information, both in the current state of the patient and under various scenarii of future evolutions, such as for therapy planning.
10.5446/57367 (DOI)
Thank you very much, Karine, for the introduction especially, but also for all the work you have done to organize this 2016 session of CEMRACS. I'm very happy to be here, and I think we are really lucky to be able to use these facilities. What I would like to talk about today is something we did during the PhD thesis of Matthieu Aussal, who is in the back here. The subject of the thesis was 3D sound, and the aim was to be able to reproduce it on headphones. Maybe I can explain the idea with this cartoon here: to reproduce with headphones the sound that you would have in your living room. When you are in your living room, you have a music system, typically two loudspeakers, one on the left, one on the right, and you hear your music. What happens is that you actually listen to the left and the right loudspeakers with your two ears: each loudspeaker is heard by both ears. That is not what you get when you hear the same music on headphones. On headphones you hear the right channel in the right ear and the left channel in the left ear, and the music is not mixed to be listened to on headphones, but to be listened to on loudspeakers; that is very different. Maybe, if you listen to music on headphones, you have the impression that the music is inside your head and not outside you. It is very uncomfortable, but unfortunately in some cases you are obliged to wear headphones, for example when you are doing sports, when you travel or when you are working. It is very difficult to bring your music system with you to work; my neighbors don't want that. So when I listen to music at the lab I usually wear headphones, as I'm sure you do. The idea is therefore to understand what happens when you hear a sound, what makes you feel the direction of the sound with your two ears, and whether you are able to reproduce that with headphones. Just to convince you that this is indeed possible, I would like to show you this film, which was made in the lab of Edgar Choueiri in Princeton. What you see here is a room with several people, seven people, and a dummy head which is binaural: this head has two microphones, one in each ear. At the bottom you will see someone who is in an anechoic room and who listens to exactly what the head is recording. So this guy wears headphones and really listens, in real time, to exactly what this head is recording, and he will try to localize which person is talking. So let's run the film. Okay. What you see is that the guy wears headphones but is perfectly able to localize the sound in the space, precisely because the sound is recorded with this head. In order to go further and try to understand what happens when you want to localize, what gives you the ability to localize sound in 3D with only two ears, you have to really understand the hearing mechanism. The first observation is that this part is very easy: you just need a microphone head, and you can localize sound if you record anything with this microphone. So that's what we did.
We put the head on a batucada: you have a batucada here, the microphone head is recording it, and when you hear the music that was recorded with this head, although it is terrifying, it really contains spatial information about the music, not only two independent channels. It is the same story as before, because the microphone is a head. Okay, let's go. You can also do that with real human beings: you take a volunteer, one of our students here, you put microphones in his ears, and then when he plays you can really hear, in real time, what this guy is hearing. What happens is that you really feel the cello at the bottom of your body, even though you are not there at all. So that's the idea. That part is very easy, but it is not what we want to do. What we would like is to reconstruct, to build something: imagine that you have a monophonic source, something on your CD, that you want to hear as if it came from the left loudspeaker. What should you do? You need to process your signal in order to give to the two ears the signals that make you feel that the music is coming from outside, from the left loudspeaker. It is a little bit like the 3D vision, the VR vision that we are very used to: if you have 3D glasses, it is very easy to reconstruct a 3D view, because you just make each eye see the right picture. That is exactly what we want to do, but with the two ears. We want to reconstruct the signal that gives each ear what it should hear if the sound were coming from outside, from this position here, for example. So you start with a monophonic source and you want to construct the two channels, right and left, that you will send to the headphones so that the listener feels as if the sound were coming from outside: from the left, from the top, from the bottom, from the right, from behind, from in front, from wherever. That is a little bit complicated, because each ear actually works as a filter. When you have a sound, each ear hears it differently depending on whether the sound comes from the right, the left, the top, the bottom, and so on, and this also depends on the frequency. So for each frequency and each position of the sound source, each ear hears something different, and it is the difference between what arrives at the two ears that lets the brain reconstruct the localization. That is exactly the same as in 3D vision: if the two eyes see two different pictures, the brain reconstructs the volume, the 3D scene you are looking at. So that's the idea. To do that we need those filters, which are called HRTFs, Head-Related Transfer Functions. An HRTF is a transfer function that depends on the direction of the sound and on the frequency: you give as input the frequency of the sound and the position of the source, and this gives you a filter, the HRTF, that you can either measure or compute. Here, typically, you see for one position of the source, and for many different people, what happens at every frequency; that is the HRTF for one position and one ear. But if you really want to do 3D sound, you have to measure or build those HRTFs for any position and any frequency.
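To make the rendering idea concrete, here is a minimal Python sketch of binaural filtering: a mono signal is convolved with a left and a right head-related impulse response for the desired direction. The impulse responses are assumed to come from some measured or simulated HRTF set; the names and the setup are illustrative, not the code used in the thesis.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Return a stereo signal giving the impression that 'mono' comes from
    the direction associated with the two head-related impulse responses."""
    left = fftconvolve(mono, hrir_left)    # what the left ear should receive
    right = fftconvolve(mono, hrir_right)  # what the right ear should receive
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out
```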
And so that's what we wanted to do. The point of what I would like to explain here, after this very long introduction, is that you have two possibilities. Either you do measurements: you put a volunteer on this seat and you send him many frequencies from everywhere with those loudspeakers; that takes a long time and it is very painful. Or you try to do it by numerical simulation. A numerical simulation reproduces exactly the experiment: if you send, for example, a sound source from the top of this head and you measure what happens at the left ear, and maybe at the right one, and if you repeat this simulation for any frequency and any position, you build the database that enables you to reconstruct 3D sound. That was the goal of the thesis. If you want to do that with numerical methods, the classical tool is integral equations for acoustics. For those of you who are not familiar with integral equations, I have a one-slide explanation. It rests on the fact that you are able to solve the so-called Helmholtz problem inside and outside a surface. If you have a solution u which, inside and outside the domain, solves the Helmholtz equation together with a boundary condition at infinity, the so-called Sommerfeld radiation condition, then there is a formula that gives you u in terms of the jump of u across the surface and the jump of the normal derivative of u across the surface. This is called the integral representation of the solution, written through two potentials that we usually call the double-layer potential and the single-layer potential. Those are the formulas; I will come back to them very little in what follows, because that is not exactly what I would like to talk about today. What matters is that those operators are convolution operators. The single-layer operator, for example, is exactly an integral over the surface of a function g, the Green kernel of the Helmholtz equation, which is explicitly given by this expression, applied to lambda, an unknown which lives on the surface only. So typically, when you want to solve this problem numerically, you need to assemble this kind of operator on the surface only. And if you look at it, you see that this quantity is non-zero for any x and y: you have a link between any x and any y on the surface, so at the end, when you discretize this with finite elements for example, you get a matrix which is completely full, completely dense. And that is the problem: all these boundary integral operators give, at the end, matrices that are dense. This is a very well known problem, and it is what I would like to address today, explaining what we did on that topic. If you compute the Galerkin approximation of S, the single-layer operator for example (it would be exactly the same for the double layer), using the operators recalled below, you get the following.
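For reference, here is the standard form of the Green kernel and of the layer potentials being discussed; this is textbook material, stated only to fix notation (sign and normalization conventions vary between references).

```latex
% Helmholtz Green kernel and layer potentials (standard form)
\[
  G(x,y) \;=\; \frac{e^{ik|x-y|}}{4\pi\,|x-y|},
\]
\[
  u(x) \;=\; \int_{\Gamma} \partial_{n_y} G(x,y)\,[u](y)\,d\Gamma(y)
  \;-\; \int_{\Gamma} G(x,y)\,[\partial_n u](y)\,d\Gamma(y),
  \qquad x \notin \Gamma,
\]
% single- and double-layer potentials
\[
  (S\lambda)(x) \;=\; \int_{\Gamma} G(x,y)\,\lambda(y)\,d\Gamma(y),
  \qquad
  (D\mu)(x) \;=\; \int_{\Gamma} \partial_{n_y} G(x,y)\,\mu(y)\,d\Gamma(y).
\]
```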
When you discretize this with a quadrature formula on the surface, you end up with this approximation, and the difficulty is here: you need to evaluate the Green kernel for every pair of integration points on the surface. You take any two integration points and you have to compute this matrix entry, and that is the problem, because if you have N integration points on the surface, the matrix is dense, which gives you N squared numbers, complex numbers in this case. That limits any such method, typically the BEM, very strongly. If you want to compute and store this very big matrix, it is impossible: the storage is proportional to N squared, which is typically out of reach for more than about 10,000 integration points, and 10,000 points on a surface in 3D is very, very small. On today's computers you can maybe go to 20,000, but not much further. You could say: I don't want to store the matrix, but since I have a formula for the kernel, I can compute the matrix-vector product on the fly and solve the linear system at the end. What happens is that not only is the matrix impossible to store, this on-the-fly product is also very slow: it takes something comparable to N squared operations, which is again very slow for the same order of number of integration points. So we were a bit stuck. What we did at the beginning of the PhD thesis was to write a BEM code, in MATLAB actually; we assembled the matrix, we solved everything, and we were stuck because of exactly this problem: the matrix could not be stored as soon as the number of integration points exceeded roughly 10,000, which is very small for our application and insufficient for what we wanted to do.
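As a small illustration of the quadratic cost being described (and nothing more), here is a naive Python snippet that assembles the dense Helmholtz kernel matrix between all pairs of points and applies it to a vector. The points, the wave number and the sizes are arbitrary placeholders, and the singular diagonal is simply ignored.

```python
import numpy as np

def dense_helmholtz_matrix(points, k):
    """points: (N, 3) array of integration points; returns the dense N x N kernel matrix."""
    diff = points[:, None, :] - points[None, :, :]     # (N, N, 3) pairwise differences
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)                        # crudely ignore the singular diagonal
    return np.exp(1j * k * r) / (4 * np.pi * r)        # O(N^2) storage

N = 2000
pts = np.random.rand(N, 3)
G = dense_helmholtz_matrix(pts, k=5.0)                 # already ~64 MB of complex numbers
u = np.random.rand(N) + 0j
v = G @ u                                              # O(N^2) matrix-vector product
```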
There are actually a few (not many) methods that enable you to overcome this problem: they are called the FMM, for fast multipole method, or H-matrices, and things like this, and I would like to present the SCSD today, which is yet another such method. The idea behind all these methods is that there is a trick you can use when your kernel (remember, this is the Green kernel of your problem, in our case e to the ikr over r) has separated variables in x and y. If you could write the kernel as a sum of quantities that depend only on x multiplied by quantities that depend only on y, you would be very happy. Here is an example: if the kernel were exactly (x minus y) squared, you could expand this expression, and then the matrix-vector product with this g could be done in O(N) operations instead of O(N squared), because everything simplifies. You see that you have a sum of three quantities, and each of them is a product of something that depends only on j and something that depends only on i. For this term, for example, given the u_j you compute once and for all the sum of x_j squared times u_j, which is constant in i, and then you just distribute this quantity over all the i's; computing the sum is O(N) operations, and the distribution is again O(N) operations. The same goes for this other quantity: given the u_j you compute the sum once and for all, distribute it to all the i's and multiply by x_i squared, so again O(N), not N squared. And the same here: you compute the sum once and for all, distribute it, and take the scalar product. So the idea behind all these compression methods is to write your kernel in a form that separates the x and y variables. If you are able to do that, you are in very good shape, because you can use this trick: compute everything that depends on y first, and then distribute it to the x's. That's the idea.
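To make the trick explicit, here is a small Python illustration on exactly this toy kernel (x - y) squared; the points and sizes are arbitrary.

```python
import numpy as np

# Separation of variables on g(x, y) = (x - y)^2 = x^2 - 2xy + y^2:
# each term is an x-part times a y-part, so sum_j g(x_i, y_j) u_j costs O(N).
x = np.random.rand(1000)
y = np.random.rand(1000)
u = np.random.rand(1000)

# O(N): compute the y-dependent sums once, then distribute to all x_i
s0 = np.sum(y**2 * u)
s1 = np.sum(y * u)
s2 = np.sum(u)
fast = x**2 * s2 - 2.0 * x * s1 + s0

# O(N^2) reference, for comparison
dense = ((x[:, None] - y[None, :])**2) @ u
assert np.allclose(fast, dense)
```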
Unfortunately my kernel is not (x minus y) squared; it is a complex exponential, which is complicated, and it seems hopeless to write it in that form. And yet this is exactly how the FMM works. The FMM, the fast multipole method, is a rather complicated method, but at the end of the day what happens for the Helmholtz equation, for this kernel, is that you have an expression like this: you can write the kernel as a limit, when L is very large, of an integral over S squared, the unit sphere of R3, of an exponential of something that depends only on x, multiplied by something which is fixed and depends only on m1 and m2 (I will come back to what m1 and m2 are), multiplied by something that depends only on y. So you have this kind of splitting of the x and y expressions, which is very good in view of what we want to do for fast products, except that you have an integral; but an integral, once you discretize it with any quadrature formula, is a sum, and you get back a sum of expressions that depend only on x multiplied by expressions that depend only on y. The principle of the FMM is then to group points together. In the original problem you have points here that should speak to all the points there: all the x's are here, all the y's are there, and you have an interaction between every x and every y. What you do is that, if this block here is far from that one, you gather those points at the center m1, transfer to the point m2, and then redistribute to all the y points; this way you compute only one interaction instead of many. That's the idea. It is quite complicated, recursive, hierarchical, rather difficult to implement in practice, and very problem-dependent. That is the FMM; it is very famous, it was elected one of the top ten algorithms of the 20th century, but still it is complicated and not easy to put into practice.
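For orientation only, the high-frequency FMM rests on a plane-wave (diagonal) expansion of roughly the following shape; this is quoted schematically from the standard literature, and the normalization constants, signs and truncation conventions vary between references and matter a great deal in practice.

```latex
% Schematic diagonal (plane-wave) form used by the high-frequency FMM:
\[
  \frac{e^{ik|x-y|}}{|x-y|}
  \;=\; \lim_{L\to\infty} \int_{S^2}
        e^{ik\,\hat{s}\cdot(x-m_1)}\;
        T_L\!\big(\hat{s},\, m_1 - m_2\big)\;
        e^{ik\,\hat{s}\cdot(m_2-y)}\, d\hat{s},
\]
\[
  T_L(\hat{s}, u) \;\propto\; \sum_{\ell=0}^{L} (2\ell+1)\, i^{-\ell}\,
  h_\ell^{(1)}\!\big(k|u|\big)\, P_\ell\!\big(\hat{s}\cdot \hat{u}\big),
\]
% where m_1, m_2 are the centers of the two interacting clusters,
% h_l^(1) are spherical Hankel functions and P_l Legendre polynomials.
```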
And since we were in MATLAB, if you come back to the idea that we are writing a MATLAB toolbox, it is very complicated in MATLAB to build this kind of hierarchical octree and to work it out without writing many nested loops, which is a disaster in MATLAB. So we were a little bit stuck at that level and we wanted to try something new. That is how we arrived at this new formalism, which is called the SCSD. It is new, it is what we did, and what I would like to popularize, because I think there are very interesting ideas behind it, so let me explain how it works. The principle is that all our operators are convolutions in space, and convolution is very nice, not in space actually, but in Fourier space: in Fourier space convolutions are just products. So the idea is that, instead of all the methods I have just described, we will try to work in Fourier space. The first remark is that if my kernel were only the cardinal sine, then its Fourier transform would simply be a Dirac mass on the sphere. So let me write this down. The kernel I have is the exponential e to the ikr over r, and it has two pieces: a cosine part, that is obvious at this level, plus i times a sine part. What I am telling you is that this sine part is a cardinal sine, up to the constant factor k, the wave number, and we don't care about constants here. The idea is that if you write down this part of the kernel (forget the cosine for the moment, concentrate on the sine part), its Fourier transform is very simple: it is just Dirac masses on the sphere. So you can represent it by saying that this function of z (z is in 3D) is the inverse Fourier transform of its Fourier transform; very simple: this function is an integral over S squared of e to the i s dot z, ds. I am just saying that this function is the inverse Fourier transform of that. And if you plug in x minus y in place of z, and you play a little bit with constants, you get this formula, which is very well known in the community of Helmholtz solvers: the imaginary part of the kernel is k over 4 pi times the integral over S squared of this. Here you immediately see that you have exactly split the variables x and y, and you end up with an expression very close to what you had in the FMM business, except that it is much simpler: a simple separation of the x and y variables. If you go a little further and write a quadrature formula for the S squared integral, you get a sum of functions that depend on x and on y, but separately. So you go a little further and write it out exactly: you have a continuous version, which simply says that the imaginary part of my kernel convolved with my unknown is given by this expression (I have used the quadrature formula on my domain to write it, and I have replaced the integral over S squared by the previous formula, that is my cardinal sine). When you discretize the S squared integral, you get a discrete formula: a sum over a certain number of quadrature points on S squared, the quadrature points on the domain being the y_q, and after regrouping terms you get exactly this expression. You immediately see what happens: knowing the unknowns, you want to compute this quantity for all the s_p (the quadrature points on the sphere), that is, something that, for all q, computes this expression for all p; then you multiply by sigma_p; and then you have to come back: you have an expression that no longer depends on y, indexed by p, and you have to send it back to something indexed by the target points x. So you have split the computation.
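For reference, the plane-wave identity used here for the imaginary part, and its discrete quadrature version, read as follows (stated for the kernel written as e^{ikr}/r; with the 1/(4π)-normalized Green kernel an extra 1/(4π) appears).

```latex
% Plane-wave identity for the imaginary part of the kernel, and its quadrature
\[
  \frac{\sin\big(k|x-y|\big)}{|x-y|}
  \;=\; \frac{k}{4\pi}\int_{S^2} e^{ik\,\hat{s}\cdot(x-y)}\, d\hat{s}
  \;\approx\; \frac{k}{4\pi}\sum_{p} \sigma_p\,
      e^{ik\, s_p\cdot x}\; e^{-ik\, s_p\cdot y},
\]
% with s_p, sigma_p the nodes and weights of a quadrature rule on the unit
% sphere: the x and y dependences are separated.
```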
All these operations, although they look like a product with a fully populated matrix (for every p you need a sum over all q), seem to cost N squared again; but actually they do not, thanks to a method called the NUFFT, the non-uniform FFT, in 3D. It is called type 3 because both the s_p and the y_q are not regularly spaced in 3D, and thanks to this algorithm you can compute these sums in N log N. So the idea is: for all q you have the data, you compute the sums for all p in N log N, you multiply by sigma_p, which is O(N), and then you compute the backward NUFFT, again in N log N; that gives you a way to compute the convolution in N log N instead of N squared. Don't you need N to be a power of two or something like that? No, you don't, because the points are not regularly spaced in 3D; they are arbitrary points. So there is no hierarchical structure? Actually there is, but it is hidden inside the NUFFT algorithm. The basic idea of the NUFFT is that there is a regular grid inside the algorithm, which you don't see because it is inside the routine; on this regular grid you do a classical FFT, and the FFT is hierarchical, so the hierarchy is hidden there, in the routine, not in our method. We use the hierarchy inside the classical FFT, that's exactly the idea. For those of you who don't know the NUFFT, you can understand it roughly like this: I have a cloud of points in 3D, I interpolate it onto a regular grid, I do a classical FFT on this regular grid, so I now get a regular grid in Fourier space, and then I interpolate onto the cloud of points in Fourier space. So, thanks to two interpolations and one FFT (there are many more difficulties than this, but that is the idea), you go from the classical FFT to this non-uniform FFT, which is really non-regular both in x and in xi. And we really need it to be non-regular at least in the xi variable, because in our framework the xi points live on the sphere: they are not on a cube or a regular lattice, they are quadrature points on a sphere, not regularly spaced. And in x as well, since our problem is acoustics around a head, the x points are typically on the boundary of the head, so they are not regularly spaced either.
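To make explicit what the type-3 transform computes, here is a naive Python evaluation of the sums in question; this direct version costs O(Np times Nq) and is only meant to show the operation, while in the actual method an NUFFT library (for example the FINUFFT package, whose exact Python API should be checked against its documentation) performs it in N log N. All sizes and arrays below are placeholders.

```python
import numpy as np

def nufft3d3_direct(y, c, s, isign=-1):
    """Direct type-3 sums: f_p = sum_q c_q * exp(isign * i * s_p . y_q).
    y: (Nq, 3) points in space, c: (Nq,) coefficients, s: (Np, 3) frequencies."""
    phase = isign * 1j * (s @ y.T)          # (Np, Nq) matrix of s_p . y_q
    return np.exp(phase) @ c                # the f_p

# space -> Fourier on the data, weight, then Fourier -> space at the targets
y = np.random.rand(500, 3)                  # source points
x = np.random.rand(400, 3)                  # target points
s = np.random.randn(300, 3)                 # quadrature points in Fourier space
c = np.random.rand(500) + 0j                # densities at the y points
omega = np.random.rand(300)                 # stands for the kernel weights

f_hat = nufft3d3_direct(y, c, s, isign=-1)              # forward transform
g = nufft3d3_direct(s, omega * f_hat, x, isign=+1)      # back to the x points
```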
So that was for the imaginary part, and that is only one piece of the kernel. We were very happy, but at that level this is the easy part, and it is far from being the whole kernel: the whole kernel also has the cosine part, and what do you do with that? We were stuck for a few months at that level; we tried to work out formulas comparable to the one for the cardinal sine, without success, and in the end we got this, for which we now have a clear explanation of why it works, why it is normal that it works. The idea is that if you follow exactly the same path as before, the Fourier transform of the cardinal cosine is given by 4 pi over (xi squared minus 1): it is singular at xi equal to 1, but it is supported on the whole of R3, it is not concentrated on the sphere, it lives in the whole 3D space, all the way out to infinity. If you then write, exactly as before, the cardinal cosine as the inverse Fourier transform of that, you get this formula, and the trick is again that if you put x minus y in place of z, you can split the two exponentials exactly as before, except that the integral in front is no longer over S squared but over R3, and there are many more points in R3 than on S squared; that is a problem. But let us go further: you use the fact that everything is radial (the Green kernel is radial, and the Fourier transform of a radial function is radial), so you can make a spherical change of variables, and what you get is that the cardinal cosine is an integral over R plus of rho squared over (rho squared minus 1), times an integral over S squared of a quantity that essentially does not depend on rho (there is still a rho inside, but that's it). Since you recognize here the cardinal sine function, exactly the same one as before, you get that the cardinal cosine is an integral of cardinal sines. You should think of this formula as expressing the real part of the kernel as a linear combination of cardinal sines. You know how to handle each cardinal sine, and an integral is again a sum once you discretize everything, so you get a sum of cardinal sines and you are back to the previous, very clear strategy. So let me rewrite the formula I had before: I know the continuous formula is exact, and I want a discrete version of it, that is, I want to write the cardinal cosine as a sum of weights alpha_m (the discrete version of the weight in the integral) multiplied by cardinal sines at some frequencies rho_m, which also need to be determined. Although the continuous version is rather clear, the discrete one is much more delicate: how do you choose the numbers alpha_m and rho_m, and how many of them do you need to write such an approximation in the best possible way?
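For orientation, here is the continuous identity and the discrete ansatz being discussed, written with the convention (used on the slides) that the Fourier transform of cos(|z|)/|z| is 4π/(|ξ|²−1); the 2/π factor and the principal value come from carrying out the inverse transform in spherical coordinates, and the α_m, ρ_m are exactly the unknowns of the question above.

```latex
% Cardinal cosine as an integral of cardinal sines, and its discrete ansatz
\[
  \frac{\cos(|z|)}{|z|}
  \;=\; \frac{2}{\pi}\,\mathrm{P.V.}\!\int_0^{\infty}
        \frac{\rho^2}{\rho^2-1}\;\frac{\sin(\rho |z|)}{\rho\,|z|}\,d\rho
  \;\;\approx\;\; \sum_{m=1}^{M} \alpha_m\,\frac{\sin(\rho_m |z|)}{|z|}.
\]
```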
We thought a lot about this (I could speak at length about this problem, we really worked a lot on it), and we discovered many connections between this and decompositions in Fourier series, in 1D actually. But let me give you the answer. The answer is that you should take rho_m equal to this, and, maybe I should say things in this order: you have to constrain yourself to an interval; you cannot hope for such a good formula for every z. That is impossible: if z is very large, for example, the larger it is, the more points you need in your quadrature formula, so you have to restrict yourself to an interval which is bounded from above. What is maybe more surprising, and I will explain why, is that you also need to be bounded from below. So you look for exactly this kind of formula, not for all z, but for z in an interval [a, b] which is fixed once and for all, depending on your problem. Why do you need to stay away from zero? If you simplify the common 1 over z factor on the left and on the right, you want to express a cosine as a sum of sines, and this is very difficult because the cosine is even while the sine is odd: you are trying to approximate an even function with odd functions, and that is very bad near zero; near zero you cannot hope for a good result. That is why you need to constrain yourself from below. But once you give yourself a and b, you can compute the rho_m: you take M numbers like this, they are really explicit; and once you know the rho_m, you can compute the alpha_m by solving a least-squares problem, which is written here. It is a linear problem, since the rho_m are fixed: you give yourself a set of points z_i and you compute the coefficients alpha_m by a least-squares approximation, so it is exactly a linear system that you solve. And you can actually study this method, and you get at the end that the number of such pairs (alpha_m, rho_m) is proportional to the log of the error you are willing to make in this formula, multiplied by (a + b) over a, with constants that are independent of everything.
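Here is a small Python sketch of the least-squares step just described: given frequencies rho_m, fit weights alpha_m so that cos(z) is approximated by a sum of sin(rho_m z) on [a, b] (after simplifying the common 1/z factor). The particular choice of rho_m below is only a placeholder for illustration; the actual rho_m come from the explicit Fourier-series argument explained later in the talk.

```python
import numpy as np

a, b = 0.3, 30.0
M = 25
rho = (2 * np.arange(M) + 1) * np.pi / (2 * (a + b))   # placeholder frequencies
z = np.linspace(a, b, 2000)                             # sample points z_i on [a, b]

# Design matrix of sines evaluated at the sample points, then linear least squares
A_mat = np.sin(np.outer(z, rho))                        # (Nz, M)
alpha, *_ = np.linalg.lstsq(A_mat, np.cos(z), rcond=None)

err = np.max(np.abs(A_mat @ alpha - np.cos(z)))
print(f"max error on [a, b]: {err:.2e}")
```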
So this is something you can study, and there are very few numbers at the end: depending on the error, the count stays small. Here is what you get, for example, for a wave number equal to 13. These are the numbers: the alpha_m, plotted against the rho_m, are given in blue, so the blue points are the quadrature for the cosine approximation in terms of sines, and the red one is the sine part, which needs only one number, k, because a single sphere is enough to discretize the sine part; so the sine is in red while the cosine is in blue. And there are very few blue numbers, maybe 22; they go in steps of two, you see the number 1, then 3, then 5, and so on. For this quadrature formula with roughly 22 points, when you reconstruct, the cosine is given in red and the approximation, the sum of 22 sines, is given in blue: it is bad between 0 and a (we knew that from the beginning), it is probably bad beyond b, but in between, on [a, b], it does a pretty good job. So you write a cosine in terms of sines, and once you have this you can work everything out: you write the cosine as a sum of cardinal sines, you apply to each cardinal sine what happened on the blackboard for the imaginary part of the kernel, and you end up writing the whole kernel as a sum of cardinal sines. That is why we call this the cardinal sine decomposition: the idea is that you expand your kernel in sums of cardinal sines, and once you know how to do that you can work out the method for any kernel. So you write the kernel as a sum of sines, each cardinal sine is worked out as an integral over S squared, and then you discretize everything, including the S squared integration, and you get something discrete which is exactly this: at the end, the convolution with your kernel g is, exactly as before, a sum of things that depend on y multiplied by things that depend on x, and you pass through the Fourier variable xi_p: you go from space to Fourier on one side, you multiply by weights, and you come back to real space by an inverse Fourier transform. We also have an estimate of how many integration points xi_p you need in the 3D Fourier space, given by this number here, and roughly speaking it is very good. If you think a little bit about what we did, this is a discrete version of something fairly obvious: the convolution of g with your data can be written, in Fourier space, by Fourier transforming the data, multiplying by the Fourier transform of the kernel, and coming back by the inverse Fourier transform. We are doing exactly a discretization of this formula.
If you write it out, you take the Fourier transform of the data, you multiply by the weights (the kernel in Fourier space is just a weight, a Fourier multiplier), and you come back by the inverse Fourier transform; and if you compare this with what we did, you see a very clear correspondence between our scheme and the continuous formula, which is pretty obvious. We do exactly a discretization of this formula: go to Fourier space, multiply by the weights, come back; except that we have a way to say which xi_p are the good ones, the good integration points in the 3D Fourier space. So let me finish with what we did. The algorithm is actually very simple. You fix an interval [a, b]; you build the SCSD quadrature, that is, the points xi_p in 3D and the weights omega_p in the 3D Fourier space; you do a type-3 NUFFT to go from space to Fourier space on the data; you weight the result with the weights omega_p, which stand for the Fourier transform of the kernel; and you come back to real space. Then you are done for every interaction with |x - y| between a and b; between 0 and a you have to apply a correction, a fix, because you are definitely wrong for the small interactions. What you do is compute another matrix for the close interactions between 0 and a, and this matrix is sparse: it is only a corrective matrix, handling all the pairs with |x - y| between 0 and a, so it is small and sparse. The whole kernel is then approximated by the far component, computed through the fast matrix-vector product, plus the correction that handles the close interactions. So the operator is a sum of two pieces; everybody does that as well, we are not the only ones.
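Schematically, the far/near splitting just described can be summarized as below; the NUFFTs are written naively here, and the sparse close-interaction matrix C_near is assumed to have been precomputed (for the pairs with |x - y| < a) so that adding it corrects the far-field approximation there. This is an illustrative pseudo-setup, not the authors' MATLAB implementation.

```python
import numpy as np

def scsd_matvec(u, x, y, s, omega, C_near):
    """Approximate convolution of the kernel with densities u at the y points,
    evaluated at the x points.

    s, omega : SCSD quadrature nodes (in Fourier space) and weights
    C_near   : precomputed sparse correction for the close interactions (|x-y| < a)
    """
    # far part: space -> Fourier, multiply by the weights, Fourier -> space
    # (written here as direct sums; in practice these are type-3 NUFFTs)
    f_hat = np.exp(-1j * (s @ y.T)) @ u
    far = np.exp(1j * (x @ s.T)) @ (omega * f_hat)
    # near part: sparse correction
    return far + C_near @ u
```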
So at the end we put that into our MATLAB code, and then we discovered a little later that there is actually a MATLAB interface for the FMM: if you go to Greengard's website, there is a MATLAB function that does the FMM. Too bad, but okay; doing this made us discover a new method, and the FMM gave us the possibility to compare our method against a Fortran FMM routine, which was a good basis for comparison. We downloaded from Greengard's site, where there are many pieces of software, in particular the FMM for Laplace, Helmholtz, Stokes and so on, and also the NUFFT routines in 1D, 2D and 3D; they are all native Fortran, wrapped so that you can call them directly from MATLAB. Besides this, everything is written in MATLAB, and Matthieu also wrote an H-matrix algorithm for all this in native MATLAB. At the end we are able to compute acoustic problems with up to, say, one million degrees of freedom, which really means up to nearly 10^7 integration points on the surface. To show you the performance, here is a somewhat complicated graph with many methods on it. Let me start with the blue curve: it is Greengard's FMM code for the Helmholtz kernel. You call it blindly, there is no preparation, you send your data and you get back the kernel applied to the data, and it scales over a number of degrees of freedom from 10^2 to 10^5; that is, somehow, the FMM reference. Maybe I should also say a word about the light blue curve, which you almost don't see, but you can kind of see it: this curve is what happens when you compute the direct product, which is typically in N squared. That is what you don't want to do: the time for a matrix-vector product scales like N squared, and at some point you are in very bad shape; you are stopped at something like 5,000 degrees of freedom in that case, and since you have about six times more integration points than degrees of freedom, that really means 30,000 points whose interactions you need to compute. So we are blocked here, for the reason I gave at the beginning of the talk. Computing the matrix is rather bad, but once you have it, the matrix-vector product is very fast: the matrix is very big, it fills the whole memory, it is dense, but if you are able to store it the product is very fast, much faster than the FMM actually, for those sizes. And then there is the H-matrix curve.
For those of you who know H-matrices, they are very complicated to construct, but once you have them the matrix-vector product is again very fast, and that is what you see here: the preparation time is quite high, the price to pay is maybe almost two orders of magnitude compared to the FMM, but once you have it the matrix-vector product is one order of magnitude below the FMM, which is very fast. And then there is our method, which scales like the FMM, N log N. You have some preparation time, the building of all the quadrature formulas and all that stuff, the close-interaction matrix and so on, which takes slightly more than one iteration of the FMM; but once you have this, the matrix-vector product is something like four times faster than the classical FMM. So we were very happy, because what you really pay for is the matrix-vector product when you solve the linear system at the end, and that is what we wanted to be very fast. That was for Helmholtz. For Laplace problems you take k equal to zero, and you get roughly the same story, except that we are now a bit worse than the FMM. Here you should understand that you need another routine to do the FMM for Laplace: it is not the same as for Helmholtz, so you have a brand new routine that you need to interface with your code, whereas for the SCSD we use exactly the same routine; to do Laplace we simply take k equal to zero, the same code with the wave number set to zero, and it works. For the FMM you use another code, a faster one actually in that case.
We validated this on acoustic problems, with the so-called Brakhage-Werner formulation for acoustics; that is the diffraction problem, the scattering problem. There are reference results on which you can validate the code: if the object is a sphere, you have a known expression that you can compute, the so-called RCS, the Radar Cross-Section, which describes what happens at infinity, and you can compare what our code, called MyBEM here, gives in blue with the analytic expression in red: the two curves just sit one on top of the other. And it is exactly the same for one million degrees of freedom: a sphere with one million degrees of freedom, the Radar Cross-Section of the sphere, the analytic expression, and our code fits this curve exactly, so the computation is clearly right. Then we put all of this into a MATLAB code, designed at the very beginning to do only BEM, a BEM library, and that was the starting point of our project at this CEMRACS; I will come back to that at the very end. Here you can see typically what happens: it is an interface written in MATLAB, all the graphics are done in MATLAB, we solve everything in MATLAB, and we wrote a bit of GUI to have an interface and solve the scattering problem. That is what we provide, although there is more than this inside the code now: you have a few finite elements, typically P1 for acoustics, then Raviart-Thomas elements on triangles in 3D; we have our SCSD method, the NUFFT MEX files coming from Greengard's site, the fast multipole that also comes from Greengard's site, and the H-matrices, which were entirely written in MATLAB; we can compute the radiation at infinity, on a surface, in a volume or whatever; there is some preconditioning, the Brakhage-Werner regularization, and so on. And this is what you get at the end. Just to give you an example, here is a movie: you see a wave that comes from the top of the head, and if you look at what happens inside the ear, you see that the ear actually amplifies the sound; it is more red and more blue there than everywhere else, so in that case, for a sound coming from the top, the ear works as a resonator, it amplifies the sound. You have to do this experiment for all directions and all frequencies, and then you plug that into a rendering engine to do 3D sound. Here is another case which is known to be somewhat difficult, the case of cavities. This is a cubic box, with a cubic cavity inside and a small window there; I don't know whether you can read this picture, but this is the box, this is the inside of the box, and you have a small window here; you send a wave from the top, you have a big cavity, and you want to understand what happens. And this is the picture at the end: you get resonance modes inside the box that you can compute with this method. Now here is a Maxwell problem.
After that we also did some Maxwell problems with integral equations. This is a known test case called the NASA almond, which was actually proposed by NASA. The NASA almond is discretized with one million degrees of freedom in Raviart-Thomas elements; the polarization is vertical, the wave impinges on the tip here, and this is the solution you get. It took 125 iterations and two hours to compute, and that is a one-million-degrees-of-freedom problem, which is not obvious. And here is a case where we compute a Laplace problem: the magnetic field created by a uniformly magnetized object. I should go fast, but I just want to show you this: a toy galaxy-formation demo. You start from a big star here and a cloud of small bodies everywhere; you use the gravitational potential between the big sun in the middle and all the small ones, and between the small ones themselves. You see a kind of gravitational interaction, I don't know whether it is visible or not, but at the end you start to see planets forming around this big sun. Here you have 10,000 bodies which all interact with each other, 10,000 against 10,000; you solve the dynamics using a Runge-Kutta method, Runge-Kutta 4, and when two bodies get sufficiently close you just merge them. At the end you see this, and it takes 10 minutes with the MATLAB code, so the movie is not real time. So, to conclude, we have a new method which is able to handle fast convolution for many kernels, typically kernels which are radial, which is very important, and typically Laplace, Helmholtz, Maxwell, Stokes; we did it for Stokes, and that is one of the novelties of the library now. We have a kind of object-oriented MATLAB library for doing this, and we have typically validated it up to 10^6 degrees of freedom. Then, from CEMRACS 2016, there are new features: we now have 3D edge finite elements inside, we just tried our first GPU application on this, and we have also developed a kind of experimental FETI domain decomposition method, thanks to Matthieu and Nicole; the 3D Nédélec elements are thanks to Emile here; we also have an integral equation for the Stokes equations, and a complete 2D theory, because everything I told you about is 3D, and if you want to do exactly the same in 2D you can, with some subtleties that were clarified by Martin here. So thank you very much for your attention, and I am ready for any questions. Thank you, François, for this nice talk. Do you have questions? Once you know that you can use the sines for expanding the cosine, suppose you did the least-squares projection to find the coefficients, would you find the same thing? We do that, actually; the difficulty is to find the frequencies, for which we have an exact expression. Maybe I went too fast, I'm sorry, but we do exactly the least-squares approximation for the coefficients.
Let me go back to that. Here, I want to expand this cosine; just simplify the common 1 over z factor. You want to expand the cosine as a sum of sines, and the difficulty is that you have to know the rho_m, but once you know the rho_m the coefficients are indeed given by a least-squares approximation, exactly as you say. So the only difficulty was to find the good rho_m, and what we discovered is very simple, super simple: think of this formula as the Fourier series expansion of something. You want to expand a cosine function, which looks like this, in Fourier series, but as a sum of sines; and for a Fourier series to give you only sines, you need an odd function. So let's make this cosine odd: you take the odd extension of the cosine function. But that is not enough, because to do a Fourier series expansion you also need something periodic, so you look for a window that gives you a periodic function: you go from here and maybe you stop there. In that case you have something which is periodic and odd, and you can expand it in Fourier series, which gives you not only sines but also exactly the rho_m, which are typically proportional to m; that is the sin(nx) you get when you expand any odd function in Fourier series. So the rho_m written here are nothing but the frequencies given by the Fourier series expansion of a cleverly tuned cosine function made odd; that's all. That gives you the rho_m, and once I have them I compute the alpha_m by least squares. One of the problems is that, if you want to study this theoretically, it is not obvious at all to understand the error you make with the least-squares approximation used to compute the alpha_m; this really looks like problems that were studied by Albert Cohen, for example: you have a least-squares approximation but you want to control the error in the L-infinity norm, typically. That is exactly one of the problems we have here, and we looked a lot in the literature for results we could use. But that's what we did. Other questions? All the computations, all the examples are in MATLAB, right? And what do you use for parallelization? Okay, let me explain our path. We wanted to be very fast, so the idea was first to write a BEM code in MATLAB, because that is quick to do, and we ended up with a BEM code in a few months; that was not very difficult. Then passing from this full BEM, where the matrix is completely assembled, to a version with the SCSD took something like one year, to understand all the methods and implement them in MATLAB; that was a bit costly in time, but we were very happy, because at the end we ended up with something new, and that's the good news.
Other questions? All the computations, all the examples, are in MATLAB, right? And what do you use for parallelization? Actually, okay, let me explain our path. We wanted to be very fast, so the idea was to first do a BEM code in MATLAB, because that was quick, and we ended up with a BEM code in a few months; that was not very difficult. Then passing from this full BEM, where the matrix is completely computed, to the SCSD version took something like one year, to understand all the methods and implement them in MATLAB, so that was a bit costly in time, but we were very happy because at the end we ended up with something which is new, and that's the good news. But then we were still in MATLAB, and we imagined at the beginning that we would be stuck with MATLAB, that at some point MATLAB would be too inefficient to do the computation, and actually we discovered that's not true: we could go very far by sticking with MATLAB. We also asked Emil, for example, to compare with Python, so we have the same toolbox in Python just for the sake of comparison, and what happens is that Python and MATLAB behave very similarly: they are both very fast, and I know a few native Fortran codes, for example, that are way slower than what we showed here. So we were quite happy with this, and then the next question is how far can I push the MATLAB method to go to bigger and bigger problems. The idea, and the proposal of what we tried to do during CEMRACS, was to study the parallelization of this: for example, is MATLAB able to handle parallelization? The answer, for the time being (I don't know the full answer), is that MATLAB handles parallelization on one node very easily. Typically this is done through instructions called spmd or parfor loops, and this is very easy. So we did that; we did only this for the time being. The next step is what happens on clusters, could we handle clusters? That's a question mark for the time being. Or could we handle GPUs? That's the beginning of our work during CEMRACS: trying to use a GPU in MATLAB and seeing whether we get some improvement. I have to say that I'm very happy with how easy it is to use parallelism in MATLAB. I cannot say much about efficiency, but it's very easy, and at the end you get some very nice improvement compared to the sequential code. I'm pretty sure that writing exactly the same code in C or Fortran, for example, would be faster, but at the price of maybe a few years of development, while we did this in a few months. So I think, at the level where we are and for what we are interested in, it is not worth it. But maybe this is because we know how to write MATLAB code; we are really MATLAB people somehow. So that's the idea. Okay, thank you. Was that a question? Yes. So in terms of parallelization, what scaling do you expect for fast multipole, for H-matrices, and for your approach? Okay, so in our approach, what I think: I know that fast multipole has been parallelized by many people, but fast multipole is already very difficult to write, so you need experts on fast multipole to do that. I know that fast multipole is not only parallelized with OpenMP, for example, at the level of one node, but also with MPI when you want to use it on many nodes, and that works; I know that some people won the Gordon Bell Prize maybe five years ago on a typical application like this. So this works, but it works if you have a team that can develop it. For H-matrices, this is actually a very good surprise, because Matthieu discovered that they are very parallelizable, very much so, and that's very good news. The only problem with H-matrices is that they are a bit slow to compute, a bit slow to construct. Not a bit slow, they are really slow: you lose a lot of time to compute H-matrices.
And for our method instead, we have a problem, because our method actually needs an FFT at some point, and I'm not sure the FFT parallelizes very well: the FFT in shared memory is fine, but in memory which is not shared, which is distributed, I'm not sure it is very efficient. So I would try to stick to the NUFFT or the FFT inside one node for the parallelization, and then cut the problem. Our idea for the time being (I don't know, maybe in one year from now I will have changed) is to split the problem with domain decomposition methods, and that's where Nicole is involved: do some SCSD inside each subdomain, because they are smaller and can fit into one node only, and then what happens between nodes should be handled by something like an H-matrix version, something suited to a distributed memory architecture. So that's the idea, but okay, I wouldn't say that this is the real truth; that's my feeling, and we try to push this kind of idea. Was that a question? So thank you, Hansa. Have a great ride.
When solving wave scattering problems with the Boundary Element Method (BEM), one usually faces the problem of storing a dense matrix whose size is proportional to the square of the number N of unknowns on the boundary of the scattering object. Several methods, among which the Fast Multipole Method (FMM) and H-matrices are celebrated, were developed to circumvent this obstruction. In both cases an approximation of the matrix is obtained with O(N log N) storage, and the matrix-vector product has the same complexity. This makes it possible to solve the problem by replacing the direct solver with an iterative method. The aim of the talk is to present an alternative method based on an accurate version of Fourier-based convolution. Built on the non-uniform FFT, the method, called the Sparse Cardinal Sine Decomposition (SCSD), ends up having the same complexity as the FMM with much less implementation complexity. We show in practice how the method works, and give applications in domains as different as the Laplace, Helmholtz, Maxwell and Stokes equations. This is a joint work with Matthieu Aussal.
10.5446/57308 (DOI)
So, thank you very much for the introduction, Tom, and with that, for time, I'll dive into things. In this presentation I'll talk about mapping soil organic carbon in soil profiles using imaging spectroscopy. This is work I completed as part of my PhD and published in Geoderma; I'll present the citation at the end. I'm going to go over the objectives of the project and the methods we used, and I'm going to spend some time on a signal processing technique I used in this project, wavelet transforms, and give a little more background about them, because I think that, particularly for the noisier data we get with imaging spectroscopy, they have a lot of potential value and are worth exploring as part of the soil spectroscopy toolkit. I'll then present results and conclusions. The overall objective of this study was to map soil organic carbon throughout the soil profile and to investigate rotational effects on soil organic carbon distributions. One thing I had been thinking about when putting this project together is that there are a lot of great papers on method development with soil spectroscopy, and I think there's still a need for that, but my personal hypothesis was that we're getting to the point where spectroscopy can help us answer research questions in a way we couldn't with more conventional tools, particularly given the resolution of data we can get. To give you a little idea about the site before I jump into the methods: the site is located in the province of Alberta, in western Canada, on long-term research plots that were established in the 1920s to develop farming practices for these types of soils in western Canada. This particular plot looked at incorporating forages into the mix, which was done in the 1970s, so these plots have had this treatment for about 40 years. The soils are, in the Canadian system, Orthic Gray Luvisols; I don't expect many of you to be familiar with the Canadian system. They translate roughly as Boralfs in the USDA system (my apologies, I'm not as familiar with the USDA system) and Albic Luvisols in the World Reference Base. Their key feature is clay translocation out of the A horizon. The rotations I wanted to investigate were: an agroecological rotation, which is really a forage-legume-grain mix; a continuous forage rotation; continuous grain; and the traditional, though not really done much anymore, wheat-fallow mix of the prairie. So we have the same soil but different rotational effects, which I was hoping would give us a nice picture of different carbon dynamics throughout the profile. In terms of collecting the data, I have a picture of some example soil cores; these are the cylinders that we were imaging. The spectra were collected with a SisuROCK hyperspectral imaging system, produced by the company Specim in Finland. It actually has two cameras, a visible light and a shortwave infrared one, but for this study I focused on the shortwave infrared data.
It collects data from 1000 to 2500 nanometers in 256 spectral bands, and, the really exciting part for me, with this setup we were able to get a 0.2 millimeter spatial resolution for the pixels, so we can really dive into the variation at fine spatial scales in the sample. The reference data for calibration were all obtained by dry combustion using a Costech elemental analyzer. While this is kind of specific, let me mention how we built the calibration model: I tagged where we were going to take our samples in each image, and once we had collected the image we took a slice lined up to that point, homogenized the entire slice, and then, for building calibration models, took the average reflectance spectrum for the region the sample came from, because that is one of the challenges, how you calibrate your image data to your lab samples. One of the challenges with imaging spectroscopy, if you're used to working with, say, ASD equipment in the vis-near-infrared or with MIR data, is that the spectra can be quite a bit noisier than what you might be used to seeing. This, for example, is a single-pixel spectrum, and this is the spectrum after averaging: a lot of this fine-scale noise in a single spectrum doesn't really survive when you average. So thinking about how we deal with the noise in the spectra is a really important issue, at least with this system, but I think it's in the nature of imaging spectroscopy that you're more likely to have noisier spectra. You can see a lot of this fine-scale variance, which creates challenges for things such as taking derivatives, because derivatives will amplify the noise. One of the techniques I used in this study, and hopefully this is one of the takeaways, is a process called wavelet transforms of the signal. This is a signal processing tool that has been around for a long time in the digital signal processing world. What it does is let you transform a single spectrum into a number of coefficients, known as wavelet coefficients, and each of these lets you capture variance in the spectrum at a different scale. One advantage is that the shapes and magnitudes of features are preserved under wavelet analysis, so you can use the results for modeling, unlike, say, continuum removal, which is really valuable for visual spectral analysis but won't necessarily give you consistency depending on the baseline variance. So what it lets you do, and this is just an example figure, I'll show you some real spectra, is break down variance at different scales. There is a range of different types, but the two starting places to explore are what are called continuous wavelet transforms and discrete wavelet transforms. Continuous wavelet transforms preserve a value at every one of your wavelengths, whereas discrete wavelet transforms reduce the number of data dimensions, because there is redundancy in the data. The advantages of the continuous wavelet transform are that it is more easily compared to your original spectrum and you can more easily sum different scales together to use multiple wavelet scales.
The discrete wavelet transforms are a little harder to relate back to the original spectrum, to make comparisons and to explain, okay, we have this feature that was important, so it's probably this covalent bond driving the signal. And there is a range of what are called mother wavelets, different wavelet functions that are fit to the data, so if you want to explore this there's a whole range of them. The second-order Gaussian wavelet is the one I stick to with spectral data, because a lot of individual spectral features can be represented by Gaussian or quasi-Gaussian functions. I want to stop and point out that a lot of these ideas around wavelets draw on the work of Benoit Rivard's group at the University of Alberta (he was on my PhD committee), and this paper from 2008, if you're interested, is a good theoretical, bigger-picture discussion of wavelets, how to apply them to spectroscopy, and some of the concepts. My theory on why some of this hasn't had much attention is that, in the literature, the machine learning models have had a lot of the focus, and I think rightfully so; I think they give a bigger benefit for model performance. However, I still think these tools have a role, particularly in field spectroscopy and imaging spectroscopy, where you have a lot of variance in the conditions you're scanning under. So let's look at some examples. Take this example of a shortwave spectrum from the study: these absorption features relate to water at 1400 and 1900 nanometers. You're breaking the spectrum down into different scales, all powers of two: at the first scale you're really looking at variance across a two-bandwidth scale, at scale two across four, at scale three across eight, and so on, so you get smoother and smoother features because they correspond to wider and wider aspects of the spectrum. We can often think of scale one as representing a lot of the noise in the spectrum, so this lets us denoise the spectrum by removing it. And as we move, again this is the original spectrum, to higher scales, you start to get the broader features, so that by the time you reach the higher scales such as six, seven and eight, we're really looking at things relating to the baseline: overall illumination, different scattering effects from particle size, a lot of those aspects. So wavelet analysis of the spectra lets us both remove the noise and remove, or at least reduce, some of those non-compositional effects in the spectra that we are not trying to model directly. That was the goal of the wavelet transform: remove the non-compositional effects and the noise. To give you an idea, in this study the optimum, testing through cross-validation on the training data set only, was the sum of the second, third and fourth order wavelet scales. So we take this example spectrum here, unprocessed, and transform it into these wavelet coefficient values, removing some of that fine-scale noise that we can't really see on this plot.
But if we were to zoom in you'd see it, as well as the removal of some of the underlying structure in the data. Just to mention, if you want to explore these tools: in R there are the WMTSA and wavelets packages, and in Python it's PyWavelets (pywt). To close my discussion of wavelet transforms, I mentioned discrete wavelet transforms, which reduce the dimensionality of the data. For example, where the continuous wavelet transform at the third-order scale retains a value for each of our bands, the discrete wavelet transform exploits the fact that a lot of the curve fitting leads to data points that aren't necessary any more, because they follow from the particular mathematical function, so the dimensionality can be reduced. In this case, with 256 original bands and scale three corresponding to a width of eight, you're able to reduce from 256 to 32 values, an eight-fold reduction in the dimensions of the data, while preserving things such as this feature here relating to water and this feature here relating to water. The nature of the wavelet analysis does lead to these kinds of peaks appearing, but you are retaining the absorptions. If I may interrupt you: there are many mathematical methods to transform the data, like principal components, or just using derivatives. What's the advantage of wavelets? Why not do principal components? So you could, for example, take the continuous wavelet transform and then do principal components on it, so you could use the continuous transform and do a data dimensionality reduction that way. The reason I'd use this approach over other processing tools, such as taking a derivative, is that the derivative is going to increase the noise in the data, and while you can then apply other signal processing tools such as Savitzky-Golay smoothing, you again have window size choices there. I have found, at least with this type of data, that the combination of noise reduction and baseline removal with the higher-order wavelets gives better results than derivatives plus Savitzky-Golay smoothing. I haven't actually tested the discrete wavelets as a dimensionality reduction tool compared to principal component analysis or partial least squares regression; I think that would be worth exploring in the literature, someone examining the use of these tools for dimensionality reduction.
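Since PyWavelets was just mentioned, here is a minimal sketch (not the study's actual code, which is linked from the author's GitHub) of the kind of continuous wavelet processing described above: a second-order Gaussian mother wavelet at dyadic scales, keeping the sum of the second, third and fourth order scales. The synthetic spectrum is a stand-in for a real shortwave-infrared reflectance spectrum.

import numpy as np
import pywt

# Stand-in spectrum: 256 bands with two broad absorption features (~1400 and ~1900 nm),
# a sloping baseline and some fine-scale noise.
bands = np.linspace(1000, 2500, 256)
rng = np.random.default_rng(0)
spectrum = (0.40 - 0.00005 * (bands - 1000)
            - 0.08 * np.exp(-((bands - 1400) / 30) ** 2)
            - 0.10 * np.exp(-((bands - 1900) / 40) ** 2)
            + 0.005 * rng.normal(size=bands.size))

# Continuous wavelet transform with a 2nd-order Gaussian mother wavelet
# at dyadic scales 2, 4, 8, ..., 256 (order j corresponds to width 2**j).
orders = np.arange(1, 9)
coeffs, _ = pywt.cwt(spectrum, 2.0 ** orders, "gaus2")

# Keep the sum of the 2nd, 3rd and 4th order scales: this drops the order-1 noise
# and the high-order baseline/scattering variation, as described in the talk.
processed = coeffs[1] + coeffs[2] + coeffs[3]
print(processed.shape)   # one value per band, ready for a regression model

The processed spectra (one row per pixel or per calibration sample) would then feed whatever regression model is used; the study used a Bayesian regularized neural net, which has no single standard Python implementation, so any scikit-learn regressor could stand in for experimentation.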
So, moving on from that, and I'm happy to answer more questions later, but just for time: in terms of the predictive model, in this study I tested a range of different model types, and what I want to point out is that the Bayesian regularized neural nets gave the best results for this data set. I think many people on the call will agree that the optimal model varies by data set; sometimes I've had support vector machines win, sometimes Cubist models. The thing I wanted to point out about the Bayesian regularized neural nets is that they are two-layer neural nets that use the Nguyen-Widrow algorithm to assign initial weights and then a Gauss-Newton algorithm for optimization. I found them a lot easier to optimize and fit to data, and had a lot more success with them than with some other types of neural nets, particularly with smaller data sets, so if you want to start exploring different neural nets it's something I'd recommend playing around with. Prior to doing any predictions I applied a three-by-three median focal filter to the hyperspectral image, to reduce some of that noise I showed in the initial spectra. One of the big advantages of collecting whole cores is that we could then do comparisons such as Moran's I, to look at the spatial aggregation of carbon separately from physical aggregates, and also use tools such as spatial generalized linear models or least-squares models to look for treatment effects and really tease out where exactly in the profile the treatment effects are happening. Just for time I'm going to spend more time going through the actual imagery we were able to generate, but overall I was quite happy with the fit of our carbon and nitrogen models, with an R-squared of 0.94 for carbon and 0.88 for total nitrogen; these are independent validation results, since we split the data into a training and a test set and did all of the optimization on the training data only. We also predicted clay content, which had an R-squared of about 0.8, not quite as good. My thoughts there, and I'd have to explore it more, are that some of the carbon features dampening the clay features might have been a factor, and that there was simply more error in our clay training lab data than in our carbon data; clay content calibration methods are always a bit of a challenge. Looking at the results, we were able to pull out exactly where the horizon boundaries occur, based on the carbon content and also the clay content, in this example profile. It's difficult to see at this scale, but if we zoom in and change the scale you can see that we have a lot of fine-scale variation in soil carbon contents and a lot of aggregation of soil carbon; it's by no means homogeneous, and that's one of the things we can really pull out with this method. Same thing with clay content: these soils are characterized by a loss of clay from the A horizon into the B horizon, and we can see that there is some variability with the aggregates. The other really valuable part of imaging spectroscopy for this project is that we were able to measure precisely at what depth we start to see the treatment effect differences: the continuous forage rotation led to increased carbon down to about five centimeters, and the forage-grain mix had increased carbon down to about 11 centimeters, and we could see exactly where that was occurring because of the fine spatial resolution of the data. This was tied to increases in the carbon-to-nitrogen ratio, and we were also able to show that the carbon wasn't uniformly distributed: it exhibited spatial aggregation, and was increasingly aggregated in the two forage-mix treatments. One other finding is that we didn't see much difference in carbon contents at depth between these treatments, though I wouldn't conclude that this is what forages mean everywhere: part of this is probably because that high-clay B horizon restricted carbon additions deeper in the profile, which could be very different in a Chernozemic or Mollisol soil compared to these Luvisolic or Boralf soils.
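As a side note on the post-processing step mentioned above, a three-by-three median focal filter on a per-pixel prediction map is a one-line operation; here is a minimal sketch with SciPy, where the array is a hypothetical map of predicted carbon values rather than the study's data.

import numpy as np
from scipy.ndimage import median_filter

# Hypothetical per-pixel soil organic carbon predictions (percent), e.g. 500 x 120 pixels.
predicted_soc = np.random.default_rng(1).uniform(0.2, 7.0, size=(500, 120))

# 3x3 median focal filter to suppress isolated noisy pixels before interpretation.
smoothed_soc = median_filter(predicted_soc, size=3)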
Being cognizant of time: overall, from this study and from other work I've done, I've found wavelet transforms to be a valuable tool when you have data sets with noise or non-compositional effects that you want to consistently remove and standardize. The other thing, as a wider community, that I've been thinking is that reflectance spectroscopy and imaging spectroscopy are really starting to mature into tools we can use to help answer research questions, and we should be thinking about them that way: how can we use this higher resolution of data to better understand soil on landscapes or within the profile? I think these imaging spectroscopy tools will let us understand dynamics in soil at very fine spatial scales in a way you can't if you have to piece samples up to get discrete measurements. I have made the code from this project available on GitHub, and I've also made code available for spectral pre-processing scripts using wavelets that are better annotated and cleaner than some of the other code, so if you want to explore these tools you can go there. I've also included the citation for this study if you're interested in reading it and getting more details. So with that, I'm happy to answer any questions, and thank you very much for the opportunity to present here. Thank you. Keep the slides on, because the questions will be on the slides, so just keep them up. Yeah, Preston, thanks so much for a fantastic talk. I'll give the audience a moment; I know the wavelet stuff was maybe a bit of a whirlwind, and there's a whole world of detail to go into there. Okay, while the audience comes up with some questions, I'll ask you a quick one: how transferable do you think the model is that you built from this one-time sample collection? Do you think you could go in a different season, with different moisture levels, and apply the models, or would you have to do some recalibration? So, I will say that for this study, and I didn't mention it, I air-dried all the cores before imaging them, so I did remove that moisture effect. It wouldn't be transferable to field-moist samples. I think the wavelets help a bit with moisture, through the overall baseline removal from the water, but I suspect the water effects that dampen the individual features remain, and there's no fix for that, so maybe some of those moisture correction tools, like external parameter orthogonalization, might be something to explore in conjunction, to see if that helps normalize things. But the goal of this study was very much to generate better mapping of the core. So no, I think there are still questions to be thought about for transferability, and water content is the bane for all of us, right, for field spectroscopy. Can I also ask a question, John, about matching the spatial scale? You said the image is very high resolution, like 0.2 millimeters, right?
And so how do you then match the reflectance of that 0.2 millimeter pixel with the calibrated laboratory value of soil organic carbon? How do you match that? Yeah, so in that case I was matching based on an average spectrum from the region we took the sample from. There are questions you could ask about how well individual pixels match an average spectrum, because we see more noise in single-pixel spectra. My solution to that was twofold: to use a median focal filter, so we do lose some spatial resolution by focal filtering, but it reduces some of the noise, and then, yeah, it's a legitimate question that I don't fully have an answer to. Because the calibration plots show actual laboratory estimates of soil organic carbon, and the laboratory needs, what, 100 or 200 grams, right? And those 100 grams will occupy at least, you know, five by five centimeters. So for sure there's a mismatch between the laboratory sample and the image's spatial resolution. There is, yeah, and my spectra for calibration are an average corresponding to the same spatial scale as the laboratory sample, but it's a good question; I don't think there's a clean answer. I mean, potentially, if you could do laboratory analysis of micro-samples, like one by one centimeter, and then correlate those exact pixels, you could get much better accuracy, right? You probably could, yeah, I think that would improve things for sure. And I have one more question, if I may, if nobody else is asking. You did only nitrogen and carbon, right? I did clay content as well, I just didn't focus on it. And is that because you couldn't get the other laboratory analyses? Because you could have also done some nutrients, micronutrients, I don't know, and that would also be very interesting. Yeah, and there have been some different historical treatments. My thought, and it's the whole debate, is that some of the macronutrient patterns would largely have been carbon, because the macronutrients would have been largely associated with the clay and carbon; that's my theory of what we would have been mapping anyway. And just to give us an idea, how much does one scan cost? You have to dry it, right? You take the... Yeah, we air-dried it, and then the marginal cost of the scans is really minuscule. I'm not sure what Specim is charging now; I think it was maybe 250, or 100,000-ish, I'm guessing, for the whole setup. I know Dr. Rivard was involved early on in helping Specim get the early designs done, so I don't know what they're charging for the system now, but to scan a core takes, I don't know, five minutes, or maybe one or two minutes once it's in. Okay, there are a couple of questions in the chat now, in the Q&A. The first one: what are your thoughts on using hyperspectral remote sensing for mapping of soil organic carbon? I mean, I definitely think there's great potential. There's, one, getting the calibration set up and the variance in moisture content that we'll have to figure out, and then making sure we have bare soil; well, if we're trying to directly sense it we only see the surface, and then there are questions about how you translate the top millimeter to a depth function, so there are those challenges.
If we're using it as a covariate, compared to multispectral, I would probably need to defer to some of the vegetation remote sensing experts about how useful hyperspectral remote sensing is for pulling out things like nitrogen dynamics in plants, and whether we can correlate that or not, but I think for getting better spatial representation it's definitely worth exploring. There are some challenges, but that's the fun part, right? I see the next question, about tropical soils: the overall baseline of the spectra has a strong correlation with soil texture, not only the absorption features, so removing the higher-order wavelets may have impacted the prediction capacity for clay. Yeah, I don't have much experience with tropical soils, so I'll have to defer to Jose's expertise there. It could be something I didn't think of, and I think it might be worth circling back to. Some of our clays and the dynamics in the soils here are very different from tropical soils; in the soils I'm talking about here, carbon content and water content tend to be the two things that really drive overall baseline reflectance, but there are circumstances where certain clay types are darker, so I think you're probably right, and that's a good point to consider. Maybe I can also mention that the European Space Agency is starting next year with CHIME, a hyperspectral satellite. It will be public data, the same as Sentinel, so we'll basically have hyperspectral imaging from the sky, and we could then eventually have hyperspectral both from the sky and on the ground and do a fusion: really match exactly the same wavelengths and then try to correlate the wavelengths we see from the satellite with what we also detect in situ, let's say. This is not a question, I'm just introducing it; things are changing rapidly. Yeah, and I'll say, for bare soil sensing, whether multispectral or hyperspectral, and I'd be interested in other people's perspective from other regions, the challenge we have in the northern Great Plains in North America is that we have very little bare soil now, because conservation tillage is pretty extensive, so we almost always have crop residues in the mix. Maybe there are spectral unmixing techniques, I'm not sure, that will be valuable, but I'm not sure; in other parts of the world, right, it's a different story in terms of bare soil surface frequency. With this instrument, have you tried it with organic soils, with, say, 30-40 percent organic matter? It would be interesting to know if we're going to have issues with saturation. I'm not sure if anybody here wants to chime in who has experience with, let's say, ASD equipment or point spectroscopy on organic soils; I wonder what the signal-to-noise is going to look like, but that's a hypothesis, I've never done it.
Yeah, that's a good point. My experience in the NIR range is that these soils start absorbing almost all the light, so you start getting a lot of noise with really dark peaty soils, but it would be interesting, and I think it depends a lot on your optical configuration: with the light source and the detectors being pretty far removed from the core you might actually do all right. It would be cool to test, especially given the extent of peatlands in Canada, a good country to be pushing that work. Yeah, and I'd say, as much as I think the wavelets are a useful tool for denoising, if there's something in your spectra that's dampening the absorption features, they don't really help, because you still have a weaker signal; I'm just thinking of when it's really wet, or really high carbon, and that's masking the other absorption features. I want to say I like the imaging of the whole core; the scan reminds me of all the problems in soil taxonomy where they want to come back, because once you make an image you can save it forever, right, and sometimes they want to come back to the soil profile descriptions and really see the transitions and reclassify. Yeah, I think if we could scan basically every soil like this and then keep it, basically forever, so that people can come back to it, that would be really major progress for soil science. You know, it wasn't the focus of this study, but I think some work with the horizon boundaries would be really interesting to do, because we have a lot more irregular horizon boundaries, particularly as this was a sample that had forages for the last 40 years. If you have your plow layer, right, you tend to have a lot of those stark boundaries, but I think with the irregular boundaries in less disturbed soils you'd be able to examine a lot of interesting dynamics. And these images, might they be available to the public? I can make them available; well, maybe I'll have to chat with Tom about how we can make this available, as I'm new to figuring out some of the details of making data available. I think that would be a great contribution, and we will dedicate an article to it on our Soil Spectroscopy website, and we would open it up for users to test different types of analysis and modeling. So how many scans do you have, by the way? I had to double-check how many cores it was; I think it was about 20 cores. Okay, and there's one question from Andrew Grant about the lab method: it was done by dry combustion with a Costech elemental analyzer. Preston, I had a question about scaling this up, thinking about carbon market monitoring applications: where do you see this as being a really viable tool for monitoring carbon change, whether at field or project level? I don't know if imaging spectroscopy would necessarily be the right approach for that; you'd really have to do a cost-benefit analysis of what the extra spatial resolution gets you versus point spectroscopy, and point spectroscopy is going to be more convenient to deploy in the field, I think.
I'm personally in the camp that thinks, as a community, we absolutely need to get there, and I'm not saying we're there yet, but spectroscopy is one of the tools for carbon verification, because quantitative validation of carbon data is going to be essential to make sure it's real, and dry combustion with all the labor involved is going to be quite expensive. So I think there's definitely potential, but some of those dynamics with water content and variability are going to be the key thing. Okay, there's a question in the Q&A from Andrew Silla: it looked like there's a lot of noise at the low end of the carbon range, and the model performance was probably driven by the fit across that 7% gradient in carbon. Yeah, I think that's a fair comment. I think you're right that the performance started to break down at the lower contents; I'm not confident we can reliably predict below 0.2 or 0.3% with this particular data set and study. I don't know if anyone's working on really trying to nail down low-carbon measurement error, what the detection limits are. It's a question I've often had, but yeah, that's a fair point. Yeah, I think in general, if you look across spectroscopy results, when you have a very small range in your analyte you invariably don't get good predictions, and that's a big problem if you're really trying to intensively study a region that doesn't have a lot of variation in that property: you're pretty much just predicting the mean value, and that's not very useful. Yeah, and so my view on how to fairly interpret these plots is that we are definitely pulling out where we have higher carbon in the topsoil, and we're able to pull out pockets of higher carbon in the subsoil, but some of this variability could just be that error, I'm not sure. To say, oh yeah, this is 0.2% carbon here and this is 0.3% carbon there, I think it's fair to say we probably can't be confident we can detect those kinds of differences. But we can probably pick out nodules of higher carbon in the subsoil, which I think we can be confident do exist. So a practical question, Preston: for the cores, you need a mechanical instrument to take the cores, right? That's expensive, that's the cumbersome part, really, because you cannot do it manually. Could you take, say, one-meter cores or something? Yeah, the thing is, I think you'd have to circle back and look at the economics, because you're right, it takes the instrument, but it's much faster. If you were only doing a surface 10 centimeter sample with a shovel, that's fast, but with the right kind of setup, coring can be much, much faster: in the right soils you can get really good cores in less than 10 minutes and be on to your next core. The Canadian prairie is great for coring. Oh yeah, I should say I'm biased: this was all glacial till sediment, I think lacustrine material draped over till, so there aren't even stones to worry about. So we're kind of ideal for pushing these cores in and getting nice recovery; we occasionally hit a sandier deposit and get poor recovery, but yeah, that's not the same everywhere.
So yeah, definitely, I would love to see this level of data, this resolution of data, coming out of basically every long-term agronomic trial, to really understand the impact of the different rooting systems and so on on carbon and other properties. Yeah, I see there's one comment here that some machine learning models do show problems with extrapolation: when you calibrate on the average over multiple pixels, the information in hot spots might be diluted, so you may have problems finding them in the prediction map; it's important to reduce the area of reference sampling, and therefore the number of averaged pixels going into the model, as much as possible, or to use a different approach. Yeah, I think that's a really good point, and that's something, coming out of this talk and since I've done this study, that I think about: if you're building calibration models for this kind of imaging spectroscopy, how can you make your calibration samples smaller? Those are all super valid improvements that could be made in any follow-up. Depending on your elemental analyzer, some of the micro elemental analyzers only need a few milligrams of sample, so you could get a pretty good, well, you can't sample at 0.2 millimeters, but you could cut out a one-centimeter square and easily have a good sample for analysis, and I think that would be a better way to do it for any future applications. Great. Well, I think we've gotten through some really good questions and had a good discussion, so I'll thank everyone for tuning in, and really great thanks to Preston for giving a really interesting talk. And if anyone wants to explore this stuff, I'm happy to share code and to help with anything. If you're willing to share the images, register them and get a DOI, and we will be happy to post them on our website and write a blog post. Okay, I'll follow up with you, Tom, about that. Yes, that would be fantastic. Thank you so much, Preston. Yeah, thank you. Thank you again for the opportunity to present here.
Imaging spectroscopy has the potential to enable measurement of soil properties in intact soil profiles at spatial scales previously not possible. There are unique challenges associated with imaging spectroscopy compared to point spectroscopy, particularly signal noise and the influence of non-compositional effects on the spectra. One way to manage these effects is by using wavelet transforms, a signal processing technique. Combining wavelet-transform-processed spectra with machine learning techniques can improve predictions of soil organic carbon throughout the soil profile with imaging spectroscopy. In one study, intact soil cores were analyzed using a SisuROCK automated hyperspectral imaging system in a laboratory setting, collecting shortwave infrared reflectance data. Predictive models were then built for soil organic carbon using a combination of wavelet analysis and Bayesian Regularized Neural Nets. The combination of wavelets, machine learning and imaging spectroscopy enabled mapping of soil organic carbon throughout the profile, and identification of the magnitude and depths at which rotational treatments were having an effect. This webinar is based on the following publication: Sorenson, P. T., Quideau, S. A., Rivard, B., & Dyck, M. (2020). Distribution mapping of soil profile carbon and nitrogen with laboratory imaging spectroscopy. Geoderma, 359, 113982. https://doi.org/10.1016/j.geoderma.2019.113982
10.5446/57338 (DOI)
So it's a great pleasure to be here, and well, it's really nice to see you all here in person, and thanks to the online people who join us virtually from all over the world. What I want to talk about today are decomposition results in rational dynamics, and not only that: I want to talk about some connections to other fields, including geometric group theory, self-similar groups and mapping class groups. I want to point out that I'm not the first one, not at all, to talk about building these connections; just at this conference we heard John, Volodya, Laurent, Bernhard, Dylan, Caroline, Insung, Beka and Mario, and we will hear Sabya, speaking about such connections. But my talk will be about decomposition ideas in rational dynamics. Let me start with some general observations. What are decomposition ideas? If you study some complex object, it is really natural to try to decompose it into several pieces of particular types, then try to understand each piece separately, and then understand how the pieces glue together. This idea has worked really well in settings where topology implies geometry, and it resulted in several important, fundamental results in the geometry of three-manifolds, the theory of surface automorphisms and, as we all know, the dynamics of rational maps. Before I start talking about Thurston's decomposition for rational maps, let me begin by discussing the geometrization of surface automorphisms. Suppose we have a closed oriented surface with a finite set of marked points; then the Nielsen-Thurston decomposition tells you that any automorphism f can be canonically decomposed, by some invariant multicurve, into pieces of two types: either periodic, or pieces that admit a pseudo-Anosov structure. There are several things on the slides that deserve to be defined: how do we actually decompose a surface automorphism using an invariant multicurve, and what are the types in the decomposition? Let me start by formalizing the types in the decomposition. If I have an automorphism of a surface, I say that it is periodic if some iterate is isotopic to the identity. For instance, you could imagine rotating this surface of genus three, and this gives an example of a periodic automorphism. The other type are the so-called maps that admit a pseudo-Anosov structure. What is that? We suppose that we may find two transverse foliations on the surface so that my automorphism f stretches the leaves of one foliation by some factor lambda and contracts the leaves of the other foliation by lambda. Locally, the picture should look like this: I have some rectangle, and this rectangle is stretched in one direction by the factor lambda and contracted in the other direction by the factor lambda. This pseudo-Anosov structure is locally modeled on the Anosov structure for tori: if I have some hyperbolic element of SL(2, Z), I may find the corresponding eigenvectors and eigenvalues, so I will have two eigenvectors v1 and v2, and my matrix action will stretch one eigenvector by a factor lambda and contract the other eigenvector by lambda.
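As a concrete illustration of the linear model just described (this particular matrix is my choice, not necessarily the one on the speaker's slide), take the hyperbolic element

\[
A=\begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}\in SL(2,\mathbb{Z}),\qquad
\lambda_{\pm}=\frac{3\pm\sqrt{5}}{2},\qquad \lambda_{+}\lambda_{-}=1 .
\]

The eigenvector \(v_1\) for \(\lambda_{+}\) is stretched by \(\lambda=\lambda_{+}>1\), while the eigenvector \(v_2\) for \(\lambda_{-}=1/\lambda_{+}\) is contracted by the same factor; projecting to the torus \(\mathbb{R}^2/\mathbb{Z}^2\) produces the two transverse invariant foliations mentioned next.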
So what I can do afterwards is project this map to the torus, and, as we will see, if I look at the lines parallel to the vector v1, they project to a foliation of the torus and are stretched by the factor lambda, and the red lines parallel to the other eigenvector project to another, transverse foliation and are contracted by lambda. So those are the two types of maps in our decomposition. Now, how do we actually decompose our surface into pieces? We decompose it using multicurves. If we have a simple closed curve gamma in the complement of the marked points, we say that it is non-essential if it bounds a disk with at most one marked point. In particular, this green curve here is non-essential, and this green curve here is non-essential because it bounds a disk with only one marked point; however, these two red curves are essential. What is a multicurve? A multicurve is a finite family of essential curves that are pairwise disjoint and pairwise non-homotopic relative to the marked set. In particular, these two curves form such a multicurve. Now let's add dynamics. Suppose I have some automorphism, and suppose that this automorphism permutes these two curves up to isotopy; this gives me a way to cut my surface into pieces. By definition, a multicurve is called invariant if the image of every curve is isotopic to some curve from the multicurve. If I have such a situation, I can take scissors and cut my surface into pieces along the invariant multicurve. In this particular case the surface is cut into three pieces: one of them is a surface of genus two with two marked points corresponding to my curves, and I also get two tori with two marked points each, one marked point which, so to say, survived, and the other corresponding to the boundary curve along which I cut. Now, if I look at this homeomorphism phi, what can I say? It must permute the pieces of this decomposition up to isotopy. In particular, this homeomorphism phi would, up to isotopy, map this white piece to itself, and it would permute these two pieces. So what can I do? For each small piece I can consider the first return map, which is a new surface automorphism defined up to isotopy. What does the Nielsen-Thurston theorem now say? That for any automorphism you can find some canonical invariant multicurve that decomposes this automorphism into maps of two types: either periodic, or ones that admit a pseudo-Anosov structure. So that's the story of the Nielsen-Thurston characterization of surface automorphisms. Now let's talk about rational dynamics. In my talk I'm going to try to be rational, so I'm going to restrict the discussion to post-critically finite branched covers of the sphere. Recall that the post-critical set consists of the forward iterates of the critical points, I'm going to denote it by P_f, and the map is called post-critically finite if this set is finite, meaning that each critical point has a finite orbit. What Thurston was trying to understand is the following question: when is such a branched cover realized by some rational map? And here, similarly to the world of mapping class groups, we say that something is realized by a nice geometric structure if we have conjugacy up to isotopy.
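For reference, the standard way to write this notion of conjugacy up to isotopy (Thurston equivalence) is the following; this is the usual textbook formulation rather than a quote from the slides:

\[
f \simeq g \iff \exists\ \text{orientation-preserving homeomorphisms } \phi_0,\phi_1:(S^2,P_f)\to(S^2,P_g)
\ \text{with}\ \phi_0\circ f = g\circ\phi_1 ,
\]
\[
\text{and } \phi_0 \ \text{isotopic to}\ \phi_1 \ \text{relative to } P_f .
\]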
So we really want f and g to be conjugate up to isotopy relative to the corresponding post-critical sets. And as we all know, we have this fundamental theorem of complex dynamics, due to Thurston, which says that any post-critically finite branched covering, with some extra condition, is realized by a rational map if and only if there is no Thurston obstruction. And what is a Thurston obstruction? It is again some invariant multicurve, and that multicurve must satisfy certain mapping properties that were mentioned, for instance, during Mario's talk and during Dylan's talk. So I'm not going to define Thurston obstructions in complete generality; instead I want to define the simplest possible Thurston obstructions, which were already mentioned during Beka's talk and during, let's say, Seryozha's talk. The simplest obstructions are Levy cycles. What is a Levy cycle? It is a collection of curves gamma_1 up to gamma_m such that, essentially, my map permutes them up to isotopy. More formally, for each curve gamma_j, some component of the preimage of gamma_j is isotopic to the previous curve of the cycle and is mapped onto gamma_j by degree 1. So up to isotopy, my map essentially permutes these curves by a homeomorphism. Such a multicurve is an obvious obstruction to F being rational, because it is an obvious obstruction to expansion, and we all know that rational maps are expanding with respect to the orbifold metric. Let's keep in mind that these Levy cycles are obstructions to the map being expanding. Now, what does the decomposition version of Thurston's theorem say? It says that if you have any post-critically finite branched covering map, now without any restrictions, you can always find some canonical multicurve such that F decomposes into maps of three types: homeomorphisms, double covers of torus endomorphisms, or rational maps. So again we have this structure: a decomposition into, in this case, three possible types. Let me remark that if this multicurve is non-empty and F has a hyperbolic orbifold, then the multicurve gamma is exactly the canonical Thurston obstruction that was mentioned during Beka's talk, namely the collection of simple closed curves whose length goes to zero under Thurston's pullback iteration. But to really understand this theorem, what do we need to do? We need to understand how to decompose rational maps, or more generally post-critically finite branched covering maps, along multicurves. We discussed how to do this in the case of surface homeomorphisms; now let's try to understand how decomposition works in the case of branched covers. Here there is one subtlety: I need to define invariant curves rather carefully, not quite as the classical Thurston theorem is usually stated. I need to request the following two conditions. I have a multicurve gamma and I need two properties: first, I want the image of gamma to be inside gamma up to isotopy, which means that each essential component of the preimage is isotopic to some curve in gamma; that's the classical condition. And I also want another one, that each curve in gamma appears as some component of the preimage. So I want to request this condition as well.
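In symbols, one way to write the two conditions just stated (with isotopy always taken relative to the post-critical set \(P_f\)) is:

\[
\text{(i)}\ \ \text{every essential component of } f^{-1}(\gamma),\ \gamma\in\Gamma,\ \text{is isotopic to a curve of }\Gamma
\quad\bigl(\text{``}f^{-1}(\Gamma)\subset\Gamma\text{'' up to isotopy}\bigr),
\]
\[
\text{(ii)}\ \ \text{every } \delta\in\Gamma\ \text{is isotopic to some component of } f^{-1}(\gamma)\ \text{for some }\gamma\in\Gamma
\quad\bigl(\text{``}\Gamma\subset f^{-1}(\Gamma)\text{'' up to isotopy}\bigr).
\]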
So let's look at the example. We see that this purple curve has two preimages, two pullbacks, one isotopic to gamma_1 and another isotopic to gamma_2, and gamma_1 has two preimages, one isotopic to itself and the other peripheral. So that's an example of an invariant multicurve. Now, if I have such a situation, I can actually try to decompose my map, but before decomposing the map I first want to decompose the sphere. A small sphere is a connected component of the complement of the multicurve, which we view as a finitely punctured sphere. Imagine that you have this sphere with a given multicurve; what I do is pinch the curves of the multicurve. In this case I get three components, and each of them we may view as a punctured sphere: this one corresponds to this component, with two old punctures plus one extra puncture coming from the curve itself, and I also have two other small spheres. Now we can add dynamics. In this picture we see gamma, we see the preimage of gamma, and this corresponds to the pinching model for the curve system f inverse of gamma. The important thing is that, since I assumed that gamma sits inside its preimage, for each small sphere on top I can find the corresponding sphere below, up to isotopy: this one is isotopic to this sphere, this one to this one, and this one to this one. So I have this identification between the small spheres below and the small spheres on top, but I also have the dynamics f: this purple sphere (there is only one) covers the purple sphere on top, these two white ones cover this one, and these ones cover this sphere. Now I have dynamics on these spheres, up to the identification of the small spheres on top and the small spheres on the bottom. In particular, this sphere essentially maps to itself up to isotopy, and this sphere maps up here, which is identified with this sphere, sorry, this one is identified with this one, and this one goes here, so essentially I have two cycles among these small spheres. So what I can do is consider, for each periodic small sphere, the corresponding first return map, and we call this first return map a small map; these are the small maps in the decomposition. In general, each small sphere in this decomposition is pre-periodic, so I look at the periodic cycles and take the corresponding first return maps. Now we can completely understand the decomposition version of Thurston's characterization: if I have any post-critically finite branched covering map, I may find some canonical invariant multicurve, which might be empty, such that each small map in the decomposition is either a homeomorphism, a double cover of a torus endomorphism, or a rational map. That's the classical Thurston theorem. So let me remind you where we started from: we started from topological models of post-critically finite rational maps, and we were trying to understand their structure. But one could ask the following question: what are the natural metric models for post-critically finite rational maps? The right natural models were introduced by Laurent Bartholdi and Dima Dudko, and they called them Böttcher expanding maps.
So suppose we have a branched covering map F of degree at least 2. We say that it is Böttcher expanding if two conditions are satisfied. First, I want a metric on the complement of the orbits of the periodic critical points that is expanded by F: I consider the set A infinity, which is the union of the orbits of all periodic critical points, and I want a metric on its complement that is expanded by F. Second, I want a nice local picture at periodic critical points: the first return map at every periodic critical point should be locally conjugate to a power map. Put differently, I really want to have Böttcher coordinates at my periodic critical points. Every post-critically finite rational map is Böttcher expanding. Böttcher expanding maps are really nice, and one can define natural notions of Julia and Fatou sets for them, with properties very similar to the standard properties of the Julia and Fatou sets of post-critically finite rational maps. In particular, the Julia set is a compact, connected, locally connected subset of the sphere; the Fatou set consists of those points that eventually converge to a periodic critical cycle; each Fatou component is simply connected with locally connected boundary; and we can define Böttcher coordinates on the Fatou components, which also give us internal rays that land on the boundary, because the boundary is locally connected. So we have these nice metric models for post-critically finite rational maps. What can we say about them? First of all, Dima and Laurent showed that a post-critically finite branched covering is isotopic to a Böttcher expanding map if and only if it does not have Levy cycles. As we discussed, Levy cycles are an obvious obstruction to expansion, but in fact the converse is also true; the crucial point of the theorem is to show that if there are no Levy cycles, then you can define a nice metric with all the properties we wanted, so that it is expanded by F. And Dima and Laurent showed not only that: they also produced a new decomposition for post-critically finite branched coverings, the canonical Levy decomposition. They showed that there is a canonical Levy obstruction such that each small map in the decomposition is either Böttcher expanding, a double cover of a torus endomorphism, or a homeomorphism. Put differently, they embedded the Levy obstruction nicely inside the canonical Thurston obstruction: given a post-critically finite branched covering, you first find the Levy obstruction; this decomposes the map into pieces that are, so to say, easy — we understand them, or we transfer the work to the mapping class group community — and what is left are the Böttcher expanding maps. So those were the news from Dima and Laurent. The natural question to ask is that these decompositions are essentially non-trivial only for obstructed maps; is there some natural way to decompose rational maps themselves? For rational maps, the Levy decomposition and the canonical decomposition can be trivial. Can we find some natural way to decompose post-critically finite rational maps?
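Before moving on, here is one compact way to record the two Böttcher expanding conditions; this is my own shorthand, and the precise regularity and expansion requirements are the ones in Bartholdi–Dudko, not spelled out here.

% A_\infty = union of the forward orbits of the periodic critical points of F.
% (1) Expansion: a length metric \mu on S^2 \setminus A_\infty such that, roughly,
\[
  \ell_\mu\big(F \circ \gamma\big) \;>\; \ell_\mu(\gamma)
  \qquad \text{for every non-degenerate curve } \gamma \subset S^2 \setminus A_\infty .
\]
% (2) Boettcher coordinates: for every periodic critical point c of period p and local degree d,
\[
  F^{\,p} \ \text{near } c \ \text{is locally conjugate to} \ z \mapsto z^{d} \ \text{near } 0 .
\]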
In principle, we already saw, let's say in Caroline's talk, some ideas for how one could try to decompose rational maps. We could try to use the beautiful construction of Douady and Hubbard called mating of polynomials, which combines two polynomials into a branched covering; you see two pictures of matings here. But what is the problem? The problem is that if you try to un-mate a rational map, this may not always be possible, and even when it is possible, the un-mating procedure need not be canonical. In particular, this picture provides two different un-matings of the same map — the so-called phenomenon of shared matings. So we want some canonical decomposition of rational maps, and the idea is to use the structure of the Julia set, namely touching Fatou components. When I say that two Fatou components touch, I mean that their closures intersect. Let's look at these two Julia sets; there is a huge difference between them. This one has Fatou components with pairwise disjoint closures — this Julia set is in fact homeomorphic to the Sierpinski carpet. But this one satisfies the following: whichever two Fatou components you pick, you can always find a path connecting them that intersects the Julia set in only countably many points. And that is the idea of our decomposition: we want to decompose post-critically finite rational maps into maps of two types — those with many touching Fatou components and those with no touching Fatou components — and the idea is to extract the clusters of touching Fatou components. In particular, for this example of a Sierpinski carpet tuned with a rabbit, we really want to decompose it into a Sierpinski carpet and a rabbit. Let me now show you the theorem that we proved with Dima and Dierk. Before stating it, let me describe the two types of pieces in the decomposition. The first type are the maps with no touching Fatou components, which we call Sierpinski carpet maps: F is a Sierpinski carpet map if its Julia set is homeomorphic to the standard Sierpinski carpet. The second type are what we call crochet maps, or Newton-like maps, or doily maps: these are the maps where we can connect any two points of the post-critical set by a path that intersects the Julia set in countably many points — the maps with many touching Fatou components. Our decomposition theorem says that, given any post-critically finite rational map with non-empty Fatou set, we can always find a canonical multi-curve that decomposes the map into pieces that are either crochet maps or Sierpinski carpet maps. That is the main theorem of this talk that I wanted to communicate to you. Any questions about the formulation of this theorem? Is the canonical multi-curve invariant? Yes, this multi-curve is going to be invariant. I also want to quickly mention that the theorem holds in the more general setup of Böttcher expanding maps, but for most of the talk I will just speak about rational maps for simplicity. So let me tell you some ideas of how we actually prove this theorem, and let me also mention some examples of crochet maps.
The general idea, which is essentially due to Kevin, is that when two Fatou components touch, they must intersect at a pre-periodic point, and therefore there should be pre-periodic internal rays inside these Fatou components landing at the common point. With this idea in mind, we want to connect the post-critical points by a graph that uses only pre-periodic or periodic internal rays. This is really easy to do for polynomials: you essentially look at the external rays and then add some internal rays — you connect everything through infinity — and post-critically finite polynomials are crochet maps. Other examples are given by critically fixed rational maps: there is the construction of the Tischler graph, the union of all fixed internal rays, and this graph is connected, so these are also crochet maps. Then there are post-critically finite Newton maps, where the situation gets much more complicated. Look at this Julia set: we again try to connect everything through the fixed point at infinity, so we can connect the roots — that is easy. We also have to connect this point, which we can do using, so to say, only finitely many internal rays. But then there is this point up here, and there we really need to use infinitely many pre-periodic internal rays — but it is still possible. There is a theorem, due to Russell Lodge, Yauhen Mikulich, Dierk Schleicher, and Kostya Drach, that one can construct graphs, so-called extended Newton graphs, connecting all the post-critical points in a finite graph. And there are of course the matings: we already saw an example of a rabbit mated with the basilica, and there you can also find such connections. So that is the basic idea: we want to build graphs out of pre-periodic internal rays. The next idea is that we can do this iteratively. Suppose we have some finite, invariant, zero-entropy graph G. I take preimages of this graph and look at the component of the preimage that contains G; since the graph is invariant, there is always such a component, call it G1. Afterwards I look at the set K n consisting of the closures of the Fatou components that intersect this graph, and then I take the union over n and take the closure of this union. This object is called the cluster of the graph G. In particular, if you start up here with this graph and look at the corresponding cluster, you will capture this whole filled Julia set: just by taking these preimages, you see the complete filled Julia set. What are the crucial ideas now? The first is that if you have a pre-periodic point on this cluster, you can actually add it to your graph: there is a graph G x — f-invariant, zero-entropy, connected — that contains the old graph and this new pre-periodic point. The second idea is that if two clusters have non-empty intersection, they must intersect at a pre-periodic point, and essentially this means that we can combine the clusters.
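In symbols, the cluster construction just described might be recorded as follows; this is my reconstruction of the notation from the spoken description, so take the indexing with a grain of salt.

% G = finite, connected, f-invariant, zero-entropy graph;
% G_n = the connected component of f^{-n}(G) that contains G.
\[
  K_n \;=\; \overline{\bigcup\Big\{\, \overline{U} \;:\; U \ \text{a Fatou component with}\ U \cap G_n \neq \emptyset \,\Big\}},
  \qquad
  \operatorname{Cl}(G) \;=\; \overline{\bigcup_{n \ge 0} K_n } .
\]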
So we can find some other graph that captures both of these clusters. These are one part of the ideas we use, and they result in what we call the crochet algorithm — we really do a crocheting process, connecting the touching Fatou components one by one. We compute the maximal clusters of touching Fatou components; once the clusters are maximal, they are pairwise disjoint, and once they are disjoint I can look at the boundary multi-curve of these clusters, relative to the post-critical set, and this multi-curve is invariant. I use this multi-curve to decompose my map, and now I am in good shape: I can iterate. I have extracted some crochet maps, and I iterate until each small map in the decomposition is either a crochet map or a Sierpinski carpet map. In this situation, once I find the maximal clusters, my map decomposes into the rabbit and the Sierpinski carpet, and the process essentially stops: for these pieces the maximal clusters are trivial — here the maximal cluster is the whole Riemann sphere, and here the maximal clusters are just disjoint Fatou components. But in principle, after the first iterate you may need to do more steps until you reach the complete decomposition into crochet and Sierpinski carpet maps. In fact, we do one more step, which looks a bit artificial right now — I will try to explain at the end why we do it: after this decomposition we actually glue some crochet maps back together. In any case, this decomposition is already good — a canonical decomposition into small crochet and Sierpinski carpet maps — but for certain topological reasons we glue some crochet maps afterwards. To understand these topological reasons, we need to introduce the notion of a cactoid. I take my post-critically finite rational map and consider the equivalence relation on the sphere that collapses all the Fatou components: put differently, I add to my equivalence relation all pairs of points lying in the closure of the same Fatou component, and then I take the smallest closed equivalence relation containing this subset. The quotient space is Hausdorff, and in fact it is a sphere cactoid: a tree-like collection of segments and spheres. In this case, each Fatou component is connected to any other Fatou component by a countable chain of touching Fatou components, so the quotient is a point. For this mating, the quotient is also a point. For this tuning, the quotient collapses the small rabbits and also all the Fatou components, and you just get a sphere. What is interesting is this map — thanks, Yusheng, for the picture: if I take this quotient, what I actually get is an interval. This point, say, corresponds to the inner Fatou component, and this point corresponds to the outer Fatou component.
In this picture we see these necklaces of touching Fatou components, which means they get collapsed to the same point; but what we also see up here is a Cantor-like collection of curves sitting in the Julia set, and this is exactly what makes the quotient space an interval in this case: this Cantor-like family of curves forbids you from identifying the inner Fatou component with the outer Fatou components, and so you end up with an interval. In the general situation, when you have a sphere cactoid, you should really think of the spheres as corresponding to small Sierpinski carpet maps, the segments as corresponding to Cantor-like families of curves sitting inside your Julia set, and the points as corresponding to small crochet maps — that is the picture you should have in mind for this decomposition. So let me give you the precise formulation of the decomposition result. For any post-critically finite rational map with non-empty Fatou set, there is a unique invariant multi-curve satisfying two conditions. First, each small map of the decomposition is either a Sierpinski carpet map or a crochet map. But I also want something extra, concerning the quotient map that comes from this identification of Fatou components, the cactoid: I want the Julia sets of the small Sierpinski carpet maps to project onto the spheres, I want the Julia sets of the small crochet maps to project to points, and not only that — I want different small crochet Julia sets to project to different points. It is exactly because of this last condition that the algorithm I showed you before has step four, where I glue some crochet maps together. One corollary you can extract from the theorem is that maximal Sierpinski carpet maps are well defined: the Sierpinski carpet maps in our decomposition are the maximal Sierpinski carpet maps you can get. Questions about this result? All right, if not, let me tell you something more that we deduced from this decomposition theorem, namely some alternative characterizations of the topological complexity of Julia sets. We already saw the definition of a crochet map: any two post-critical points can be connected by a path that intersects the Julia set in countably many points. This happens if and only if the quotient space under the Fatou identification is a single point, if and only if there is some finite invariant connected graph G whose intersection with the Julia set is countable, if and only if there is some finite invariant connected graph G connecting the post-critical set whose topological entropy is zero. This graph G is really built out of periodic and pre-periodic internal rays, and because of that it has topological entropy zero. We can also characterize Sierpinski carpet maps, and the so-called carpet-free maps. I did not put the Sierpinski carpet characterization on the slide; let me instead talk about something maybe more interesting, the characterization of carpet-free maps. So what are those? We say that a map is carpet-free if, with respect to our decomposition, there are no small Sierpinski carpet maps — we only see crochet maps.
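Before the characterizations, let me restate the decomposition theorem compactly; this is my paraphrase of the slide, not a verbatim quote, and the symbols are my own.

% f a postcritically finite rational map with non-empty Fatou set;
% \pi : S^2 -> S^2/\sim the cactoid quotient collapsing Fatou components.
% Theorem, as stated in the talk: there is a unique invariant multi-curve \Gamma_f with
%   (1) every small map of the decomposition along \Gamma_f a crochet map
%       or a Sierpinski carpet map;
%   (2) \pi maps the Julia set of each small carpet map onto a sphere of the cactoid,
%       maps the Julia set of each small crochet map to a point, and
\[
  J_1 \neq J_2 \ \text{small crochet Julia sets} \;\Longrightarrow\; \pi(J_1) \neq \pi(J_2).
\]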
Back to carpet-free maps: this happens if and only if the quotient space is a dendrite, which is equivalent to the following topological property: for any two points x and y in the Julia set, we can find some countable set S that separates them. So, really, using this decomposition we can understand the topological complexity of the Julia set. What I want to talk about at the end of this last lecture on the global dynamics are some connections to geometric group theory and self-similar groups. One connection we already saw in the talk of Mario Bonk, and also in the talk of Insung Park if you attended it. One natural question you may ask is: all right, we were measuring the topological complexity of limit spaces — what is a natural way to measure their geometric complexity? In principle you could just look at the Hausdorff dimension, but that is, so to say, not a good invariant for natural fractal limit spaces such as boundaries of Gromov hyperbolic groups. So instead we consider the Ahlfors-regular conformal dimension that Mario introduced yesterday, the infimum of the Hausdorff dimensions over all metric spaces Y that are quasisymmetric to X. This Ahlfors-regular conformal dimension is really a natural invariant for limit spaces and for boundaries of Gromov hyperbolic groups. Let me tell you the result that Insung proved using a criterion of Kevin and Dylan — Dylan talked about this criterion for the Ahlfors-regular conformal dimension, with the critical exponent, during his talk. Insung showed that a hyperbolic post-critically finite rational map is crochet if and only if the Ahlfors-regular conformal dimension of its Julia set is exactly one. I want to note that one implication really uses our decomposition result. In which sense? Suppose the map is not crochet. Then it has a non-trivial decomposition, and having a non-trivial decomposition means that either there is a Sierpinski carpet inside your Julia set, or there is a Cantor-like family of curves sitting inside your Julia set. But both of these are obstructions to Ahlfors-regular conformal dimension one: once you have either the Cantor-like family of curves or a Sierpinski carpet, you immediately get Ahlfors-regular conformal dimension strictly larger than one. The other implication requires, of course, more work, and that was done by Insung. I want to mention briefly — Insung discussed much more on this topic during his talks, and I really encourage you to look at the recordings afterwards — that this result of Insung is another entry in Sullivan's dictionary: there is a parallel statement in geometric group theory that also characterizes the situation where the Ahlfors-regular conformal dimension is one. Naturally you may ask: can our decomposition theorem say something more about the Ahlfors-regular conformal dimension? In principle, there is a conjecture that the Ahlfors-regular conformal dimension of the complete Julia set should be bounded below in terms of the dimensions of the small Julia sets plus a certain quantity Q0 of the multi-curve that Mario mentioned in his talk.
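For reference, here are the invariant used above and Insung Park's theorem as quoted in the talk; the definition is the standard one, the notation is mine.

% Ahlfors-regular conformal dimension of a compact metric space X:
\[
  \operatorname{confdim}_{AR}(X)
  \;=\;
  \inf\big\{ \dim_H(Y) \;:\; Y \ \text{Ahlfors regular},\ Y \ \text{quasisymmetrically equivalent to}\ X \big\}.
\]
% Park's theorem, for f hyperbolic and postcritically finite rational:
\[
  f \ \text{is a crochet map} \iff \operatorname{confdim}_{AR}(J_f) = 1 .
\]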
So there is a natural conjecture that gives a lower bound on the Ahlfors-regular conformal dimension in terms of the Ahlfors-regular conformal dimensions of the small pieces, together with the decomposing multi-curve. That is one connection I wanted to mention; the second is the connection to the theory of self-similar groups. Dima and Laurent have a large series of papers about this — I think at the moment it is four, and we are waiting for the fifth. They show that a post-critically finite branched covering map, with the extra conditions of hyperbolic orbifold and degree at least two, is isotopic to a Böttcher expanding map if and only if its iterated monodromy group is contracting. So you have an algebraic characterization of the expanding maps. They also show that the canonical Levy obstruction is computable — however, not efficiently computable, since at the moment you essentially need a countable search for the Levy obstruction. The work in progress that we are doing with Dima would prove that for crochet maps we have the following algebraic characterization: the iterated monodromy group is generated by an automaton of polynomial activity growth. So there should be an algebraic counterpart to the condition that F is crochet. And not only that: once you start from a Böttcher expanding map, so once you have a finite nucleus, you can use the machinery of bisets to actually compute the canonical multi-curve of this crochet decomposition efficiently. Independently of this progress, our result also implies — using several results: the amenability results of Juschenko, Nekrashevych, and de la Salle, recent work of Volodya, Kevin, and Dylan, plus our decomposition — that if F is crochet, then its iterated monodromy group is amenable (only this implication, not an equivalence). In principle it would be really interesting to study what happens when F is not crochet. There is one conjecture that all groups generated by automata of polynomial activity growth are amenable, and another conjecture that all contracting groups are amenable; the crochet case is essentially the intersection of these two, as Volodya, Kevin, and Dylan observe. If you want to make progress towards these conjectures, you could try to consider examples of maps where the decomposition is non-trivial. In particular, we saw the example where the quotient space is an interval, and I consider this a very natural example for studying the amenability question. I want to finish my talk with some further questions that one may want to look at; some of them we already discussed with Dima. First of all, our decomposition theorem provides a natural way to localize certain obstructions: if you have an obstructing curve gamma and you start from a Böttcher expanding map, then either gamma sits inside the decomposing multi-curve, or it sits inside a small Sierpinski carpet sphere. So if you want to localize and understand certain obstructions, after our theorem you essentially need to understand obstructions for Böttcher expanding maps whose Julia set is a Sierpinski carpet. And once we have this decomposition, it is also very natural to ask: can we use it for the combinatorial classification problem?
There are several obstacles on the way, but in principle one may hope that our result provides a divide-and-conquer strategy. The next question to ask: we have these crochet maps — is there some natural decomposition of crochet maps? There are some ideas; in particular, you could try to decompose crochet maps by resolving local cut points. And finally, the last and maybe most general question one could ask: in my setup I restricted myself to post-critically finite rational maps, meaning that I work in the planar situation. In principle we may look at limit spaces of abstract contracting self-similar groups, which do not have to be planar — is there a natural decomposition similar to ours in this case? All right, that is all, thank you for your attention. Thank you very much. Are there any questions or comments? Did I hear you say that every Thurston obstruction contains a Levy cycle? No. I thought that was the content — I mean, there were many theorems that I didn't understand, but I thought that was one you wrote on one of your slides, near the beginning, about the Levy obstruction. What I was saying — it's really hard to scroll back — is that the Levy obstruction, which might be empty, must belong to the canonical Thurston obstruction: the canonical Levy obstruction that Dima and Laurent construct belongs to the canonical Thurston obstruction due to Pilgrim, but this Levy obstruction might be empty, so it might be nothing. I am not saying that every Thurston obstruction must contain a Levy cycle. My understanding was that this confusion came up because, when you decompose on the topological side, one of the decomposed maps could itself be obstructed; so this is not the maximal obstruction, but one obstruction, into these Böttcher expanding maps. Yes — the Levy obstruction does not find the complete obstruction. You only find the Levy obstruction, and then there might be more obstructions sitting inside the Böttcher expanding maps; and essentially, the last slide I showed says that such an obstruction either sits inside our decomposing multi-curve or inside a small Sierpinski carpet piece. So you showed that Böttcher expanding maps all have this decomposition into these two kinds of pieces — is it clear that you can always take the pieces and glue them back together to get a Böttcher expanding map, and if so, do you know which ones give you actual rational maps? It is more or less clear: if you glue along a multi-curve that does not have Levy cycles, then you can do this; there is a description. Can you say which ones give rational maps? Probably not: if the Böttcher expanding maps you start with are themselves obstructed, then obviously the glued map will be obstructed, and gluing along an obstruction is also not a good idea. I think, Daniel, if you're talking, we can't hear you. Can you hear me now? Yes. Okay, sorry. So, when you have your canonical obstructions — either the one you talked about or, say, Kevin's — what happens on the level of the iterated monodromy group? Can you then decompose the iterated monodromy group, say as an amalgamated product, or something along those lines?
Yeah, that is what one of the papers of Dima and Laurent is about: there is an algorithm that finds the obstruction, and you decompose your group as an amalgamated product. Okay. Laurent, did you talk about this in the Stony Brook seminar? In principle there should be recordings of the last talk in the Stony Brook seminar; Laurent talked about this there before. Okay. So in your decomposition you collapse your crochet maps to points, right? Would it be possible to do the reverse operation — to blow up the point and glue in some crochet map, or maybe glue in some polynomial, so as to, say, move around the Mandelbrot set and obtain different types of maps? In principle, yes: you could decompose and combine maps in the spirit of Kevin's work, but there are all these questions about the realizability of such maps, and we don't know about that. Okay, thank you. I have two questions. The first one: the graph for a crochet map has zero entropy — is that graph canonical in some sense? No, right now it is not canonical; we just find some zero-entropy invariant connected graph. Okay. And the second question: can you say anything about the growth of the iterated monodromy group of a crochet map — is it of exponential growth? I mean, you could have anything. Each polynomial is an example of a crochet map, and already there you see that, say, z squared plus i has an iterated monodromy group of intermediate growth, while the Basilica, let's say, has an iterated monodromy group of exponential growth. In principle I believe that once your Julia set is not a dendrite the growth is exponential, except perhaps in some exceptional cases, but that is really hard to prove. Are there any more questions here or online? Well, if not, let's thank our speaker again. Thank you.
There are various classical and more recent decomposition results in mapping class group theory, geometric group theory, and complex dynamics (which include celebrated results by Bill Thurston). We will discuss several natural decompositions that arise in the study of rational maps, such as Pilgrim's canonical decomposition and Levy decomposition (by Bartholdi and Dudko). I will also introduce a new decomposition of rational maps based on the topology of their Julia sets (obtained jointly with Dima Dudko and Dierk Schleicher). At the end of the talk, we will briefly consider connections of this novel decomposition to geometric group theory and self-similar groups.
10.5446/57340 (DOI)
So, while I was preparing this talk, I realized that the title I submitted, Characterizing Thurston Maps by Lifting Trees, was actually not quite as accurate as the title A Combinatorial Thurston Theory, and I thought that also fits better within the mini course on Thurston theory — though, to be modest, I should say that this is a special case of a combinatorial Thurston theory. In particular, what we are interested in is advancing bridges within complex dynamics — that is the theme of the conference — but a bridge I would also like to emphasize is the one between complex dynamics, low-dimensional topology, and geometric group theory; that is the bridge I am most interested in. Let me also give a plug for this top image: it is an image from ICERM, which is having a semester on braids at the same time as the MSRI program, and for one week both Dan Margalit, my collaborator, and I will be speaking about this related work at ICERM during the MSRI program — just to advertise that. All right. We are focused on Thurston theory — as I said, a special case — and we have this dichotomy of William Thurston that says that if we have a post-critically finite topological polynomial, then either it is equivalent to a polynomial or it has a Levy cycle. Here, by a topological polynomial I mean a branched cover from the complex plane to itself. In the spirit of this being a mini course, I should also define what I mean by equivalence, because I haven't seen it defined elsewhere here. Our topological polynomials are branched self-covers of the complex plane that preserve a set of points P, and we say that two branched covers f and g with the same post-critical set are equivalent if they differ by a change of coordinates of the complex plane — more precisely, if I can draw a commutative diagram, that is, if they differ by orientation-preserving homeomorphisms h1 and h2 that are isotopic rel the marked set. All right. So Thurston's theorem gives us this dichotomy: a post-critically finite topological polynomial is either equivalent to a polynomial or is obstructed in the special way of having a Levy cycle. One interesting problem is to effectivize this theorem, that is, to give an algorithm that determines whether a given topological polynomial is equivalent to a polynomial or is obstructed — and, in the case where it is equivalent to a polynomial, to determine which polynomial it is equivalent to. In the spirit of this being a combinatorial algorithm, we use the characterization of a polynomial via the existence of a Hubbard tree. One direction of this equivalence is due to Douady and Hubbard; the other can be attributed, through a different set of conditions, to Poirier, and that is what we will focus on later. What we give is a combinatorial algorithm to decide whether a post-critically finite topological polynomial is equivalent to a polynomial or not. So what Belk, Lanier, Margalit, and I do is find an algorithm that determines the Hubbard tree if the topological polynomial is equivalent to a polynomial, or else finds an obstruction — and the obstruction it finds is specifically the canonical obstruction.
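To spell out the commutative diagram in symbols — this is my notation for the definition just described verbally, not a quote from the slide:

% f, g : (C, P) -> (C, P) branched covers with the same post-critical set P.
% f and g are Thurston equivalent if there are orientation-preserving homeomorphisms
% h_1, h_2 of the plane, each mapping P to P, with
\[
  h_1 \circ f \;=\; g \circ h_2
  \qquad\text{and}\qquad
  h_1 \simeq h_2 \ \ \text{isotopic rel } P .
\]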
The strategy we use is motivated by geometric group theory: we build a simplicial complex, we define a simplicial map on that complex, and we show that by iterating that simplicial map we converge to something — a finite set in the case of polynomials, or an infinite set with a specific structure in the case of obstructed topological polynomials. Once we have found the set to which we converge, we simply check a neighborhood within that set to find either the Hubbard tree or the canonical obstruction. That is an overview of what we will be doing. By the way, this procedure might sound familiar, and rightly so: we are contributing a new bridge to an already rich theory of algorithms effectivizing Thurston's theory. I have listed names; I won't read all of them out loud. What we are really introducing here is an approach to mapping class groups that has not previously been applied in the context of complex dynamics. There are many ways of approaching the mapping class group, but two of specific interest for understanding topological polynomials are geometric group theory and combinatorial topology. Bartholdi and Nekrashevych use iterated monodromy groups: they view the mapping class group as the outer automorphisms of the fundamental group of, in this case, a punctured surface — really a geometric group theoretic perspective on the mapping class group. In contrast, Belk, Lanier, Margalit, and I use trees, specifically through the Alexander method approach to the mapping class group, which is a more topological approach. What I mean by the Alexander method for mapping classes is this: if I have a surface and I want to understand a homeomorphism of that surface, I can look at curves. The reason it is enough to look at curves is that the Alexander method says that if I have two homeomorphisms of the surface, and the images of a sufficient set of curves — namely a filling set of curves, and maybe a few others — are isotopic under these two maps, then the two maps themselves are isotopic. So we can reduce the isotopy problem for homeomorphisms to an isotopy problem for curves. Similarly, we will use an Alexander method for branched covers. The version I will state is due to Belk, Lanier, Margalit, and myself, but the quadratic case is also a special case of a result of Shepelevtseva and Timorin. It says: if I take a tree and now, instead of looking at its image, I look at its preimage under two branched covers, and the two preimages are isotopic as trees, then the branched covers themselves are isotopic. I also need some additional data: I need to know how the map sends the preimage to the original tree, the actual action on edges. But the information of a tree, its preimage, and the action of the branched cover suffices to determine a topological polynomial — it has enough information to tell you which map we are referring to. As a summary, the Alexander method is a combinatorial-topological approach to understanding mapping classes, and further branched covers, viewed as higher-dimensional analogues of mapping classes. Specifically, what we are doing is viewing a topological polynomial as a tree map.
This echoes some themes from what Dylan was talking about on Tuesday: we can view the equivalence class of a topological polynomial in correspondence with the equivalence class of the preimage of a tree. I say equivalence class because there are two possible equivalence classes we might be interested in: the isotopy class, in order to say that two branched covers are isotopic, and the Thurston equivalence class, or homeomorphism class. The correspondence holds for both types of equivalence classes. So the Alexander method gives us a combinatorial-topological approach to mapping class groups, which we then translate into a combinatorial-topological approach to branched covers. Back to Thurston's theorem: a post-critically finite topological polynomial is either equivalent to a polynomial or has a Levy cycle. To understand this combinatorial-topological approach to Thurston theory, we first focus on the case where f is equivalent to a polynomial, and as a reminder, the idea we are going to use there is that our topological polynomial has a Hubbard tree. A fact due to Douady and Hubbard is that every polynomial has a Hubbard tree, and the Hubbard tree suffices to distinguish the topological polynomial. For instance, the Hubbard tree for the rabbit is shown here inside the Julia set. The feature that will be important to us is that the Hubbard tree is invariant under a lifting operation. What I mean by this lifting operation is: I first take the preimage — I have shown what this means both as an abstract embedding in the complex plane and as an embedding in the Julia set — and then I forget any edges that are not part of paths between points of the post-critical set; I call this operation taking the hull. The two operations together, taking the preimage and then taking the hull, are what I will call the lifting map on trees. I would also like to point out that we need extra data, namely the action of the topological polynomial on the tree, because the tripod shown here is the Hubbard tree of both the rabbit and the corabbit: in the case of the rabbit the preimage rotates the edges clockwise, and in the case of the corabbit it rotates them counterclockwise. So we additionally need the data of how the topological polynomial maps the preimage to the original tree. By the way, many of you may be familiar with the Hubbard tree of the airplane polynomial as a path of length two sitting in the real axis. What we are doing here is allowing abstract embeddings in the complex plane: I could choose three marked points that do not lie on the real line, and the Hubbard tree for the airplane polynomial would still be a path of length two — it would just look a little different. One of the advantages is that we are not constrained by the actual numerical data, or at least I see that as an advantage. The key feature is that the Hubbard tree is invariant under the lifting map, and we will use as our definition of the Hubbard tree the characterization by Poirier, which says that the Hubbard tree of a polynomial is the unique tree that has an invariant non-zero angle assignment under lifting.
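In shorthand of my own choosing: if f is the topological polynomial, P its marked (post-critical) set, and T a tree in the plane whose vertex set contains P, the lifting map just described is

\[
  \lambda_f(T) \;=\; \operatorname{Hull}_{P}\!\big( f^{-1}(T) \big),
\]
% i.e. take the full preimage of T and discard the edges that are not needed to connect
% the marked points.  The Hubbard tree H is invariant, \lambda_f(H) = H up to isotopy,
% together with the extra data of how f maps \lambda_f(H) back onto H.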
What I mean by that is: I can assign to every angle of my tree an angle measure. When I lift, that is, take the preimage, every angle of the preimage naturally receives an angle, and when I forget edges I have to combine angles, which means adding some of them together. Poirier says that a tree is the Hubbard tree for the given topological polynomial if it has an invariant non-zero angle assignment. In this case we see that the angles just rotate, so we get two essentially separate subsystems within this system of linear equations: one determines that the angles around the central vertex are 2 pi over 3, and the other determines that the angles at the outer vertices are all 2 pi. So that is an example of a tree that does have an invariant non-zero angle assignment; let's look at a tree that does not. Here I have a tree that is invariant under some topological polynomial, and it suffices to tell you which topological polynomial by telling you its preimage — that is what the Alexander method says. So I specify the topological polynomial by first giving the preimage. I then eliminate the edges that are not part of the hull of the post-critical set, and when I combine the angles I have to add their measures. I end up with this system of linear equations, and you might notice that two of the equations are in conflict: this one and this one cannot both hold with non-zero angle measure. If theta 3 equals one half theta 5 and theta 5 equals theta 3, they must both be zero. Therefore this tree does not have a non-zero invariant angle assignment. The other condition of Poirier is that there are no periodic edges between Julia vertices. Again, what we are using here is the Alexander method, which says that a tree, its preimage, and the action between the two suffice to determine not just a polynomial but also a topological polynomial. When we can say more — when we can find a Hubbard tree — that Hubbard tree then suffices to determine the equivalence class of our topological polynomial, and we need to consider trees up to equivalence, where I just mean up to orientation-preserving homeomorphism and isotopy. For instance, these two trees are not isotopic, but they are homeomorphic: they are both paths of length two. So if I discover two different topological polynomials, each of which has one of these trees as its Hubbard tree, those two topological polynomials are themselves equivalent. Our goal is to find the homeomorphism class of the Hubbard tree, and the natural question is how to do it. This is where we appeal to geometric group theory: we build a simplicial complex, which we call the tree complex. We fix a set P of marked points in the complex plane and define the tree complex, rel P, whose vertices correspond to isotopy classes of trees and whose simplices correspond to subforest collapses — or, alternatively, subforest expansions. What I mean by this is: given a set of four marked points in the complex plane, these two trees both correspond to vertices of the simplicial complex, and there is this edge e such that contracting it in the tree on the left yields the tree on the right; therefore these two trees are adjacent in the tree complex.
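The little piece of linear algebra used in the non-example is simply the following (angle measures are taken non-negative):

\[
  \theta_3 = \tfrac12\,\theta_5
  \quad\text{and}\quad
  \theta_5 = \theta_3
  \;\;\Longrightarrow\;\;
  \theta_5 = \tfrac12\,\theta_5
  \;\;\Longrightarrow\;\;
  \theta_3 = \theta_5 = 0 ,
\]
% so any invariant angle assignment on this tree vanishes on these angles, and the tree
% fails Poirier's non-zero angle condition.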
Basically, we can think of adding an edge to go from the tree on the right to the tree on the left, and that is what I mean by a subforest expansion. For example, if we have three marked points in the complex plane, the tree complex is the (3,2)-biregular tree, and a portion of it looks like this: here in the center you see the Hubbard tree for the rabbit polynomial, and adjacent to it you see the three paths of length two, each obtained by contracting one of the three edges. A couple of facts about the tree complex will be useful. The first is that the tree complex is locally finite: for every vertex, corresponding to an isotopy class of trees, there are only finitely many edges you can contract, and there are also only finitely many edges you can add without ending up outside the hull of the marked set. An equally important but harder proposition is that the tree complex is connected — in fact simply connected — which can be seen from work of Hubbard and Masur, or alternatively Penner: the tree complex is dual to a triangulation of Teichmüller space. For example, we have our tree complex with three marked points here, and the dual triangulation of Teichmüller space looks like this. What we are now interested in is defining a simplicial map on this simplicial complex, and the simplicial map we use is the lifting map defined for individual trees. If we have our post-critically finite topological polynomial f, we can lift any given tree — remember, lifting means taking the preimage and then removing the edges that are not part of the hull of the marked points — and then we extend this to the entire tree complex. It is known that there is a fixed point, namely the Hubbard tree is a fixed point of this lifting map, and our strategy is to lift any tree until we end up at the Hubbard tree. For example, for the airplane polynomial we can start with any tree we like, such as this one, and the algorithm says: take the preimage; some of the edges of the preimage are not part of the hull of the post-critical set, so forget those edges. The tree we obtain is not the same as the tree we started with, so we repeat the process: take the preimage, discard the edges not in the hull, simplify. The tree we get is one that some of you may recognize as the Hubbard tree for the airplane polynomial, but if we didn't know we were using the airplane polynomial, all we know is that it is not the same as the tree we started with, so we repeat the process again: take the preimage, forget the edges not in the hull, and this time we obtain the same tree. So we have landed on an invariant tree, and our hope is that this process always lands on the Hubbard tree — we found an invariant tree, which is great — but in fact that is not true.
For instance, already for the rabbit polynomial we have a cyclic permutation of the three trees adjacent to the Hubbard tree in the tree complex: here in the center we have the Hubbard tree, and the three vertices at distance one from it are permuted in a cycle of length three. So it is too much to hope for a globally attracting fixed point. The next best possibility would be that there is a finite nucleus, that is, a finite set of vertices such that under lifting we eventually end up in that finite set — and what we prove, and what makes our algorithm work, is that this is exactly what happens. If we have an unobstructed post-critically finite topological polynomial, the lifting map converges to a finite set, a finite nucleus, and this finite nucleus contains the Hubbard tree. Let's look at a proof of this theorem in two steps: the first step is that there is a nucleus, and the second step is that it is finite. The nucleus I propose is the set of periodic trees, and the existence of this nucleus comes from three basic facts. One is that the lifting map is simplicial. To check this, we need vertices to go to vertices, but that just says that when you lift a tree you get another tree. The harder condition is that simplices go to simplices: adjacency in our simplicial complex means that two trees differ by a contraction, so what we need is that contraction of a tree lifts to contraction of another tree. We can see that this happens: if we highlight the edges we plan to contract, their preimages are edges in the preimage, and we can contract those — so contraction indeed lifts to contraction. So f star is a simplicial map, and in particular it is distance non-increasing. That by itself is not quite enough, but we also know that our simplicial map has a fixed point and that our simplicial complex is locally finite, and these three facts together are enough to conclude that every tree is eventually periodic — in other words, under the lifting map all trees are pre-periodic. That tells us what our nucleus is. Now we study the nucleus to show that it is finite: we will show that every periodic tree is at distance at most two from the Hubbard tree. To do this we use Poirier's conditions again, recalling that they say a tree is a Hubbard tree if it has a non-zero invariant angle assignment and has no periodic edges between Julia vertices. We will use these conditions to fix any periodic or invariant tree and bring it closer to the Hubbard tree, and we will show that this can be done in two steps. First of all, we know that our nucleus consists of periodic trees, and we take a power of the lifting map to make a given periodic tree invariant, so that we can work with invariant trees. Then we fix the tree so that it satisfies Poirier's conditions, and we first fix the angle structure. What I mean by that is: if we start with this tree, which does not have an invariant non-zero angle assignment — the one from the previous example — we saw there was a system of linear equations with two equations in particular that forced theta 3 and theta 5 to be zero.
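The finiteness mechanism behind the first step can be summarized, in my own wording and notation, roughly as follows.

% \lambda = induced lifting map on the tree complex, d = simplicial distance, H = Hubbard tree.
\[
  \lambda \ \text{simplicial} \;\Rightarrow\; d\big(\lambda(T),\lambda(T')\big) \le d(T,T'),
  \qquad
  \lambda(H)=H \;\Rightarrow\; d\big(\lambda^{\,n}(T),\,H\big) \le d(T,H) \ \ \text{for all } n .
\]
% The forward orbit of T therefore stays in the ball of radius d(T,H) about H; local
% finiteness makes this ball a finite set, so the orbit is eventually periodic, and the
% periodic trees form the nucleus.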
What we do to fix those angles is fold: we take the zero angles and actually make them zero. What I mean is: I highlight the angles and collapse until I identify the first halves of the edges with each other, going from this path of length three to the four-pod on the right. I would like to highlight that this folding gives something adjacent in the tree complex, because I can contract the two blue edges in the tree on the right to obtain the tree on the left; alternatively, starting with the tree on the left, I can expand at the angles by adding an edge. By the way, this is one way to go from the tree on the right to the tree on the left through a single edge; if we are not careful we could also pass through an intermediate tree, but that is irrelevant to my point, which is that these two trees are adjacent, not that there is no longer path. So this is how I fix the angle structure, and I can think of fixing the angle structure as a forced expansion of the tree I started with: I add in some edges. Next we focus on eliminating periodic Julia edges, which is a little easier. In this invariant tree, the red, marked vertices are my post-critical set, and I am also considering them to be my Fatou vertices in this particular example; the vertices that are not marked are Julia vertices, and the edge between them is what we call a Julia edge. This tree is invariant under the same map I was looking at previously — the map happens to be the three-eared rabbit map, but that is irrelevant to understanding the example. The edges permute; in this case all of the edges are periodic, but only one of them is a Julia edge, namely this middle edge. We want to eliminate all periodic Julia edges, and we do that simply by contracting them in the original tree and contracting their preimages in the lift. After the contraction, we have a lift of the four-pod to the four-pod. So eliminating periodic Julia edges is just a forced contraction. What I am claiming is that, starting with a periodic tree, we can move, by a sequence of expansions at certain angles followed by contractions of certain edges, to the Hubbard tree. What we need in the process is that the second step, the contractions, does not introduce zero angles; but Poirier actually only requires an invariant non-zero angle assignment at the Fatou vertices, and since the contractions are along Julia edges, they do not affect the angle assignment that matters. So, to summarize the proof: if we have an unobstructed post-critically finite topological polynomial, it has a finite nucleus containing the Hubbard tree; the nucleus consists of periodic trees; we take a power so that a tree of interest is invariant; and we show that this tree is at distance two — an expansion and a contraction — from the Hubbard tree. This tells us that the nucleus is contained in a two-neighborhood of the Hubbard tree. So let's look at a couple of nuclei.
The nucleus for the rabbit polynomial consists exactly of the Hubbard tree together with the three trees at distance one from it that cycle periodically. Earlier we looked at an example for the airplane where we lifted and actually ended up at the Hubbard tree, and that is always what happens in the case of the airplane, because its nucleus is exactly the Hubbard tree: all trees eventually lift to the Hubbard tree of the airplane. As an application, Belk, Lanier, Margalit, and I observed that since we have a statement that trees converge to a finite set, we can also make a statement that curves converge to a finite set. This answers, in the case of post-critically finite topological polynomials, a question of Pilgrim: curves have a finite global attractor, meaning that if I start with any multi-curve and lift, I eventually end up in some finite set of multi-curves. The proof is as follows: I start with any multi-curve and find a tree such that the multi-curve surrounds a neighborhood of a subforest of that tree. I know from our theorem that if I lift the tree enough, I end up in some finite set of trees; but the lifted curves must surround subforests of trees in that finite set, and since those are finitely many finite trees, the set of possible subforests is also finite. Therefore I end up in some finite set of multi-curves. I have been focusing on the unobstructed case, so maybe now is a good time to see if there are questions before I move on to the obstructed case. So you used Thurston's theorem to prove the existence of your fixed point — you didn't reprove it? Yes: the claim that there is an invariant nucleus depends on the statement that there actually is a fixed point, and in particular Poirier's conditions require Thurston's theorem. One more question: do you know whether, in the unobstructed case, maybe after iteration, the map you are considering is homotopic to a strictly contracting map, metrically? I do not know; we don't understand this lifting map very well yet, and that is maybe a next direction for us or for other people. Okay, let's move on to the obstructed case. In the spirit of this being a mini course, I will remind us what Levy cycles are: for a post-critically finite topological polynomial, a Levy cycle is a multi-curve with the property that its components cyclically permute, and they map to each other with degree one. A theorem due to Thurston, Berstein, Levy, Shishikura, and Tan Lei — the only place I have seen it written in this generality is Hubbard's book — is that my post-critically finite topological polynomial is equivalent to a polynomial if and only if it does not have a Levy cycle. The proof goes through Thurston's theorem and uses the pullback map on Teichmüller space. I just wanted to advertise this: although it is not at all used in what we are doing, it does somewhat mimic our lifting map, and I think this is, again, an interesting question to consider in the future. What we will focus on are not just Levy cycles but canonical obstructions, first defined by Pilgrim, who showed that an obstructed topological polynomial has what is called a canonical obstruction, defined as the set of curves whose length converges to zero under this lifting process.
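The curve statement quoted a moment ago — the finite global attractor in the unobstructed case — can be written, in my own notation, as:

% f an unobstructed postcritically finite topological polynomial with marked set P;
% for a multicurve \mathcal{C}, let f^{*}\mathcal{C} denote the multicurve of essential
% preimage components, taken up to isotopy rel P.
\[
  \exists \ \text{a finite set } \mathcal{A} \ \text{of multicurves such that for every } \mathcal{C},\quad
  (f^{*})^{n}\,\mathcal{C} \in \mathcal{A} \ \ \text{for all sufficiently large } n .
\]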
But Selinger reformulates it: the canonical obstruction is the minimal multicurve with the property that the exterior of that multicurve, or rather the first return map on the exterior, is actually a polynomial. Okay. So with this in mind, we're going to generalize the notion of trees to that of bubble trees, and we'll use bubble trees to help us find the canonical obstruction. Okay. A bubble tree is a generalization of a tree in that we now allow vertices, line segments or curved edges, and also simple closed curves that surround two or more marked points. And as with trees, we can lift them by taking their preimages, and then we can also take a hull. Taking a hull means we remove any edges or curves that are not part of paths between marked points or essential bubbles, where an essential bubble is one that bounds a disk containing two or more marked points. And the bubble trees will be in the boundary of our tree complex. Here I've drawn it in Teichmüller space, because I think it's a little bit easier to see, but an example of a bubble tree lies along this boundary. Okay. And a proposition of Belk, Lanier, Margalit and myself is that every unobstructed topological polynomial has a Hubbard bubble tree. Thank you: every obstructed topological polynomial has a Hubbard bubble tree. It's also true that every unobstructed topological polynomial has a Hubbard bubble tree; it's just the Hubbard tree with trivial bubbles, so my other statement was also correct. Thank you. And again, these bubble trees lie in the boundary of Teichmüller space, but the theorem that is actually central to the obstructed side of our algorithm is that a process similar to the one we used to find the Hubbard tree within a nucleus works for obstructed polynomials. But because we're looking for something on the boundary, a neighborhood of the Hubbard bubble tree is an infinite set. So if we lift, we'll eventually end up in a 2-neighborhood of the Hubbard bubble tree, except that in this case that 2-neighborhood is an infinite set. So for instance, this is what a twisting of z squared plus i looks like when we lift it: there is a path in the tree complex, and we sort of swap between these two paths, skipping adjacent vertices in the lifting process. Okay. But our two theorems together, that within a 2-neighborhood of any invariant tree there is either a Hubbard tree or a Hubbard bubble tree, give us an algorithm that effectivizes Thurston's theorem. It says: start with any tree and apply the lifting map. At any stage, check a 2-neighborhood of the current tree, either for a tree that satisfies Poirier's conditions, or for a tree that shows you a canonical obstruction; and by "shows you," I mean it has bubbles that form the canonical obstruction. If you don't find a Hubbard tree or a canonical obstruction, just repeat the process with your previous tree. Okay. Actually, let's go back before I move on to another application. Yes. But you said that the 2-neighborhood is an infinite set, so you check everything in an infinite set for Poirier conditions and canonical obstructions, is that right? No. You're checking for the tree at the lifting stage that you're at, and because your tree complex is locally finite, anything you get from lifting a tree will be another tree in your tree complex. And so the 2-neighborhood of any tree along the process is a finite set. So at any stage, you only have to check a finite set. Okay.
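Since the algorithm was just stated in words, here is a minimal pseudocode sketch of it. The four callables are hypothetical stand-ins, not part of the talk: they represent lifting a tree and taking its hull, listing the (finite) 2-neighborhood in the locally finite tree complex, checking Poirier's conditions, and reading off canonical-obstruction bubbles from a bubble tree.

```python
from typing import Any, Callable, Iterable, Optional, Tuple

Tree = Any  # stand-in for a vertex of the tree complex


def decide_topological_polynomial(
    lift_and_hull: Callable[[Tree], Tree],
    neighborhood_2: Callable[[Tree], Iterable[Tree]],
    satisfies_poirier: Callable[[Tree], bool],
    canonical_obstruction: Callable[[Tree], Optional[Any]],
    T: Tree,
) -> Tuple[str, Any]:
    """Effectivized Thurston algorithm as stated in the talk (sketch only).

    Termination is guaranteed by the 2-neighborhood theorems: the lifts
    eventually enter a 2-neighborhood of the Hubbard (bubble) tree.
    """
    while True:
        for S in neighborhood_2(T):            # a finite set at every stage
            if satisfies_poirier(S):
                return ("polynomial", S)        # a Hubbard tree was found
            obstruction = canonical_obstruction(S)
            if obstruction is not None:
                return ("obstructed", obstruction)
        T = lift_and_hull(T)                    # otherwise lift and repeat
```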
Thank you. Thanks. Okay. So I will now discuss an application of this theorem to the twisted rabbit problem. Many of you in the room are experts, but for those who aren't, I'll give a brief overview. We start with the rabbit polynomial. It has a postcritical point that is three-periodic. We mark that set within the complex plane and draw a curve surrounding the ears of the rabbit. We then post-compose the rabbit polynomial with a Dehn twist around that curve. Due to Berstein and Levy, we know that this post-composition of the rabbit polynomial will be equivalent to a polynomial; the three options are the rabbit, the corabbit and the airplane, and we want to know which one. Hubbard made this into a broader problem: instead of looking at twisting by just one power of the Dehn twist, he asked, can we give a function where the input is the power k of the Dehn twist and the output is the polynomial that the resulting map is equivalent to? Bartholdi and Nekrashevych originally solved this problem in 2006, and this was huge, because the problem had been open for 25 years. What we do is give a different solution, using our techniques, which allows us to answer similar twisting questions for higher numbers of postcritical points and even higher-degree polynomials. And so I'll demonstrate how that works. We start with our twisted rabbit, and we guess what the Hubbard tree might be. We then form a lifting map, and in this case, because the postcritically finite topological polynomial is a composition of the rabbit polynomial and a Dehn twist, we undo those in the opposite order: with our guess, we first untwist, then we lift, or pull back, to get a different tree. Because this tree is different, it is not the Hubbard tree; the first one is not the Hubbard tree for this topological polynomial. So we perform the lifting algorithm with the resulting tree: we untwist, then we pull back, and the resulting tree is the same as, or at least isotopic to, the previous one. So this path of length two is the Hubbard tree for the topological polynomial, which tells me that in fact this twisted map is equivalent to the airplane polynomial. The solution to the original twisted rabbit problem, as given by Bartholdi and Nekrashevych, is a 4-adic solution. But we chose to look at the twisted many-eared rabbit problem, which is still 4-adic, but only barely. It works as follows. If I twist the, in this case, three-eared rabbit by powers of the Dehn twist gamma, the topological polynomial it's equivalent to depends only on the congruence class of the power mod four, unless the power is divisible by four, in which case you divide out all the factors of four until you obtain one, two, or three mod four. And so the resulting topological polynomial is equivalent to either an airbus, a Kokopelli, or a basilica tuned with the basilica. And it turns out this answer is true for a twisted rabbit with any number of ears. Here I've shown where you need to add the ears in, and where additional postcritical points would be added in the solution maps; but you can twist an n-eared rabbit, and the solution to the twisted rabbit problem will depend only on the power k mod four, and it will be a generalization of the airbus, a generalization of the Kokopelli, or a generalization of the basilica tuned with the basilica. Oh, I think so. And I think that's true. Exactly. I think that's true.
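As a concrete reading of the mod-four rule just stated, here is a minimal sketch for a positive twisting power k. The reduction step (divide out the fours, then reduce mod four) follows the statement above; which of the three residue classes corresponds to which of the three polynomials (airbus, Kokopelli, basilica tuned with the basilica) is not spelled out in the talk, so only the reduced class is returned.

```python
def reduced_twist_class(k: int) -> int:
    """Reduce a positive twisting power k as in the talk: divide out all
    factors of 4, then take k mod 4 (the result is 1, 2, or 3).  Each class
    corresponds to one of the three solution polynomials; the assignment
    itself is not made here."""
    if k < 1:
        raise ValueError("this sketch only covers positive twisting powers")
    while k % 4 == 0:
        k //= 4
    return k % 4


# examples: k = 16 reduces to 1, k = 7 reduces to 3
assert reduced_twist_class(16) == 1 and reduced_twist_class(7) == 3
```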
And the reason is, is because there's more relations in the mapping class group of a punctured sphere that has more than four punctures. I'll just repeat the question. Oh, yes. Okay. So thank you. So Dierick's question was, is the original, so what we're showing is that the higher mark point cases are easier than the original that Bartolj and Nekorchevich only did the only hard case. And I said yes. I think they did the only hard case. Yes, I think it's, it's really very close, in fact, because if for k equals zero, mod four, then you have to do recursively. The same thing exactly as for the classical rabbit. Now it so happens that if you just have three points, then there aren't three cases, airbus, cocopelly, and basilica. There are just two other cases. So it gets folded a little bit. So the case three, mod four is also something you have to do a recursion on. But really, it's the same kind of argument anyways. Now the way we answered the question is that we considered t to the four k composed with p, and we showed that it's really the same thing as p composed with t to the k. Yes. And we use the similar method. So probably you also, yes, you can see by these graph lifting. Yes, exactly. That you can get rid of multiple of four by doing it in the other direction. Yes, exactly. The difference, or one thing that we were able to make easier in the higher postcritical case is we were able to choose our post set representative differently for different situations where there's only three, where there's only two cosets, there's not a lot of choice of post set representatives or not a lot of need for choice for post set representatives, but we use different post set representatives for the different cases or different stages of lifting even. Yes, thank you. It's very similar. I guess I needed to include my joke, begin an airbus, a cocopelly. For here I've shown only a basilica, we actually need a basilica tuned with a basilica. And maybe I'll just advertise that our method can also be used for pre-periodic polynomials. Bartolian Necker-Shavage also did this. Looking at a twisted z squared plus i, our methods similarly work for such a, for similar cases. So the solution to the twisted z squared plus i was given by Bartolian Necker-Shavage, but like with rabbits we can also generalize to higher numbers of postcritical points that are still pre-periodic. And it turns out that in that case we actually get a periodic solution to the twisted generalized z squared plus i polynomial question when there's four or more postcritical points. Also Justin Lanier and I have been working on some additional twisted, twisting problems. So a couple years ago, this was more impressive two years ago when I gave this talk, but a couple years ago, Howard visited Michigan and asked what happens when you twist a cocopelly. And at some point we thought it was definitely, it definitely had a 16 adic solution, but there's two cases where we're really having trouble to finishing and that's why this is not out yet. But we think it's, well we know it's at least a 16 adic solution and there's two cases that might be even more. Justin and I have also solved a twisted cubic rabbit problem where there the solution is nine adic. Okay, thank you. Thank you. Any questions from the online audience? I have a few questions. I suppose you have an algorithm, so I start with this. 
So you have an algorithm, so maybe there's no hope for this, but do you have any estimate on the time that it would take it to converge and how one might be able to bound this? And is this something, how easy is it to implement and have you implemented it and how does it work in practice? Okay, so there's two questions here that I'll address separately. We have not considered the speed of the algorithm in part because we think that this is a great undergraduate project and we're just waiting for the right set of undergraduates to approach it. And for effectivizing the algorithm, actually Will Warden and with some help from Mark Bell are working on programming it. Thank you. So suppose that you have a rational function, which is post critically finite, but some, are some genuinely pre-periodic points. You can act on this by the mapping class group of the complement of the post-critical set. Do you have any idea what the structure in that mapping class group of the set of obstructed or the set leading to any particular polynomial might be? I really am trying to get it, not just powers of Dane Twiss, but a subset of the mapping class group. No, we also have not thought about it. That's a great direction to think about. One thing I'd be interested in that I think our algorithm might give us some power to consider is what's at, I think, the core of your question, which is what sorts of mapping classes might you apply to obtain an obstructed map? And therefore coming up with some sort of condition on what it takes to obstruct a map or what sorts of twists can cause a map to become obstructed. I think that that's a really interesting question and we haven't thought about it much. Thank you. I have two questions, one question or one answer. So first for what Hamel just asked. So there is such a thing called the EDT0L languages. And that's really saying that you iterate some substitution. Well, you have a finite collection of substitutions that you iterate on finite initial data and then you do some projection. So that's the general structure. If you take an invariant cyclic subgroup, such as the twist of the ES, you're iterating some number of times, multiply by 4 or multiply by 4 plus 3, and then you're projecting to get a rabbit, co-rabbit, or airplane. So I don't think you'll be able to see more than that, but that's still a reasonably small class and you can do it for the whole mapping class group. You're also encoded by generators, for example, and substitutions. This substitution is exactly the inverse of the, well, it's the expanding map on modular space or the expanding correspondence on modular space. Now I had a question for Becker, which is why do you do this with trees and not general graphs so as to address rational maps? It seems you can also lift graphs and project and do iteration entirely symbolically in this way. Well, certainly one of the limitations is that the Hubbard tree is already known, it's already known how the dictionary between Hubbard trees and branch covers work, but with rational maps there's a number of difficulties that we found in different cases. I need a moment to remember what these difficulties are. Well, first of all, you don't have this reduction. You can lift the graph and then what? For trees you lift and then you can cut off all the extra stuff. In general, you won't be able to do it for graphs. So can you hear me? Yes. Thank you for the great talk. I have kind of a related question to what Laster was asking. 
So you pull back the trees and so I guess we learned from Le Bon, I guess it was mainly because last week that the bicep is sort of, well, let's say, I don't know, an algebraic way of encoding, well, pulling back curves. So can you encode pulling back the edges that you use to pull back your trees and sort of, I mean, can you use a bicep to encode that or to make an algorithm that you can, let's say, implement in the computer? I don't think we had considered using biceps to encode the edges. Or maybe something else. I mean, some algebraic or what? I'm not sure. Okay. Yeah. Thank you. I have another question. So when you have a hover tree, in a way, the information that you really need is the tree on the singular set on the critical values. So the critical points that are kind of not in that tree already, they're kind of really only included to tell you which Hobbits class you're in. And if you know, I think, which Hobbits class you're in, if you have your covering, then you can always reconstruct the pre-image of that one. So I was wondering, in your algorithm, if you only did iteration on trees that includes the critical values, but not necessarily all of the critical points, would that still work? Is that enough? So do you mean contain the critical values and all post-critical points? Yes, yes. Sorry, all post-critical points. So the whole Hobbits, forward Hobbits of the critical values themselves, but not necessarily the critical points, that sits out of the way somewhere. We do not require that our trees contain the critical points if they're pre-periodic. Okay. Okay. Thank you. Yeah. Okay. We are running out of time. So we'll ask questions during breaks. So let's thank the speaker again. Thank you.
Thurston proved that a post-critically finite branched cover of the plane is either equivalent to a polynomial (that is: conjugate via a mapping class) or it has a topological obstruction. We use topological techniques, adapting tools used to study mapping class groups, to produce an algorithm that determines when a branched cover is equivalent to a polynomial and, if it is, which polynomial it is equivalent to. This is joint work with Jim Belk, Justin Lanier, and Dan Margalit.
10.5446/57341 (DOI)
Yeah, so my name is Sergey Shemyakov. I finished my PhD here at Aix-Marseille University, and what I will talk about today, transcendental Thurston theory, is part of my PhD thesis and still work in progress. The history of Thurston theory starts in the 80s, when William Thurston answered the question of which topological models of postcritically finite rational maps are realized by actual rational maps; this was nicely written up in the paper of Douady and Hubbard in 1993. That paper, I think, has already been mentioned quite a few times at the conference, for example by Dylan. And this result had quite some importance for the world of complex dynamics. For example, using Hubbard trees, Poirier did the classification of postcritically finite polynomials as dynamical systems. Then a similar classification was done for postcritically finite Newton maps by Lodge, Mikulich and Schleicher. And a similar result was also proved for critically fixed rational maps, due to Hlushchanka, and there are a couple more names to mention here. When speaking about Thurston's results, I cannot skip mentioning his big vision. Besides the topological classification of rational maps, he also had two other quite important, quite big results: the hyperbolization of three-manifolds and the classification of surface automorphisms. About the first problem we had a nice introduction by John Hubbard in the very first lecture of the conference, and it was an important step in the geometrization problem that was finished by Grigori Perelman. And while looking at this picture, I once again want to stress the subtle importance of complex dynamics for geometry, and in particular for hyperbolic geometry. Well, but I work on transcendental Thurston theory; that's what I'm interested in. Attempts to generalize the rational theory have been made ever since the 80s. I would say that the milestone publication came in 2009, by Hubbard, Schleicher and Shishikura: "Exponential Thurston maps and limits of quadratic differentials." In this paper they answered the Thurston question for postsingularly finite topological exponential functions. Many people at the conference have already explained postsingularly finite functions: these are the functions where the singular orbit, or critical orbit, is finite. And I want to mention at this point that all the functions I will be considering in my talk will be postsingularly finite. This paper coined a nice idea, the thick-thin decomposition for quadratic differentials, and it's quite a general tool that allowed them to tackle this transcendental case. The next publication I would like to mention is the PhD thesis of David Pfrang from 2019. In this thesis he generalized the theory of Hubbard trees to so-called homotopy Hubbard trees for all postsingularly finite entire functions. And the final publication I want to mention is by Kostya Bogdanov, "Infinite-dimensional Thurston theory and transcendental dynamics," which was put on the arXiv earlier this year. You will hear some more from Kostya; he will speak right after me. But you can probably notice, by looking at the schedule of our conference, that quite a few talks go into the world of transcendental dynamics. And as you can see, the theory of Hubbard trees has been promoted to its full power for entire functions.
And the tools that you can see in that exponential paper are, I think, general enough, so I want to make the statement that transcendental Thurston theory is ready to blossom. Okay. So let's maybe get a bit more precise about what Thurston theory is. First, you start with a topological model for some holomorphic function. As an example, which already appeared at the conference, I have the example of a topological polynomial: a topological polynomial is a finite-degree branched covering of the plane. A topological exponential, as you can see on the slide, is a topological covering map from the plane to the once-punctured plane. So, once you have picked your topological model, you ask the question: can this postsingularly finite topological model be realized by some holomorphic or meromorphic function g, depending on the class that you consider? And the way to make this precise is Thurston equivalence. So let's have a look at the definition on the slide. If we have a topological postsingularly finite function f, then it is Thurston equivalent to some meromorphic or holomorphic function g if we have the following diagram. On the left-hand side you see your topological function f, from the sphere punctured at its postsingular set to itself, to the sphere with the punctures. On the right-hand side you see an entire or meromorphic function g on the Riemann sphere punctured at its postsingular set, and phi prime and phi double prime are two homeomorphisms which are isotopic relative to the set of postsingular points. Well, let me give a couple of ideas for the people who are new to Thurston theory. I won't go into too many details, but I have to say this. The idea is a particular Thurston iteration, and you run this iteration on Teichmüller space. The Teichmüller space of a sphere with finitely many punctures, which was mentioned, for example, by Lasse in his talk, and which is the one of interest to us, is the space of conformal structures. To be more precise, it is the space of homeomorphisms from the finitely punctured sphere to the finitely punctured Riemann sphere, modulo an isotopy equivalence relation. And the thing that we iterate on this Teichmüller space is the Thurston pullback map. It was already advertised by Becca this morning, but let me describe the diagram on the slides for you. At the bottom of the commutative diagram you have phi n, which represents a point in Teichmüller space; this is some homeomorphism from the punctured sphere to the punctured Riemann sphere. On the left you have your topological model f. So then you take the conformal structure from here, you pull it back by the composition of phi n and f, you uniformize this to get the homeomorphism here, and the diagram closes up with an entire function on the right-hand side. We say that phi n plus 1 is the Thurston pullback of phi n, and this is defined on Teichmüller space. So a couple of notes here. You can see from this diagram that the entire functions g n plus 1 are from the parameter space of the topological model f. You can think about it this way: if you have a topological model for exponential functions, then the g n's will be from the exponential family. And you can also see from this definition that a fixed point of this iteration gives you the solution: if you have a look here, the diagram closes up, and the function g on the right-hand side is the Thurston-equivalent function. Okay?
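Written out as formulas, the diagram just described reads as follows (this is only a symbolic restatement; P_f denotes the postsingular set and square brackets denote Teichmüller classes):

```latex
% Thurston equivalence of f and g:
\varphi' \circ f \;=\; g \circ \varphi'' ,
\qquad \varphi' \simeq \varphi'' \ \ \text{rel } P_f .
% Thurston pullback map: given \varphi_n, define \varphi_{n+1} and an
% entire (or meromorphic) g_{n+1} in the parameter space of the model by
\varphi_n \circ f \;=\; g_{n+1} \circ \varphi_{n+1},
\qquad \sigma_f\big([\varphi_n]\big) := [\varphi_{n+1}] ,
% and a fixed point of \sigma_f closes the diagram, giving the
% Thurston-equivalent function g.
```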
The map phi n plus 1 is only defined up to post composition with an automorphism of C bar. And gn plus 1, as defined, has no dynamical meaning because post composition with an automorphism of C bar will destroy any dynamical. Right. That's true. We have to fix some particular points in our family by a business. That would be more correct. So it turns out that the Thurston problem has a positive solution unless there is an invariant topological multi-curve with a particular impossible geometric mapping properties. And by topological multi-curve, I mean a collection of simple closed curves with at least two punctures, at least two points from post-singular set inside and outside. And one kind of topological obstructions that will be important for my talk is a levy cycle also, something I've already heard. And a definition of levy cycle goes like this. If f is your post-singular finite topological model, then a levy cycle is a sequence of disjoint, essential, simple, closed curves such that some component, gamma i prime of the pre-image of the curve gamma i plus 1 is homotopic to gamma i, relative to f, right? This is your cyclic behavior. And this map from the pre-image to image is a homeomorphism. So it's a one to one mapping. And the conceptual idea of my talk, if you have two entire functions that can be obstructed only with levy cycles, then the composition can also be obstructed only with levy cycles. So maybe that's a good point to ask whether you have some questions. Yes? I think as defined, well, the question was whether levy cycle is a dynamical thing or not. And Dirk has an issue with me stating that levy cycle is a dynamical object. Yes, well, this is the idea of my talk is a concept which I don't want to define precisely. This is just a taste of what you will see. So I don't want to digest this couple of lines on the bottom of the slide too precisely because it will require me some talking and to get into the things that I don't want to get in my 30 minutes that I have for presentation. OK. That's good. So first, I would like to speak about quadratic differentials. Quaradic differentials were the powerhouse of the exponential paper that I mentioned. And the reason for this is that the space of finite norm mirror morphic quadratic differentials is cotangent to the tight mirror space. And this was beautifully shown by John Hubbard on the first talk. Well, and why is this important for us? It's important because contraction in the cotangent space implies convergence in tight mirror space. And maybe I can make an analogy here with something that we all are familiar with. If you have a rubber duck that swims on a D minus two dimensional pond and you push it with a force that is ever decreasing, then at some point your duck will stop. And this is quite similar to what we do, but we do it in discrete time. So a conceptual take away of quadratic differentials is that they help to split the problem into functional question and dynamical question. It will be in how a particular function pushes forward the quadratic differential. And dynamical meaning what combining those push forwards along the orbit, what do they tell us about the dynamics of the function? And let me get a bit more precise. So a mirror morphic quadratic differential q is an object on a Riemann surface that locally looks like q of z, dz square, and this gives you the coordinate change between the charts on the intersect. q is a mirror morphic function. 
And an important notion for me, which I will mention a lot, is the norm of a quadratic differential, which I will be calling mass from now on. If we have a quadratic differential q, then the norm on some domain X is the following integral, the integral of the absolute value of q(z) times |dz| squared, as John expressed it. A couple of properties of mass will be important for me. The first is that mass is non-zero on open domains; but mass can also concentrate on small domains. So for example, if you have a double pole of q, then the mass around this double pole will be infinite. And if you have a couple of simple poles that come close together, then the mass on small disks around them will be very big, and it will decrease exponentially away from the poles, at least in some neighborhood. Okay. I also want to say something about the push-forward of a quadratic differential by a holomorphic or meromorphic function. The push-forward is just a quadratic differential that is defined on the image Riemann surface. An important property for me is the behavior under composition: the push-forward by a composition splits into two consecutive push-forwards. And I will also be speaking about how push-forwards by functions affect the mass of a quadratic differential. Push-forwards by injective functions preserve mass, but usually some mass cancels. This means that the ratio you can see here, of the mass of the pushed-forward differential to the mass of the original differential, is usually less than one. I will say "mass cancellation" to mean one minus this factor, and estimating mass cancellation is the main work that was done in the exponential paper. So maybe again, other questions about quadratic differentials? Okay. So once again. I'm wondering if the push-forward operator is always well defined in this case. Yes. An infinite-to-one mapping is still perfectly fine for push-forwards. The point is that the push-forward of a quadratic differential by a function may have many inverse branches, so you will have several domains that are mapped on top of each other, and their quadratic differentials will interfere in some way, but it is still perfectly well defined. Okay. So let me move closer to the main point of my talk. The first definition I have to give is the definition of a Levy disk. Again, this is not a dynamical definition. Let m be some modulus, let g be an entire function, and let P be a finite set in the complex plane; you might think of P as the postsingular set that we saw earlier. So let A be an annulus of modulus m, as in the picture, and C its bounded complementary component, and we denote by D the pair (A, C), or sometimes we abuse the notation and say it is their union. Then D is an m-Levy disk for the function g if g is biholomorphic on the union of A and C, and A is essential with respect to the marked points P, which means that there are at least two points from the set P inside and outside of the annulus A. So once again, the important things about Levy disks: they have modulus m; the function is biholomorphic, so it is one-to-one on the Levy disk; and the annulus is essential with respect to our marked points. That is a Levy disk, and next comes the definition of Levy type functions, perhaps the most important definition in my talk. I'll give it at various levels of detail. Levy type functions are the functions where mass can survive the push-forward without cancellation only on Levy disks.
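For reference, here are the formulas behind the mass and push-forward discussion above, in the usual notation (these are standard facts about integrable quadratic differentials rather than anything specific to this talk):

```latex
% mass (norm) of q = q(z)\,dz^2 on a domain X:
\|q\|_X \;=\; \iint_X |q(z)|\,|dz|^2 ,
% push-forward by a holomorphic or meromorphic f (sum over inverse branches):
(f_* q)(w) \;=\; \sum_{z \in f^{-1}(w)} \frac{q(z)}{f'(z)^2} ,
% compatibility with composition and the mass inequality:
(f \circ g)_* q = f_*(g_* q),
\qquad \|f_* q\| \;\le\; \|q\| ,
% with equality for injective f; "mass cancellation" is  1 - \|f_*q\|/\|q\| .
```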
So, being a bit more precise: if the mass loss in the push-forward of q by a Levy type function g is small, then almost all mass of q is located on Levy disks for g of big modulus. And big modulus is important. So now to the precise definition of a Levy type function. Let P be a finite set, and again you may think of the postsingular set, and let g be an entire function. Then we say that g is Levy type if for any quadratic differential q with poles in P, any modulus m, and any fraction of mass epsilon allowed outside the disks, there is a fraction of preserved mass eta such that, once the mass cancellation is small, so that the fraction of preserved mass is bigger than eta, there are Levy disks for the function g of modulus greater than or equal to m such that all but an epsilon-fraction of the mass sits on these disks. This definition might sound a bit technical, but in fact, in the exponential paper it was proven that the exponential family of functions are Levy type functions. And another result that we have proven recently is that the functions in this family, given by the formula on the slide, are also Levy type functions. If you just look at the formula, it doesn't tell you too much, but in fact functions of this kind, where p is a polynomial, are the functions with finitely many asymptotic tracts, and that's why they are of interest to us and really have some meaning. And the reason Levy type functions are so nice is the Levy disk principle. The Levy disk principle tells us: if you have some postsingularly finite topological function f, and you assume that all of the entire functions in its parameter space are Levy type, then such a topological model is realized unless there is a Levy cycle. So Levy obstructions are the only possible obstructions for a function f for which all the entire functions are Levy type. And here I have to apologize, because this is the only slide in my presentation that requires some detailed knowledge of Thurston iteration. But I want to say a couple of words about the ideas of the proof of the Levy disk principle. All right, so assume that our function is not realizable. This necessarily means that there is a so-called efficient sequence of functions g n and quadratic differentials q n along the orbit of our iteration in Teichmüller space, where efficient means that the ratio of preserved mass goes to 1. The functions g n are Levy type, and this means that far enough along the orbit almost all mass sits on Levy disks, as you can see here on the picture. And I hope people online can switch from the slides to the blackboard to see what I have drawn here. You see here a bunch of Levy disks for the function g n; we know that almost all the mass sits on them, and they have big modulus. We can map them forward by the function g n onto some other annuli, again of big modulus. And when we do this sufficiently many times, mapping them around, it necessarily has to happen for some Levy disk like this that, in a finite number of iterations depending only on the combinatorics of the function, it travels around and its image comes back into its own homotopy class. Okay, so this means that the core curve of this annulus has cyclic behavior, and if you remember the definition of a Levy disk, the biholomorphicity of the map on it was a key point of the definition. So this means that all of the maps between the core curves will be one-to-one, and these red curves on the picture give you a Levy cycle. So this is the proof: if the function is not realizable, then you get your Levy cycle. Okay, so the...
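The definition of a Levy type function can also be written symbolically, with the quantifiers in the order they were stated above (Q_P denotes the integrable quadratic differentials with poles in the finite set P; this is only a restatement of the slide as described):

```latex
\forall\, q \in Q_P,\ \ \forall\, m>0,\ \ \forall\, \varepsilon>0\ \ 
\exists\, \eta<1:\qquad
\frac{\|g_* q\|}{\|q\|} \,>\, \eta
\;\Longrightarrow\;
\exists\ \text{Levy disks } D_1,\dots,D_k \text{ for } g,\ \ \operatorname{mod}(D_i)\ge m,
\ \ \text{with}\ \ 
\|q\|_{\mathbb{C}\setminus (D_1\cup\cdots\cup D_k)} \,\le\, \varepsilon\,\|q\| .
```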
So you're claiming here a theorem which 50 people have worked on 20 years, and I feel you were a little swift. OK. I didn't actually manage to understand your definition of a Levy disk. I'd like you to go back to it. I mean, if you could actually justify that that's theorem is true, well, you have a fields metal coming to you. That's an interesting statement. I mean, it's not something to dismiss by just one slide. Well... I would really love to go into the details of the proof and explain them to you. Unfortunately, I don't think that I have time for this right now, and also I'm pretty sure that some of the people in the audience won't be able to really follow the discussion, but I definitely would be happy to convince you that this principle, which I don't claim to be proven at the moment, that this principle is true and I'm really going with the fields metal. OK. Yeah, but let me maybe quickly... Part one, we've all known... Yes. Well, my claim would be that the Levy disk principle is somehow... You can already read this through the lines in the exponential paper at this moment. So I don't think that this is such a groundbreaking thing, but again, let's maybe discuss it a bit afterwards, because if the Levy principle is a dynamic type statement, then the following theorem that actually came to be a theorem is a functional statement. It does not involve any dynamics, and it tells you if F and G are two Levy type functions, then the composition is also Levy type. Well, and we're kind of... I gather that the condition that what makes this not absolutely revolutionary is the hypothesis that all the functions G and Levy type. Right. Proving that all the functions of Levy type is really all the work that has to be done in order to put this to practice. Okay, so that's really the hypothesis that makes this theorem. Right. Okay. Okay. Yeah, so let us get back to our functional statement. And I was saying that we already kind of used this theorem as an idea for the Levy disk principle. So assume that the mass constellation for the composition is small, and then the mass constellation for each of the functions must be small due to the properties of push forwards that I mentioned earlier. So you take the Levy disks, you take the Levy disks for function G, and you map them forward, and the disks, well, kind of you can see it on this picture, the images of the disks will be almost the Levy disks for the next function F, just because they share almost all mass of correct differential Q. And then you have actually to think about a bit about the geometry of what's going on, and you have to tweak the mass constellation ratio. But in the end, you can get arbitrary big modulus and arbitrarily small fraction of mass outside of the Levy disk for the intersection of these images of Levy disks for G and Levy disks for F. So this is also, I wouldn't call this theorem a ground breaking result. But what do the Levy disk principle and the Levy composition, the Levy disk composition theorem seem to imply is that if you have two functional families for which, for each of them, Levy abstraction are the only possible type of abstraction, then for the composition, also Levy abstraction should be only possible type of abstraction, and your topological model should be realizable otherwise. So where do we go from here? One goal is to put more functions in the Levy type. This is perhaps what John would be excited to hear about. And I also work on this in my PGD thesis. So this will be in the thesis as well. 
There is some more work coming in this direction. The next thing: you can actually add critical points to the framework of Levy disks. If you remember, Levy disks mapping one-to-one was an important piece of the definition, but you can actually relax it. You can allow branched mappings, and you can develop a theory of so-called branched disks and a branched disk principle. For example, this helped me to get a proof for the trigonometric family, which, as you can see, is a composition of a function with two critical points with the exponential, a Levy type function. And maybe a small remark here: you see that the rational function has a pole, but the composition is entire, and I work with entire functions; it is sufficient for me that the composition is entire. Well, finally, perhaps the most ambitious goal for me is to add poles into the framework of Levy disks. This would require quite some work, because we would have to see all the other possible obstructions, not only Levy cycles; all the other obstructions come with poles. And this would require understanding the rational paper, by Douady and Hubbard, well enough. So this is a challenging task, but at this point we believe that it should be doable. So I hope I told you some interesting things about transcendental Thurston theory, even for the people who were not really aware of Thurston theory before this conference. And thank you for your attention.
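As a concrete instance of the remark about the pole (an illustration of my own, not taken from the slides): the cosine is exactly such a composition.

```latex
% J(w) = (w + 1/w)/2 is rational with two critical points (w = \pm 1,
% critical values \pm 1) and a pole at w = 0; since e^{iz} never vanishes,
\cos z \;=\; \tfrac12\big(e^{iz} + e^{-iz}\big) \;=\; J\big(e^{iz}\big)
% is entire, with singular set \{\pm 1\}.
```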
Thurston's topological characterization theory asks whether there is a holomorphic dynamical system that realizes topological (even combinatorial) data, this often allows to describe all possible dynamical systems in a certain parameter space. I work on extending Thurston topological characterization theory to different classes of transcendental functions. In my talk I will start with some explicit families of functions for which we have established an extension of Thurston's theory, and then describe further extensions by compositions of such functions. The presented ideas are part of my PhD thesis under the supervision of Dierk Schleicher. This is work in progress.
10.5446/57316 (DOI)
Okay, thank you very much. So perhaps an apology to begin with. I don't really feel like I'm an expert in transcendental dynamics; I don't know a great deal of what's going on. I have thought about a few particular problems, and so I think I have to restrict myself to the small parts of transcendental dynamics that I've thought about. So it's going to be a little bit narrower than the terrific talk this morning. Also, those of you who have heard me speak before on this topic will have seen lots of these slides and a lot of this material before. It's intended to be an introduction for people who haven't seen so much of this side. I'm worried, though, that it will be too boring for the experts and too quick for the non-experts, but we'll try to land somewhere in the middle. So mostly what I want to talk about are these things, the Speiser class and the Eremenko-Lyubich class, which Anna introduced in her lecture this morning. To state a few theorems carefully, we're going to have to talk a little about quasiconformal mappings, and so I'll spend a couple of slides on that. Then there are two theorems I want to describe. One gives the structure of what Eremenko-Lyubich class functions look like. The second is something she and I refer to as quasiconformal folding, which describes what Speiser class functions look like; they're a little more specialized, so they have a little extra structure, and I'll just try to describe that. Then there's a long list of possible applications: once you understand what these classes of functions look like, you can try to prove various things or do various things with them. And I want to talk about two of these possible applications. One is specifying the singular orbits. Now, the singular points and the singular orbits are very important in dynamics, and it's reasonable to ask: what can they possibly be? And the answer is pretty much anything you want, so we'll talk about that. The second thing I'd like to talk about is some work with Lasse Rempe on equilateral triangulations of surfaces. That doesn't sound so dynamical, but basically the proof is closely related to building Speiser class functions, and it does give rise to a lot of new finite-type dynamical systems. So it sort of introduces a new branch of holomorphic dynamics, which I hope will be of interest in the future. So let me just repeat what Anna said before. The singular set, I take to be a closed set: it's the closure of the set of critical values and the finite asymptotic values. A finite asymptotic value means that you have some curve going off to infinity and the function has a limit along it. Those are not going to be so critical to us today, because most of the examples we construct are going to have critical values but either no asymptotic values or only a very few of them; I'll try to point them out when they occur. The two main classes we're interested in are the Eremenko-Lyubich class, which has a bounded singular set, so all the singular values lie in some disk. Usually, for convenience, we'll think of that as the unit disk, but you can always take an Eremenko-Lyubich function and multiply it by a constant: if the singular values are all trapped within radius 10 and you divide by 10, then they're all trapped within a disk of radius one. So usually we just take the bounded singular set to be inside the closed unit disk. And the Speiser class is where the singular set is finite.
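In symbols, the two classes just described are (standard definitions, matching what was just said):

```latex
% singular set of a transcendental entire function f:
S(f) \;=\; \overline{\{\text{critical values of } f\}\;\cup\;\{\text{finite asymptotic values of } f\}},
% Eremenko-Lyubich class and Speiser class:
\mathcal B \;=\; \{\, f : S(f)\ \text{bounded}\,\},
\qquad
\mathcal S \;=\; \{\, f : S(f)\ \text{finite}\,\},
% and after multiplying f by a small constant one may assume S(f) \subset \overline{\mathbb D}.
```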
And again, for simplicity, it's very common that we just restrict to the case when there are two or three singular values, so a rather small number. Questions so far? Okay. Well, the point about an Aramaical Lubitsch function is that when the singular sets are in this small set, so again, I'm assuming here that they're in the unit disk, as I said before, that if you look at the places where the function is bigger than one, there are no singular values there. And so what one can show is that the components of where f is bigger than one are simply connected components and we call these tracks. And so the tracks here are these dark green things. These are the places where f is bigger than one in absolute value. It's left over. So the thing which is, well, let me choose the white region, which now I'm shading green to make it look like the green region, which is maybe silly. The region between that, that's where f is less than one. So f is less than one on this white region. And that's where, if there are going to be critical points, all the critical points are scattered around in this region. Okay, now what does the map look like? So this is the place where it's bigger than one and it'll place that it's less than one. On the place that it's bigger than one, those tracks, we can think of, they map to the outside of the unit disk, but it's convenient to think about them first mapping to the right half plane. So first going over here and then going down to the outside of the circle because the first map is conformal. So just a one to one holomorphic map of each track into the right half plane and then followed by the exponential map. And so we understand conformal maps pretty well. There's a whole literature from geometric function theory about understanding conformal maps to a half plane. And the exponential function, well, we already understand that very well as well. That's a very explicit function. What's mysterious about these functions is that the white part maps into the unit disk. And that's where all the complication is. And so in some sense, if you have an Aramaic-Lubic function, where the function is larger than one, we have a very good picture of what that looks like. And where it's less than one, the place where the singular values can occur, that can be quite complicated. And that is what is sort of different about each individual function. Now, one additional thing that you can do is on the half plane, it's very natural to put dots, say unit disk, evenly spaced. Usually the distances here we think of as being 2 pi. So these all map to a single point on the unit disk. And then you can look at their pre-images. And these pre-images will be an infinite number of dots on each of the tracks. This introduces some geometry. So this is sort of telling us there's a natural scale, that this is sort of the natural unit distance on the boundary of these tracks. It corresponds to basically a unit distance in the half plane. And part of what I want to express to you today is that in the Aramaic-Lubic class, these dots are not so important. It's really just to shape the topology of the tracks, which is important. For the Spicer class, when we only have a finite number of singular values, then the distribution of these points, how the boundaries map over, what are the unit sizes on the boundary, those become more important. So the Spicer class involves some geometry, whereas the Aramaic-Lubic class is primarily just a topological thing. So here's our picture of Aramaic-Lubic functions again. 
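Concretely, the factorization just described reads as follows on each tract, that is, on each of the unbounded components of the set where |f| > 1 (the "tracks" above):

```latex
% \tau_j : \Omega_j \to \mathbb H_r = \{\operatorname{Re} w > 0\} conformal:
f\big|_{\Omega_j} \;=\; \exp \circ\, \tau_j ,
% so |f| = e^{\operatorname{Re}\tau_j} > 1 on \Omega_j and |f| = 1 on \partial\Omega_j.
% The dots are \tau_j^{-1}(2\pi i\,\mathbb Z) \subset \partial\Omega_j:
% equally spaced points on the boundary of the half-plane, all of which
% map to the single point 1 on the unit circle.
```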
So we have the tracks mapping conformally to the outside, and then we have the white region that maps to the disk. Now the tracks are connected, so the white region that separates them is simply connected, so you can actually map it. There's a map of the disk to that white region by the Riemann mapping theorem. And F maps it to the disk again. And so when you put these things together, what you get is a mapping of the disk into the disk, but it can be many to one. And this is what we call an inner function. So in particular, the boundary points here go to boundary points on the track, they go to boundary points on the disk. And so in fact, the region between the different tracks is naturally associated to an inner function of the disk to the disk. And this inner function is basically responsible for all the critical values of the function. A common example of an inner function, for example, is a Blaschke product, something of this form. So if you study some complex analysis, you may have seen this. All right. So the question is, this is an Aramaic-Lubic function looks like if you're given a function, then you have this picture. You have conformal maps on the tracks, and you have the inner function on the interior, on the complement of the tracks. The question is, can you go backwards? Suppose I don't give you an Aramaic-Lubic function, but I give you this picture. So I give you this. Is there an Aramaic-Lubic function that goes along with this? So if I just give you the tracks, and I give you these towels, is there a way to build an entire function that sort of fills in the gaps between the tracks? And the answer here is yes, but it'll take a little bit of work to get there to make it a little bit more precise. A model, oops, I didn't mean to fix that. I just wanted to highlight. So a model is a set of tracks. Omega is a set of unbounded Jordan domains. So the Omega is the thing that looks like this. There'll be a lot of these components, instantly many of them. And then from each track, you have a mapping to the upper half plane. And I don't say anything about what goes on inside. I'm just giving you that on the tracks, we have these tracks, and then we have conformal maps to the half plane. And given the tracks, the conformal map is determined. If I'm going to map a track to a half plane, really that is completely determined up to a factor of a lambda. So after I map the half plane to itself, I can shift it upwards or I can multiply by a lambda. And that's all the freedom I get. So the track pretty much is the conformal mapping. But does every model give an Aramunka-Lubich function? And I'll just cheat and I'll tell you the answer is yes. But to state the theorem, I need to say something about quasi-conformal mappings. So questions so far? You're doing OK? Can anybody hear me or am I just talking to myself? It's fantastic and we hear you very well and we already have a question. Is there any reason to think that there are finitely many tracks? There could be. The case when there's finitely many tracks is particularly interesting. If you have infinitely many tracks going out to infinity, then some of them have to be quite narrow. And if the track is quite narrow, that means that the function F has to grow to infinity very fast inside it. So a wide track, something that has like a big angle here, that can allow for a fairly slowly growing function. But a track which is narrow has to go to infinity super fast. 
So something that has an angle in it can grow sort of like e to the z to the alpha, to what we call finite order. But a track which say, for example, has angle zero, reviewed at infinity, it has to basically go like e to the z to the infinity if that makes any sense. That's to go faster than any, that's to have what we call infinite order of growth. So all the examples of functions that have what we call finite order only have finitely many tracks. Well, that's at least philosophically true. I might be making a technical mistake, but that's the main idea. So many of the common examples of finite growth, entire functions in this class, they will only have a finite number of tracks. But either case can happen. Does that address the question? Yes. Yes, it was a yes. Okay. No, because the speakers don't hear if we don't speak in the microphone. So John Hubbard said yes, but well, please keep going. Okay. Yeah, thank you. As I said, I'm not reading the chat and I'm not lip reading any of the pictures that come over. So I'll depend on Anna or this, the questioner to just break it. So QC mappings are homeomorphisms of the plane to the plane. We can think of them as being differentiable. And technically they're differentiable almost everywhere, but they send infinitesimal ellipses to circles. And the ratio of these ellipses is bounded. That's the critical thing. So you can't have arbitrarily thinner ellipses occurring here. You can easily teach a whole semester long course on quasi-conform mappings. I just sort of want to summarize the main things in a couple of slides. In terms of differential equations, you can take the Z bar derivative over the Z derivative that ratio gives you mu. And this complex number encodes the ellipse field. Basically the absolute value encodes the eccentricity. What's the ratio between the minor and major axes? And the argument of this number encodes the direction of the major axis. What direction does it point? And the fact that mu encodes this information, basically you just have to compute the, the linear approximation to f in, say you think of it as a function of two real variables, and you can work it out. And this is what it comes out to be. Now the eccentricity being bounded corresponds to mu being less than one, strictly less than one. So k being less than infinity corresponds to this mu being less than one. And so we have a dilatation which is a, which is associated to every quasi-conform map. I tend to like to think of things discreetly. And one of the most common examples of QC mappings is just that you take a triangulation, you can map each triangle affinely to an image triangle. So very often we'll have a domain and we'll cut it up into pieces like triangles and map it forward. And the new pieces are triangles and the mapping is just distorting each triangle in an affine way. And that, what quasi-conform means there is that the angles are basically being distorted by a bounded amount. That assuming that you began with all, say, right 45 degree triangles, it would be quasi-conform if and only if the degree, the angles in the image stay away from zero and 180. And so it's really a, instead of preserving angles, it's a bounded angle distortion condition. For people who are a little bit more expert, if one takes two triangles and fixes them to have one common edge, say zero to one, then you can easily compute the affine map which just takes the third vertex in each case to the other one. 
And in this case, mu turns out to be a b minus a over b minus a bar, which is the pseudo-hybrid distance between these points in the upper half plane. So if you think of zero one being on the real line, then when these points are very close to each other, you're not distorting the triangle very much at all. You have a very small quasi-conformal distortion. But if you want to map it to a triangle which looks very different, you need a lot of distortion. And it turns out it's just the sort of like the hyperbolic metric in the upper half plane, what we call the pseudo-hybrid metric. So this makes it very computable when you have an affine map between triangulations. So something like this, you can draw a picture and you can easily check that this is a quasi-conform mapping. And if you're interested, you can check what this triangle goes over to here with zero distortion. So mu is zero and this one, nothing happens. And for example, this triangle goes over to this triangle. Again, mu is zero. It's similar. You haven't distorted it. A triangle like this gets mapped to this and then you can see, aha, this one has been distorted. So that has a non-zero dilutation there. Now, the amazing thing, the thing that's so useful is that given the QC mapping, you get this dilutation, but you can go backwards. If you're given a dilutation, which has the bound that its supremum is strictly less than one, then there is a quasi-conform mapping that has that dilutation. Here I've written it as going to disk to disk because I was thinking of the dilutation on the disk. But if it was defined on the plane, the same thing would be true. If you had a measurable function on the plane, which is whose L-antianity norm is less than one, then there's a QC mapping of it. And so we're going to make use of this. And in particular, we're going to use a corollary of it, which says, suppose that we have a holomorphic function. So the holomorphic function is here in the middle. Since it's holomorphic, it sends circles to circles or it sends ellipses to ellipses of exactly the same eccentricity. That map can just turn them a little bit, but it doesn't change the eccentricity. I can follow that by a QC mapping. And the QC mapping distorts. So it can take an ellipse and make it into a circle. And when I compose these two things, I will have ellipses going to circles now. And this is what we call a quasi-regular map. It would be quasi-conformal, except the holomorphic part could be multiple to one. It could be two to one or infinite to one. And QC maps are homeomorphisms. So basically, a QC mapping, which is not necessarily one to one, is what we call quasi-regular. But if you just want to think of it as always being a holomorphic map followed by a quasi-conformap, that would be fine. But the mapping theorem says is that, given this quasi-regular composition, you can compute its dilatation, mu, and then when you apply the mapping, you can choose h so that it exactly undoes the distortion. So h will send, so really, the mapping says you can find a mapping h inverse, which takes the ellipses and maps them all to circles. Well, if the inverse does that, then h takes circles to ellipses. F takes those ellipses to other ellipses. G tapes those circles to circles. So the composition takes all circles, all infinitesimal circles to infinitesimal circles. So it's a holomorphic mapping. And this is what we call a straightening. 
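To make the triangle computation concrete, here is a small sketch (my own, under the convention that the common edge is the segment from 0 to 1 and that a and b are the third vertices, say in the upper half-plane) that computes the Beltrami coefficient of the affine map fixing 0 and 1 and sending a to b; its modulus is |a - b| / |a - conj(b)|, the pseudo-hyperbolic distance mentioned above.

```python
def beltrami_of_affine(a: complex, b: complex) -> complex:
    """Beltrami coefficient mu = L_zbar / L_z of the R-linear map
    L(z) = alpha*z + beta*conj(z) with L(0) = 0, L(1) = 1, L(a) = b,
    i.e. of the affine map sending triangle (0, 1, a) to triangle (0, 1, b)."""
    beta = (b - a) / (a.conjugate() - a)   # from alpha + beta = 1 and alpha*a + beta*conj(a) = b
    alpha = 1.0 - beta
    return beta / alpha


# identical triangles are not distorted at all: mu = 0
assert abs(beltrami_of_affine(0.5 + 0.8j, 0.5 + 0.8j)) < 1e-12
```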
So the basic idea is: whenever you want to build a holomorphic function with certain properties that you like, you can usually try to just build a quasiregular thing by hand. For example, you could triangulate a region and then map it over to some other triangles and say, aha, this mapping has the property I want, but it's not holomorphic. And then you have to pre-compose it with a QC mapping and get a holomorphic one. And with luck, pre-composing by h doesn't destroy the property that you were so interested in. Most of the talk is about how this works and about making it work in different situations. So for the model theorem, what happens is: suppose you give me any tract. I've only drawn one here, but again, there could be many, many of these tracts. This tract maps to a half plane; the light blue is the tract, it maps to the dark blue. We have to smooth the tract off a little bit, and the way we smooth it off is by taking a vertical line at some rho bigger than zero and taking its preimage, which is a nice analytic curve inside the tract. If we had some other tracts, we would smooth them off in the same way, by mapping over to the half plane and pulling that line back. Now, what the model theorem says is this. We are trying to build an Eremenko-Lyubich class function by taking this conformal map on the tract, and we're asking: can we find an inner function on the rest of the plane which matches up with it on the boundary and defines an Eremenko-Lyubich function? If you work in the quasiregular class, you can do exactly this. What the theorem says is that if you restrict yourself to this region, the smoothed-off region, then you can find a quasiregular function which is exactly equal to what you want: it's equal to tau followed by the exponential function taking you to the outside of the disk. So that part you can get exactly. And off omega, out here, the function is less than e to the rho in absolute value, which is about one; rho is going to be a small number, so e to the rho is just a little bit bigger than one. And this is what we wanted. We have exactly the map we want, and everything outside the tract is basically mapped into the unit disk, so if we multiply by e to the minus rho, it maps exactly into the disk. The only place where we don't really quite understand the function is in this little strip between where we smoothed off and the outside. On the outside, we know it's mapping into the disk; we're happy with that. On the inside, in this smoothed-off tract, we know exactly what the map is, it's exactly equal to something we know; we're happy with that. In between, it's continuously interpolating between these two behaviors, and that's where we have to pay a little bit of attention. But this is basically good enough, because we're given something which is quasiregular, and now we can use the measurable Riemann mapping theorem to correct this map and make it into a holomorphic map. So this is the picture of how one can create any Eremenko-Lyubich function: every Eremenko-Lyubich class function looks like this picture, and given this picture, there is an Eremenko-Lyubich function which looks like it, except for the quasiconformal correction you have to make. But if you're a topologist and not an analyst, and you don't care quite so much about the metric but only about the shape of things in general as you go off to infinity, this completely describes what Eremenko-Lyubich functions can look like. Questions about this? Here are some technical remarks; maybe I won't go into these so much.
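Summarizing the model theorem in symbols (a restatement of what was just said, with Omega the union of the tracts and Omega_rho the smoothed-off tracts obtained by pulling back the vertical line at rho):

```latex
% There is a quasiregular g on the plane with
g \;=\; e^{\tau}\ \ \text{on } \Omega_\rho,
\qquad
|g| \;\le\; e^{\rho}\ \ \text{off } \Omega ,
% and by the measurable Riemann mapping theorem there is a quasiconformal
% map h so that g \circ h is entire; after multiplying by e^{-\rho}, this
% entire function has its singular set in the closed unit disk, i.e. it
% lies in the Eremenko-Lyubich class and realizes the model up to the
% quasiconformal correction h.
```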
How do we improve from a bounded singular set to a finite singular set? This is a slightly neater version of the picture I just drew you. We have a bunch of colored tracts, the places where the function is bigger than one; they map conformally to the right half plane and then to the outside of the disk. The regions between them are where the function is bounded; that's some kind of inner function. It has critical points all over the place, and they can go to singular values scattered everywhere in the disk. What you can think of is that we have these rigid tracts: the conformal map is more or less determined and the exponential function is completely determined, so I don't have a lot of freedom on the tracts. The inner function is some kind of glue which fills in the gaps between the different tracts. If you've ever done some woodworking, you might know that when you're attaching molding to a wall and you come to a corner, you have to cut a very careful 45-degree angle in the molding so that the pieces fit together and it looks seamless when you paint over it. If you're like me, you can never do that: the angles don't quite come out to be 45, there's always a gap between the two pieces, and you have to fill it with wood putty, sand it off, and paint it over; the wood putty hides the gap. In this picture, that's what the inner function is doing: you have these rigid pieces and the inner function is hiding the gap between them. We have to throw that away when we go from allowing infinitely many singular values to finitely many. Suppose that we just allow two critical values, at plus and minus one, and we draw a line segment between them. Then we look at the pre-image of that segment: the pre-images of these two points under F, with the line segment pulling back to analytic arcs that connect them. Now what we're going to do, instead of taking the exponential map from the half plane to the outside of the disk, is take the cosh map. The cosh map is basically this: you take the outside of the disk and compress it onto the complement of the segment by the map z plus one over z, all over two, the Joukowski map. What happens now is that we have mappings from the complementary components of this tree to the right half plane, and they get mapped onto the outside of the segment. What we've done is gotten rid of the gap between the different tracts. It's the same kind of picture, but now instead of having a nice white region between them where we can put the wood putty or the glue, there's nothing. We want to attach each tract directly to the next one, with nothing in between to mediate. This is harder. We can't use any putty; we actually have to cut everything precisely so it fits. Again what we're going to do is build a quasi-regular function G which does exactly this: if we cut out a neighborhood of the boundary here, we build a G which is exactly the map we want. Then, when we have to fill in across the boundary, we do that in a slightly different way, not using an inner function. This requires a couple of assumptions on the tree; I don't know how to do this in complete generality.
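For reference, the maps being combined here are standard, although the normalization below is my choice:
\[
  \lambda(w) \;=\; \tfrac12\Big(w + \tfrac1w\Big)
  \ \text{(the Joukowski map)},
  \qquad
  \cosh(z) \;=\; \lambda\big(e^{z}\big).
\]
$\lambda$ maps $\{|w|>1\}$ conformally onto $\mathbb{C}\setminus[-1,1]$, so $\cosh$ maps the right half plane onto $\mathbb{C}\setminus[-1,1]$, sends the imaginary axis onto $[-1,1]$, and has exactly two critical values, $\pm 1$.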
You can prove that you can't; you need some assumptions. Basically, I have to talk about a neighborhood of the tree, about the edges not being too close to each other, and I have to show that the lengths of these edges are well-behaved. Let me go through this relatively quickly. Given an edge of the tree, we take a neighborhood of it which is everything within a constant times its diameter. A big edge has a big neighborhood and a small edge has a small neighborhood; this is a Hausdorff-type neighborhood. When we do this for each edge, we get a neighborhood of the tree which is adapted: thinner around small edges and bigger around big edges. If needed, you can always add extra vertices to the tree; when you add extra vertices, the edges get shorter and these neighborhoods get thinner. That's something we often do in the applications. The main conditions, though: the edges should be pretty smooth curves, at least C2; usually they're analytic, often straight lines or circular arcs. We only want a bounded number of these arcs to come together at one point, and when they do come together, they have to form a pretty nice star, a bi-Lipschitz image of a radial star. Edges that don't hit each other cannot come very close to each other: if you know some things about extremal length or conformal modulus, for two edges which are not touching, the path family that separates them has to have a modulus estimate. You don't allow two edges that come very close together without touching. If you have an infinite tree in the plane with this bounded geometry, then we're happy. The other condition is that when you map one of these infinite connected components to the right half plane (remember these dots? before, I was taking unit dots on the plane and mapping them back; now I'm taking the vertices of the tree and mapping them forward), I want the images to be spaced apart: the distance between them should be at least pi. They can be much further apart; I can take spacings like e to the n, in absolute value, so they get much wider apart, and that's fine. But I want them spaced apart. This is a global condition. If you have a tract with angles greater than pi, then the mapping to the half plane behaves like a power less than one, and evenly spaced points here become much closer in the half plane, and that's bad. Roughly speaking, this condition says that each of the tracts is not thicker than a half plane at infinity; it's thinner than a half plane. I won't go into that too much, but that's the general idea. Then the folding theorem says: if you have these two technical conditions, the bounded geometry and the tau lower bound, then there's a quasi-regular G just as advertised. It exactly equals cosh of tau: on the light blue portion, you exactly conformally map over, then you apply the cosh function. That looks like the exponential picture, but it maps onto the line segment instead of the circle. And you have uniform bounds on everything: whatever bounds you had in the bounded geometry, that's all you need, and you get a quasi-regular mapping.
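Written out, with the constants left unspecified since the talk does not give them, the two conditions are roughly:
\[
  \text{Bounded geometry:}\quad
  N(e) \;=\; \{\, z : \operatorname{dist}(z, e) \le c \cdot \operatorname{diam}(e) \,\}
  \ \text{for each edge } e,
\]
edges uniformly $C^2$, a bounded number meeting at each vertex in a bi-Lipschitz star, and non-adjacent edges satisfying $\operatorname{dist}(e, f) \gtrsim \min(\operatorname{diam} e, \operatorname{diam} f)$.
\[
  \text{Tau condition:}\quad
  |\tau_j(v) - \tau_j(w)| \;\ge\; \pi
  \ \text{for adjacent vertices } v, w \ \text{on } \partial\Omega_j,
\]
where $\tau_j : \Omega_j \to \mathbb{H}_r$ is the conformal map of a complementary component.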
To get a holomorphic one, you have to use the measurable Riemann mapping theorem and fix this up. But that's how it works: you build a quasi-regular mapping like this by hand. The folding theorem says that you can specify the map out here exactly, and there's some way of filling in across the tree to make it quasi-regular. That's where the magic happens, filling in across the boundary. I won't say too much about that; maybe I'll just give one very brief hint about why the name folding comes in. If this is an example of my tree, these could be two different complementary components, say omega 1 and omega 2. Each of these maps to a half plane, both to the right half plane, but I draw them separately. What can happen is that one map sends this edge to something quite long while the other sends it to something shorter. We have to glue these together. Well, how do we glue a long thing to a short thing? The secret is that we can apply a quasi-conformal mapping so that this piece here maps to something that goes like this. What you can see is that some of these edges are being mapped to a spike that's internal to the half plane, and what's left over is exactly enough segments on the boundary to match up one to one with the other side. Even if we had 10 to the sixth segments over here and only 10 squared over here, we can fold up enough of them so that we have 100 on both sides and then we can glue. The difficult part is to do this folding picture keeping the quasiconformal constant uniformly bounded, independently of how bad the mismatch is. Even if it's 10 to the 10 to the 10 on one side and three on the other, you want to fold up all but three of these edges and leave three left so that they glue to the other side. I will not show you how you get rid of the other immense number, but basically on the other side what you get is not quite a slit picture, you get some complicated tree picture over here, but there are one, two, three things left on the boundary that can be glued to the other side. Creating these trees is where the term folding comes in: you're folding the half plane into itself so that you can do this. The point of the talk, though, was just to state the theorems, not prove them, and to give a few applications. One where it works out pretty well, the simplest one, is this: suppose we just want to build a function so that on the real line it goes to infinity as fast as I want. You give me a function like e to the e to the e to the z squared and say, I want a function with only two critical values that grows faster than that. The tree you build is this picture; the complementary pieces are basically half planes or half strips. Most of these pieces look like half strips, but there's one strip here which is getting narrower and narrower; we'll talk about that in a second. On the regular strips, we're just taking basically evenly spaced points. You can check this is bounded geometry because all of these edges are straight lines, they're nice and C2 where they meet, and they form a pretty nice star; nothing too bad about that. Where this thing is getting narrower, the dot spacing is about the same as the width: this distance and the distance across are about the same, and that gives you the bounded geometry separation. So this is a bounded geometry graph. What about the tau condition?
Well, if you take one of these half strips and map it to a half plane, the picture looks something like this; roughly, this is the exponential map. Evenly spaced points pull back to something which is basically growing logarithmically here; the evenly spaced points become much, much denser as you come in. So if you're taking evenly spaced points over here, what you get over there are gaps that grow exponentially as you move out. They're certainly bounded below: the shortest one is right here in the middle, and as you move up and down they get longer and longer. So the tau condition is also okay, and the folding theorem applies to this: there's a quasi-regular function which basically looks like the conformal map to a half plane on each of these pieces, and across the boundaries the magic of folding somehow lets you interpolate between them. The point is that on this one tract that narrows, when you map it to the half plane, you can think of each of these little boxes I've drawn as going to an annulus of a certain conformal modulus; basically, every time you pass one of these dots, you're going up by a factor of two in the image. By making the pinching as tight as you want, you can make this function grow as fast as you want; there's no limit to how quickly you can make it grow by just making this one tube very narrow. What about when you correct it? When you add in the correction factor, because this is only quasi-regular, you have to use a technical fact: QC mappings are Hölder, they grow at most like a power of z. We're taking something that's growing as fast as we want and modifying it by something that grows like a power, so the new thing still grows as fast as we wish; we just make the original quasi-regular thing grow a little bit faster than we wanted, and this is perfectly okay. This result was originally due to Merenkov by a different proof, but you can get it pretty easily from folding. Questions on this example? Okay, let me try to spend 10 minutes on each of the next two examples. What I told you so far is what we call vanilla folding, or just plain old folding: you map each tract of the function to a half plane. If you want to, though, you can map a tract to the left half plane instead. If you do that and you exponentiate, well, when you exponentiate the left half plane, you get a map to the punctured disk. The yellow tracts here are tracts which we instead map to the left half plane; as you go out here, the function is actually going to zero, so you can introduce asymptotic values. It's not too hard to move that zero to some other point: if you want to approach the point one half, for example, you can; you can approach any point inside the disk that you want. We can also introduce bounded components on which the mapping basically looks like z to the n for some power. So we can break the plane into pieces, map some pieces to the right half plane and exponentiate them, map some pieces to the left half plane and exponentiate them, and introduce high degree critical points elsewhere. There's a lot of freedom you have.
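The technical fact being invoked is standard quasiconformal distortion; the exact constants are not in the talk, so they are left as $C_K$ here:
\[
  \phi \ K\text{-quasiconformal on } \mathbb{C},\ \ \phi(0)=0,\ \phi(1)=1
  \quad\Longrightarrow\quad
  \tfrac{1}{C_K}\,|z|^{1/K} \;\le\; |\phi(z)| \;\le\; C_K\,|z|^{K}
  \quad (|z|\ge 1).
\]
So if $g$ is quasiregular and grows faster than any prescribed function, the straightened map $f = g\circ\phi^{-1}$ still grows essentially as fast, since $|\phi^{-1}(z)| \ge c\,|z|^{1/K}$; one simply asks the quasiregular model to grow a bit faster than the original target.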
Then the folding theorem says: if these boundary graphs satisfy the bounded geometry condition, and the R and L components satisfy the tau condition, the things we talked about before, then you can interpolate. You can use folding to take these model functions and get a quasi-regular map which looks like them away from the boundary; along the boundary the magic happens and you can glue them together. Most of the applications use this slightly more complicated version of folding. You can even, if you want, instead of putting a z to the n someplace, put a 1 over z to the n; we have this notion of an inverted disk component. That's also possible; this is something introduced by Kirill Lazebnik, and we're going to use it next. So: the singular points. f maps them somewhere; we're doing dynamics, so f keeps mapping them around, and we'd like to know what can happen. In the rational case, what can happen was proven by DeMarco, Koch, and McMullen: suppose you're given any finite set X and any mapping of X to itself. Then there's a rational mapping which mimics this. We don't know that we can get this exactly, but for each of the given points there's a nearby singular value of the rational function (it has to be a critical value), and the rational map sends this approximating set into itself with approximately the same dynamics. The precise statement is that you can find a perturbation of the singular orbit such that the action of the rational map on this new set is conjugate to the arbitrary map you're given on the original arbitrary set. It's interesting to know whether you can take epsilon equal to zero; I don't think it's known whether you can exactly hit the set you want, but you can get within epsilon of it for any epsilon. This appeared a couple of years ago, and Kirill read this paper and said, well, that's obviously true for meromorphic functions as well. The theorem is written in a paper by myself and Kirill, but it really is mostly due to Kirill: he had formulated and proven it, and I just stepped in at the end to grab some credit and make a few refinements to the proof. Basically, it says: if you're given any set X which is discrete, a countable set in the plane that doesn't accumulate anywhere but infinity, and you're given any mapping of X to X (the points can all map to one point, they can do anything you want), then there's a meromorphic function which does the same thing. For each point there's a nearby point associated to it which is in the singular orbit of the meromorphic function, and these singular points get mapped to each other in exactly the same way. So you give an arbitrary discrete set and an arbitrary map, and you can approximate it by the singular orbit of a meromorphic function so that the actions are conjugate to each other. Let me try to explain how folding gives you a picture like this. These dots are the given set. I basically want to draw a tube which connects them: I draw a curve which connects these points in some order (it doesn't really matter which order), I thicken that curve up a little bit, and then I make it a little extra thick around each of the disks. I build a simply connected region like this, and since it's simply connected it maps to the upper half plane.
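As a rough symbolic summary of the two statements just described, paraphrasing what is said in the talk rather than quoting the published wording:
\[
  \text{Rational case (DeMarco-Koch-McMullen): given finite } X \subset \widehat{\mathbb{C}},\ g : X \to X,\ \varepsilon > 0,
\]
there exist a rational map $f$ and an injection $\phi : X \to \widehat{\mathbb{C}}$ with
\[
  |\phi(x) - x| < \varepsilon, \qquad f(\phi(x)) = \phi(g(x)) \quad \text{for all } x \in X,
\]
with $\phi(X)$ forming part of the postcritical orbit of $f$. Transcendental case (Bishop-Lazebnik): the same conclusion with $f$ meromorphic, where $X \subset \mathbb{C}$ may now be any countable discrete set accumulating only at infinity, and $\phi(X)$ sits in the postsingular orbit.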
The way I've drawn it, the hyperbolic geodesic that goes from the origin to infinity almost passes through each of these points. It might just miss one, but basically these points all lie near the hyperbolic geodesic connecting the first point to infinity. When I map to the upper half plane, they basically lie on a vertical line, maybe sliding slightly off it. This picture is pretty simple: I have a bunch of points in the plane which are more or less vertically stacked, and I want to draw a bounded geometry tree that I can apply folding to. So I do something like this: I put a disk around each of these points. Inside each disk I can make the map look like z to the n or z to the minus n. Instead of having the critical values exactly at zero, I can place them at any point of modulus less than one (that looks like an 11 on the slide; it's supposed to say modulus less than one). So in these disks I can place any value of modulus less than one, whereas if I want a singular value to be outside the unit disk, I put a pole there and then perturb the pole, so that instead of having its critical value at infinity, it has it outside the disk. Then it's easy to add points to make this into a bounded geometry tree; it's just a matter of drawing the pictures. It looks pretty much like the picture I showed you before, but all of these are going to be nice R components, and these are going to be D components or inverted D components. We can build a quasi-regular function which does what we want here. Then we take this picture and map it back to the other picture, so all this complicated stuff is going on inside here. Then you cut the outside up into pieces, and these are all R components. This is a pain, and it's very messy; you can see that it's hard for me even to draw this picture. But the point is that I'm just drawing it. Even if I have to take several pages of a paper to describe it and show blow-ups of what particular pieces look like, I'm not invoking any theorems here; I'm simply drawing pictures of what I want to happen. I'm checking that these pictures have the bounded geometry property and the tau property, which are very easy to check by hand, although a little tedious. In any case, I can construct a quasi-regular, quasi-meromorphic map which has exactly the behavior you want: it uses the given set X, it exactly has this orbit, and it maps it around the way you want. The point is we haven't made any mistake yet; we haven't had to use epsilon, we haven't had to approximate. This construction lets you build a quasi-meromorphic map which exactly does what you want, on the given set. The problem is that to make it holomorphic, you've got to correct it by the measurable Riemann mapping theorem. You have this phi here, and phi moves everything a little bit. Even if you take phi to be almost the identity, which you can, it still moves things a tiny bit; it's not exactly the identity map because G is not exactly holomorphic. And this breaks everything. You had G mapping X to X, but now F has to be the composition of G with phi, and what F does is map the phi-preimage of a point to the image point. It's not mapping X to itself; it's not preserving the set. Phi messes us up here.
What Kirill saw was how to fix this using a fixed point theorem. What Kirill said was: we can actually make G map X to Y. Instead of making X go to X, make the set X go to some other set Y. We have enough freedom; we didn't have to make X go to X, we can make X go to Y. The black points are the X's; think of Y as being a choice of target inside a disk around each of them, the red points here. We can build a quasi-regular mapping which sends the given black X's to the red Y's. The black X's are given to us, the red Y's we choose, and they can wander all around inside these disks, they can fill in the whole disk, but the black point stays in the center. Now, when we quasi-conformally fix this map G, we introduce the phi function. Phi maps some yellow point to the black point, G maps the black point to a red point, and so the composition F maps the yellow point over to the red point. What we want is black going to black, or red going to red, the same color going to the same color, so that we have a mapping either from X to X or from Y to Y. The only way we can make that happen is if the yellow points are equal to the red points. If this red and this yellow point were actually the same point, then phi maps it to the black point, G maps it to the red point, and F would be mapping red to red: F would be mapping the approximating set into itself. How do you get the yellows to equal the reds? Well, this is a fixed point theorem. We can take phi to be as close to the identity as we want; that's one of the things that comes out of the folding construction. The red points fill in this whole disk; the yellow points, which are the phi-preimages of the black ones, fill in a smaller region inside the disk, they don't wander very far from the black. So for each choice of red point we get a yellow point, and the yellow points stay inside the disk. By the Brouwer fixed point theorem, the mapping from the red point to the yellow point has a fixed point: there's some choice for which the red point and the yellow point are actually equal. And you can do this simultaneously on every disk at once; there's an infinite dimensional version of the fixed point theorem, and you can make this all work. This was Kirill's observation. I think Martí-Pete and Shishikura also independently came up with a very similar argument. It's not something I would have ever thought of. In any case, this is how you make it work: you follow an intricate bare-hands construction of the quasi-regular example with the measurable Riemann mapping theorem and the fixed point theorem. That's very nice. In the last few minutes, let me tell you a little bit about triangulations. I'd intended to spend about 10 minutes on this, but I don't think I can do that; we'll have to go a little faster. Technically, you are... Is there a question? Yes, speak up, I can't quite hear you. Technically, you are kind of over your time, but you can take a couple more minutes to finish, of course. All right. Let me just take five minutes; I've put on a timer, so I'll give myself five minutes, because I'm very generous to myself.
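Schematically, and this is my attempt to transcribe the argument as described, with made-up notation, the fixed point scheme is: for each choice of targets $y = (y_x)_{x \in X}$ with $y_x \in \overline{D}(x,\varepsilon)$, the construction gives a quasiregular $G_y$ with $G_y(x) = y_{g(x)}$, and straightening gives a quasiconformal $\phi_y$, close to the identity, with $f_y = G_y \circ \phi_y$ meromorphic. Define
\[
  \Psi(y) \;=\; \big(\phi_y^{-1}(x)\big)_{x \in X}
  \;\in\; \prod_{x \in X} \overline{D}(x,\varepsilon).
\]
A fixed point $y^* = \Psi(y^*)$ (Brouwer, or an infinite-dimensional version over all disks at once) gives
\[
  f_{y^*}(y^*_x) \;=\; G_{y^*}\big(\phi_{y^*}(y^*_x)\big) \;=\; G_{y^*}(x) \;=\; y^*_{g(x)},
\]
so $f_{y^*}$ maps the approximating set $\{y^*_x\}$ into itself with exactly the prescribed dynamics.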
If you take equilateral triangles and glue them together, you get a surface, and this surface has a natural conformal structure because the triangles do. The question is which surfaces can be built this way. It's a famous result that a surface can be built from equilateral triangles if and only if it has a holomorphic map to the sphere with three critical values, usually taken to be zero, one, and infinity. If you have such a mapping, you can think of it this way: the real line contains the critical values zero, one, and infinity, and it divides the sphere into the upper half plane and the lower half plane. These are triangles where one edge is the edge from zero to one, another is the edge going out to infinity, and the other is the edge from infinity back to zero. The pullbacks of these are then naturally triangles on the surface. It turns out that not every compact surface has such a triangulation: if you're gluing together finitely many triangles (maybe this edge gets identified with that one), there are only countably many ways to do it, but there are uncountably many different compact Riemann surfaces, so not all of them can be done this way. It was Belyi who characterized when they can be done; that's why we call this a Belyi function. A compact surface has a Belyi function, equivalently such a triangulation, if and only if it is algebraic, that is, you can write it as the zero set of polynomials with algebraic numbers as coefficients. This is related to Grothendieck's theory of dessins d'enfants. So for compact surfaces, the answer is sometimes yes and sometimes no. What Lasse and I were interested in was the non-compact case. The plane obviously has an equilateral triangulation; I'm showing you a picture of it. Many of you are familiar with triangulations of the unit disk; this is also an equilateral triangulation. It doesn't look equilateral to our Euclidean eyes, but in terms of the conformal geometry these are in fact all equilateral triangles: there's a reflection that maps each one to the next one, and that's only possible when you have equilateral triangles. The question was which non-compact surfaces have an equilateral triangulation. The idea is based on the folding theorem: if you're given some piecewise smooth Jordan domain like this, and we divide up its boundary in a bounded geometry way, we can apply the folding theorem to create folds so that, if you were to map this to the disk, all these boundary edges would have about equal harmonic measure; quasi-conformally, you can actually make them have exactly equal harmonic measure. What that means is that the inside of this can be mapped to the outside of the disk so that all these points end up being equally spaced, and when you apply the Joukowski map, this maps to the outside of the segment. This is the same kind of picture we saw before. What this means is that each individual boundary edge here maps onto the segment, say from minus one to one. And so if I had two pieces built this way, this piece maps onto the segment from minus one to one, and this piece also maps onto that segment, and everything is arranged so it's continuous: we can glue a point here to a point here so they go exactly to the same point. So pictures like this can be glued together to form a bigger picture. We can glue things together like this.
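For the record, the classical statement being referred to, stated here from memory, is Belyi's theorem: a compact Riemann surface $X$ admits a holomorphic map
\[
  B : X \to \widehat{\mathbb{C}} \quad\text{with critical values contained in } \{0, 1, \infty\}
\]
(a Belyi function) if and only if $X$ is defined over $\overline{\mathbb{Q}}$, i.e. is the zero set of polynomials with algebraic coefficients. Pulling back the upper and lower half planes, viewed as two triangles with vertices $\{0,1,\infty\}$, gives the equilateral triangulation, and conversely an equilateral triangulation produces such a map.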
And this is quasi-regular here, and this is quasi-regular here, and when you glue them together, you get a quasi-regular map on the whole thing. That's something you can't do with holomorphic functions, but you can do it with quasi-regular ones. So the idea is: take your favorite surface and cut it up into pieces. On each piece you can do this construction, building a quasi-regular mapping which is basically a quasi-regular Belyi function; you can do that on the adjacent pieces as well, and then glue them together. What you get is a quasi-regular Belyi function on the whole thing: a quasi-regular mapping to the sphere that only has critical values at the three points. Now you solve the Beltrami equation and you get a holomorphic map. The problem is that when you solve the Beltrami equation, you end up on a different surface; you've changed it. So the whole secret in the non-compact case (I won't go through this in detail) is that when you change a compact piece of the surface, you can re-embed it into the original surface. If this were the entire surface and you solved Beltrami, you would get a different surface; but in the non-compact case, by wriggling the boundary a little bit, the new surface re-embeds in the original one. And then this picture repeats: you can make the quasi-regular construction again. Oh, that's my time. So let me just finish off here; sorry for misjudging the time. Basically, you sweep the problems off to infinity. You can't do it in the compact case, but in the non-compact case, every non-compact surface has a Belyi function. This has a couple of nice consequences; the one I really wanted to mention today is that when you have any, say, open set on the sphere, you get a finite type mapping of this region into the sphere, and this gives new examples of dynamical systems of finite type. So I apologize for running over; that's bad manners on my part. We'll finish there, and thank you for your patience. So thank you, Chris, for your talk. Do we have questions, live or online? I have a couple of questions, but maybe the live audience first. I'll take the microphone to the questioner. Yeah, you can ask, although probably you know the answers. Are these the kinds of questions that you really don't know the answer to already? Are you able to define any natural Galois action on triangulations in the non-compact case? I'm sorry, I'm having trouble hearing you. Let me turn up my volume to the maximum. Please ask again. In the non-compact case, are you able to define any natural Galois action on equilateral triangulations? Is it possible? Do they have any arithmetic? Yeah, I'm not aware of any arithmetic. As a little background for everyone else (I mentioned this very briefly; I meant to spend a little more time on it): in the compact case, the surfaces that come from equilateral triangulations are very closely related to algebraic numbers and algebraic number theory, and there's a connection with the Galois theory of number fields. In the non-compact case, we still have these equilateral triangles, but now a non-compact surface is a countable union of triangles instead of a finite union of triangles. My expectation is that it would be really neat if there were some kind of analogous algebraic theory, some kind of maybe Galois theory for transcendental extensions of the rationals. But I'm not aware of that; I don't know how to make that work.
I haven't thought really seriously about it, but it would perhaps be a good idea to be trapped in an elevator for a few hours with an algebraic number theorist and force them into explaining some of that. So I'm not aware of an analog of that for the non-compact case, but that's more a case of my own ignorance than of it not existing. Thank you. I think that's a really fascinating question. So I think there was a question online. Is that correct? Yes. I wanted to ask maybe a couple of questions about the post-singular set. So one question is whether you really need meromorphic functions in order to get the post-singular set, or is it just a matter in effect of the… So could you deal with entire functions as well? Yeah, it might be possible, but the proof would have to have some new ideas in it. I don't think the standard version of quasi-conformal folding would work. The point about quasi-conformal folding is that the original version was designed to deal with bounded singular sets. So really, we're very good at placing critical values at, say, zero and one, or at points not too far from zero and one by perturbing them. If you had a discrete set which stayed bounded, in other words finite, then you could certainly do it without using any poles; and we could do it with rational functions as well, by the theorem we quoted from DeMarco, Koch, and McMullen. If the orbit that you're trying to match is unbounded, then the standard folding construction will not work. The standard folding construction basically places the critical values inside these D components, which are always mapped to the unit disk or some fixed multiple of the unit disk. What you would need is a version of folding where you allow different D components to map to bigger and bigger disks, and then when you glue them to the R components you would have to take that into account, and the boundaries of the R components would also have to be growing larger and larger. Logically, I guess that can work, but it would not be the standard thing; you would have to write down a new version. Now, Kirill Lazebnik and Jack Burkart, two of my students, have basically done this. They've written down more general folding-like constructions where you glue together annuli which are getting bigger and bigger and bigger, a folding-type construction where the boundaries are going off to infinity. Perhaps you could use that construction to place singular values anywhere in the plane that you wanted. I haven't thought about that and I haven't asked them about it, but that seems quite plausible. But if you want to make a small perturbation of the vanilla quasi-conformal folding, it's easier to insert a pole and perturb the pole: then you hit singular values near the origin with one kind of D component and you hit singular values near infinity with a second kind, the inverted D component, and that only requires a small change to the standard presentation. So that was a technical answer, but you're an expert, so I was addressing you. Hopefully I didn't lose the rest of the audience in the same way. I don't know whether there's time for a second question. Well, my understanding is that there's a coffee break, so people can go off and get coffee and we can just chat ourselves if we want to. I think the next talk is at half past.
So you mentioned whether you can make the postsingular set actually equal to the given set, right, instead of just getting an approximation. I guess for rational maps it's clear that you can't, because there are only countably many postcritically finite rational maps. But in the transcendental case, I guess there's no reason to expect that you couldn't. I mean, one might think, if one were naive, that since you're already using the fixed point theorem to hit certain values exactly, you might hope to be able to hit the original values exactly as well. But I think that doesn't work; is there an easy way to say why? No. Okay. Well, there might be, but I don't know. I'm almost tempted to ask Kirill to jump in, if he's in the audience, to address it, since, as I said, I already gave him credit for the fixed point theorem and he's applied the idea to some other applications, so he's probably a little more familiar with it. As I understand it, in this argument there's no evident way of actually hitting the set you want, only a small perturbation of it. But maybe we're just one clever idea away from fixing that. I mean, maybe if you built two functions and composed them, you could get the thing to go back to where it was before, or something; I don't know. But yeah, that definitely seems like something that's worthwhile. This is just something which was done in the last couple of years, so it hasn't really been followed up on yet, and I think we don't know not because it's impossible, but just because we haven't tried hard enough yet. Okay. Thank you very much. So clearly we cannot discuss with the speaker over the coffee break, so if there are any other quick questions, maybe we can take them now. There is one more question down there. I'll run over to the questioner. Are you getting more of a workout than I am? Thank you. Yeah, I was just curious, maybe I'm just missing it, but was there an upshot connecting the Belyi triangulations to these meromorphic transcendental functions? So those are two separate applications. One was specifying the postsingular orbit: the hope was that the audience would be interested in the orbits of the singular points and how well you can specify them. The second application was to the triangulations of surfaces. They're really separate applications, but they both have their foundations in the idea of quasi-conformal folding: build a quasi-regular model of what you want, and then use a QC correction map to change the quasi-regular model into a holomorphic model. In both cases, you have to come up with some kind of argument that the fixing to make it holomorphic doesn't destroy the property you want. But the two applications are separate. Probably it would be best to give an hour-long talk on each of these applications and explain them fully, as opposed to trying to do a 10-minute summary of each one; that's perhaps a little bit ambitious. Okay. Thank you. You're welcome. So let's thank our speaker again. Thank you. Thank you.
I will introduce the Speiser and Eremenko-Lyubich classes of transcendental entire functions and give a brief review of quasiconformal maps and the measurable Riemann mapping theorem. I will then discuss tracts and models for the Eremenko-Lyubich class and state the theorem that all topological tracts can occur in this class. A more limited result for the Speiser class will also be given. I will then discuss some applications of these ideas, focusing on recent work with Kirill Lazebnik (prescribing postsingular orbits of meromorphic functions) and Lasse Rempe (equilateral triangulations of Riemann surfaces).
10.5446/57318 (DOI)
So indeed, I would like to repeat Dierick's comment about what a pleasure it is to see so many of my friends together again for the first time in a long, long time. I'm going to give a lecture which... well, of course, this room is filled with experts, and as a result one hesitates to speak about things that they know and have known for years, and in many cases probably know better than I do. But there are also some non-experts here. So for today I'm going to try to cover Teichmüller spaces and two big examples of the Thurston pullback map: the skinning map, and the pullback map for post-critically finite branched covers. And if I get through that in 50 minutes, that won't be doing so badly. So, Teichmüller spaces. Let S be a Riemann surface. In the actual applications of Thurston's theorem, the Teichmüller spaces are only Teichmüller spaces of Riemann surfaces of finite type: complements of finitely many points in a compact Riemann surface. But I think that in the context of this conference it is a much better idea to give the general definition, which applies to any Riemann surface, because trying to generalize Thurston's theorems to more general settings, such as for instance transcendental functions, is going to require looking at Riemann surfaces of infinite type. It's probably a good idea to give the definition in that generality, in addition to which it's not all that much harder. So let S be a Riemann surface, but you ought to be thinking of something like P1 minus an infinite set of points, with finitely many accumulation points, or perhaps accumulating on a simple closed curve; there are many other possibilities. The Teichmüller space modeled on S is the set of pairs (X, phi), with X a Riemann surface and phi from S to X a quasi-conformal homeomorphism, up to an equivalence relation. The equivalence relation is: (X1, phi1) is equivalent to (X2, phi2) if and only if there exists an analytic isomorphism F from X1 to X2 such that first phi1 and then F, that is F composed with phi1, is... and now there's a hesitation as to whether one wants to write isotopic or homotopic. It's a theorem, but not a trivial theorem, that they're equivalent. So I will write isotopic to phi2, but this could be just homotopic; it is a theorem of surface topology, but not an easy one, that those two notions are equivalent. What kinds of structures does this point set have? The Teichmüller space of S is a Banach analytic manifold, of course finite dimensional if S is of finite type. And it is a metric space; actually, it's a metric space in lots of different ways, and there are people who are fond of different metrics on Teichmüller space, but I have my favorite: the Teichmüller metric. That's the one that is relevant to all applications of Teichmüller theory that I know of. Well, that isn't quite true: it turns out that the Weil-Petersson metric is also relevant to some aspects, but I will not be speaking of that one. The Banach analytic structure I'm going to temporarily ignore. Could you speak louder? Analytic isomorphism. At least for Riemann surfaces of finite type, I think of it as analytic data, the complex structure of the Riemann surface, and combinatorial data, the marking phi. It isn't quite true that the marking is just combinatorial, except in the case of Riemann surfaces of finite type. Now, let me describe the metric.
The metric is sort of an obvious one. The distance between (X1, phi1) and (X2, phi2) is the infimum of log K(f), where, in the same drawing as before, f from X1 to X2 is a quasi-conformal homeomorphism such that first phi1 and then f is isotopic to phi2. The point about quasi-conformal homeomorphisms is that a quasi-conformal homeomorphism with dilatation K(f) equal to 1 is in fact analytic, and therefore, if the distance between two points of Teichmüller space is zero, they do in fact coincide; it's the very definition of the equivalence relation. The word quasi-conformal is essential for the present purposes; it could just be a homeomorphism if we were dealing with surfaces of finite type. But it's important to realize, at least for this conference, that if you take P1 minus a sequence converging to a point x infinity, that does not correspond to just one Teichmüller space. It has one topological model, but it corresponds to uncountably many different Teichmüller spaces. The reason for that is a theorem: if gamma is a curve on S, and l sub tau of gamma associates to tau = (X, phi) in the Teichmüller space of S the length of the geodesic on X homotopic to phi of gamma, then tau maps to log of l sub tau of gamma is Lipschitz of ratio one. So now, if you have a sequence accumulating to a point, you can look, for instance, at these curves. Each one has a length and a log length, and they form a sequence of numbers. For two Riemann surfaces in the same Teichmüller space, the differences of the logarithms of the lengths have to form a bounded sequence, an element of little l infinity. But those sequences themselves can be anything you like. And so there are huge numbers of Teichmüller spaces; in that topology they somehow look like little l infinity over little l1. And that's a bizarre space: little l infinity, which is a non-separable Banach space, divided by little l1, the space of summable sequences. That's a strange, highly non-Hausdorff space, and so forth. This is designed to make you realize that you can't just loosely talk about Teichmüller spaces of infinite type Riemann surfaces without being really careful about what you're doing; these Teichmüller spaces are really different. Let me say one more generality about Teichmüller space, and this is really the only way in which I am going to use the complex structure. The tangent space at tau to the Teichmüller space based on S is the dual of Q1(X), the space of integrable holomorphic quadratic differentials on X. A quadratic differential is something which in local coordinates is written q equals alpha of z, dz squared. Then the absolute value of q is the absolute value of alpha of z times dx dy, and the L1 norm of q is the integral over X of the absolute value of q, which is naturally a measure that you should think of as the area. So these are the quadratic differentials of finite area. There's much to say about trying to understand the geometry of quadratic differentials; understanding that geometry in detail is essential to understanding Thurston's theorems. But just at the moment, I'm going to leave it at that. Okay, so this is my introduction to Teichmüller theory, and this is where I ask for questions.
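To write the objects just introduced in the usual symbols (the factor of one half is a normalization the speaker alludes to later):
\[
  d\big((X_1,\phi_1),(X_2,\phi_2)\big) \;=\; \tfrac12\,\inf_f \log K(f),
  \qquad f \simeq \phi_2 \circ \phi_1^{-1} \ \text{quasiconformal},
\]
\[
  q = \alpha(z)\,dz^2, \qquad \|q\|_1 = \int_X |\alpha(z)|\,dx\,dy,
\]
and the Lipschitz property of the length function is Wolpert's inequality,
\[
  \ell_{\tau_2}(\gamma) \;\le\; K(f)\,\ell_{\tau_1}(\gamma),
  \quad\text{so}\quad
  \big|\log \ell_{\tau_1}(\gamma) - \log \ell_{\tau_2}(\gamma)\big| \;\le\; \inf_f \log K(f),
\]
that is, Lipschitz with ratio one with respect to log K (ratio two with the one-half normalization of the metric).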
You're supposed to understand every word I have written; this is not supposed to be mysterious. Now is the time to speak up. Can you hear me in the back? Is there a question? Okay, I did not get a single question, which makes me wonder whether I'm talking into a void or not. Well, to give you an idea of what you might have: an infinite set with an infinite set of accumulation points, which is worse than a set with a single accumulation point. The answer is, yes, trying to understand the multiplicity of possible Teichmüller spaces modeled on P1 minus a Cantor set is one of these many things you can try to understand, and they're probably highly relevant. It could be any closed set, but specifically a countable set accumulating on finitely many points is sort of the next step in understanding Teichmüller theory, well, Thurston theory, for transcendental functions. That's the example that I'm trying to understand at the moment, and I think there are already in that setting a myriad of difficult problems. Okay, so this was my introduction to Teichmüller theory; I'm going to use this language and these words constantly. Oh, I'd like to point out one thing in the setting of infinite dimensional Teichmüller spaces: a Teichmüller space modeled on a Riemann surface of infinite type is not modeled on a separable Banach space, so it is not a reflexive Banach space. It is true that the tangent space is the dual of the integrable holomorphic quadratic differentials, but it is not true that the quadratic differentials are the dual of the tangent space, except in the finite dimensional setting. I mentioned that when thinking about the topology of even these Teichmüller spaces you have to think about little l infinity, and little l infinity is not a separable Banach space. So you cannot expect these to be the kinds of spaces where duality just works, where you say, oh yes, the dual of the dual and so forth; the dual of the dual is not the space you started out with. And there's one more thing that I really ought to say: this gives you a norm on the tangent space, the dual norm of the L1 norm on the quadratic differentials, and when you have a norm on the tangent spaces, you automatically get a metric on the space itself. The metric on the Teichmüller space of S is the metric associated to this infinitesimal metric on the tangent bundle of Teichmüller space, dual to the L1 norm on Q1(X). It is true that the distance that comes from these lengths of tangent vectors is that infimum of log K(f); there might be a factor of one half depending on normalizations. I believe that this last statement is due to Richard Hamilton, but I'm not really sure. Maybe just one more word: it's all very well to talk about Q1(X), but even in the finite dimensional case, the norm on Q1(X) is a really entertaining object. There you have this finite dimensional space Q1(X), and there you have the unit ball of its norm, and you might imagine running your hand over this unit ball and feeling how bumpy it is. Its bumpiness, as it turns out, reflects the geometry of the underlying Riemann surface: the Himalayas running along the bumpiest spots give you a picture of the underlying Riemann surface.
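One standard way to write the duality and the infinitesimal norm being described here, in the usual notation rather than the speaker's blackboard notation:
\[
  T_\tau\,\mathcal{T}(S) \;\cong\; \big(Q^1(X)\big)^{*},
  \qquad
  Q^1(X) = \{\, q = \alpha\,dz^2 \ \text{holomorphic on } X : \|q\|_1 < \infty \,\},
\]
and a tangent vector represented by a Beltrami differential $\mu$ has Teichmüller norm
\[
  \|[\mu]\| \;=\; \sup\Big\{\, \operatorname{Re}\!\int_X \mu\, q \;:\; q \in Q^1(X),\ \|q\|_1 \le 1 \Big\},
\]
the dual of the $L^1$ norm; the Teichmüller metric is the path metric induced by this Finsler norm, up to the same factor-of-one-half normalization.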
It's something which was developed mainly by Royden many years ago, and I've spent my life looking at this unit ball in the space of quadratic differentials. It's an exciting space in itself. This finite dimensional Banach space isn't just, oh yes, a Banach space, all norms are equivalent; the details of the norm really matter. Okay. Maybe, considering the audience that is here, I'll start out with the skinning map, which probably you don't know as well as the Thurston pullback map, but it's a most entertaining map. Suppose Gamma in PSL2(C) is finitely generated and discrete; that's called a Kleinian group. Then there is a great theorem of Ahlfors. But first: P1, which I will think of as the boundary of H3. Of course, H3 is a complete metric space, so it doesn't really quite have a boundary, but there are many models of hyperbolic space: the ball model, the upper half space model, the hyperboloid model, and so on. And it is absolutely true that the set of ends of geodesics is naturally the Riemann sphere; it has a natural complex structure. It doesn't have a natural metric, but it does have a natural complex structure, and every automorphism of H3 extends as an automorphism of its boundary, although the word automorphism means different things in the two cases: in one case it's the isometries, and in the other it's the analytic isomorphisms. Then Ahlfors proved: P1 is equal to the limit set of Gamma, disjoint union with the regular set of Gamma, the limit set and the regular set, which you should think of as just like dividing the Riemann sphere into the Julia set and the Fatou set. The limit set is equal to P1 intersected with the closure of the orbit Gamma x, for any x in H3; it's the set of accumulation points of the orbit. You should think of this as very analogous to the closure of the set of inverse images of a point. Laurent, speak louder. You are absolutely right, I want to write the closure. You could also take x in P1, except for some exceptional cases. And lambda of Gamma is compact, so omega of Gamma is open, and Ahlfors proved that omega of Gamma over Gamma is a finite union of Riemann surfaces of finite type. This theorem is the other side of the Sullivan dictionary: on the other side of the dictionary you have the no wandering domains theorem. It turns out that the proofs are almost identical; the proof of the no wandering domains theorem and the proof of the Ahlfors finiteness theorem are almost identical, and this is perhaps the main entry, and certainly the inspiring entry, of the Sullivan dictionary. Let us suppose that H3 over Gamma is geometrically finite, which is something I am not able to fully explain right now, though I guess I sort of could: it has a finite-sided fundamental domain, a finite-sided Dirichlet fundamental domain. And suppose that the stabilizer of each component of omega of Gamma is quasi-Fuchsian: a quasi-conformal conjugate of a Fuchsian group. It is easy to say quasi-Fuchsian, but I would like to emphasize that quasi-Fuchsian groups contain a world of geometry.
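In symbols, the objects and the finiteness theorem just stated are, as a standard summary rather than a quote:
\[
  \Lambda(\Gamma) \;=\; \overline{\Gamma\cdot x}\,\cap\,\partial\mathbb{H}^3
  \quad (x \in \mathbb{H}^3 \ \text{arbitrary}),
  \qquad
  \Omega(\Gamma) \;=\; \mathbb{P}^1 \setminus \Lambda(\Gamma),
\]
and the Ahlfors finiteness theorem: if $\Gamma$ is finitely generated, then
\[
  \Omega(\Gamma)/\Gamma \;=\; S_1 \sqcup \cdots \sqcup S_n,
\]
a finite union of Riemann surfaces of finite type (finite genus, finitely many punctures).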
And you can't just dismiss this definition as being fully explanatory. So here is an attempt at drawing what this group looks like. Here is hyperbolic space, or perhaps rather its boundary, with H3 inside. Here is omega of Gamma. And you can see that this is a quotient: this is, excuse me, omega of Gamma over Gamma, a finite union of Riemann surfaces of finite type. For each component, you have the components of omega of Gamma covering it, which I would call leopard spots, each mapping to the corresponding Riemann surface. So this is my picture of the setup of Thurston's hyperbolization theorem for Haken manifolds. You then have the following rather extraordinary construction. Yes, they're quasi-discs; they're quasi-discs because I have put into my hypothesis that the stabilizers of the components are quasi-Fuchsian. That's a strong restriction; there are lots of other possibilities, but that's the restriction that I'm working under. If you choose some particular one of these components, say this one, S1, that particular spot, spot one, you can look at H3 divided by the stabilizer of that spot. And what does it look like? On one side, you see exactly the green Riemann surface. And what's on the other side? Well, the other side corresponds to the whole outside of S1 divided by your quasi-Fuchsian group, and in particular it carries the structure of all these other spots. It is also a Riemann surface of genus 3, as I have drawn it, but it has a whole extraordinary structure. Ten minutes. I don't agree; I started, and it isn't my fault, five minutes late, and I do want to get to the second example here. Now, suppose that I take a point tau in the Teichmüller space of this particular spot; that's a finite dimensional Teichmüller space. Well, actually, in the Teichmüller space of omega of Gamma over Gamma, so that's the product of the Teichmüller spaces of all these surfaces. That gives a complex structure on every quotient surface, and therefore a complex structure on all of these spots. But with the hypothesis that the group is geometrically finite, the limit set has measure zero, and so it's actually a complex structure on the whole remainder of the sphere, in particular on all these spots, except that here there's no good reason to believe that the spots are simply connected. And therefore it gives a complex structure to this surface. And this defines the map: taking the complex structure here and looking at the mirror surface, the other side, the wild surface on the other side, gives you a map sigma, sigma sub Gamma perhaps, from the Teichmüller space of omega of Gamma over Gamma to the Teichmüller space of omega of Gamma over Gamma star, the complex conjugate surface. This is the wildest map you've ever heard of, and this is the basic construction of Thurston's hyperbolization theorem. So I'm starting with a complex structure on each of these surfaces; I'm lifting it to a complex structure on each spot; that gives me a complex structure on the Riemann sphere, because the complement has measure zero; and then I divide out by the stabilizer of one spot. Of course, the quotient of that one spot by its stabilizer is just the complex structure on that spot.
But dividing out the outside by the group gives you some completely different complex structure on the conjugate surface. You do this for each one of these surfaces, and you get new complex structures on the conjugate surfaces, corresponding to the spotted and striped structure of these surfaces. And if you understand even just this from today, you're already doing reasonably well. This map, the skinning map, is the fundamental object in the construction of Thurston's hyperbolization theorem. The transpose of the derivative at tau of the skinning map sigma sub Gamma, applied to a quadratic differential: what does it do? It takes a quadratic differential on each of these Riemann surfaces, lifts it to all the spots, then pushes it forward from all the spots to all of these Riemann surfaces, giving you in the final analysis a quadratic differential on those surfaces. And the proof of Thurston's theorem consists of showing that this operator has norm less than one, with the norm depending only on the length of the shortest geodesic for the complex structure tau. This is immensely hard: in the proof of Thurston's theorem, 90% of the difficulty is showing that the norm of the transpose of d tau sigma Gamma is less than one by a constant depending only on the position of tau in moduli space. Unfortunately, this is extremely difficult to prove. It is true, but it's not obvious. Now, what is the time? It's 9:57, so I'm going to have to stop here. Yeah. Okay, well, I'm going to stop here. I'm afraid I was sketchy at the end, and I never got to the second entry of my plan. So I think we have time for some very quick questions; if not, there is a little break now and we can discuss in the lobby. Please remember that I'm a little deaf, and if you want to ask a question, you have to speak loudly; use the microphone. Is the image of this map bounded? So I believe... well, I do not understand the bounded image theorem. I don't know how to prove it, but I do know how to prove this statement; I do not know how to prove the bounded image theorem. And I hope that the bounded image theorem actually is a theorem, but, well, I don't know a proof. Well, it is obvious, or almost obvious, it's really a generality, that the direct image operator has norm less than or equal to one. In this case, it is essentially obvious that it has norm less than one at every point. On the other hand, what is really difficult is showing that that norm depends only on the position of tau in moduli space. It certainly is true, but for one thing it isn't true without a hypothesis: to show that that norm is uniformly less than one, you need the hypothesis that there are no embedded annuli. Let me try to draw you a candidate annulus. There is a closed curve contained in one image of a spot, but that image of the spot is not simply connected, and I call it a stripe. Then that curve is one boundary curve of an annulus inside this three-dimensional portion; it isn't a handlebody, it's a surface cross the interval zero one. And in order to have the norm of this operator be less than one, it is necessary that there be no such things. Therefore the bounded image theorem has to depend on the hypothesis that these annuli do not exist.
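To indicate what the operator being described looks like in formulas, as a schematic transcription with my own notation: for a holomorphic covering $\pi$, the push-forward of an integrable quadratic differential is
\[
  (\pi_* q)(z)\,dz^2 \;=\; \sum_{\text{branches } g \text{ of } \pi^{-1}} q\big(g(z)\big)\, g'(z)^2\,dz^2,
  \qquad \|\pi_* q\|_1 \;\le\; \|q\|_1,
\]
and the transpose of the derivative of the skinning map is such a lift-then-push-forward operator. The content of the hard estimate is the uniform strict contraction
\[
  \big\|(d_\tau \sigma_\Gamma)^{*}\big\| \;\le\; c\big(\ell_{\min}(\tau)\big) \;<\; 1,
\]
where $\ell_{\min}(\tau)$ is the length of the shortest geodesic, so the constant depends only on the position of $\tau$ in moduli space.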
I think as much as I can answer your question right now, that's my answer. Under that hypothesis, the contraction is uniform, yes. Are there any more questions? Well, I guess it's due to say that for any detail and much, much, much more, I refer you to the three books of the speaker, right? Exactly on the Teichmuller theory and applications. So, are there some questions by which I will show her in the chat? Yes, so maybe this is true. We should ask, we should check whether there are any questions from the remote participants. No? Nothing. So who's speaking? Let me see. I'm very pleased to see... You should look at there so that they see you. But he wants to see who is... Ah, okay. So there are some questions in the chat. So I have a lot of friends there. I see Sean, I see Raluca, I see Remus, I see... I see Sabia, lots and lots of good friends of mine. Hello Pascal. Kuntau. Hi Kuntau. Hi Remus. Hi Raluca. Hello, hello. Hi. Hello Sabia. Hello Lasse. Hello. Hello Pascal. Bonjour Pascal. Hello Kuntau. Hello Walter. Okay, so if there are no more questions from here or from there, let's thank the speaker again. Thank you.
W. Thurston's theorems almost all aim to give a purely topological problem an appropriate geometry, or to identify an appropriate obstruction. We will illustrate this in two examples: --the Thurston pullback map, to make a rational map from a post-critically finite branched cover of the sphere, and --the skinning lemma, to find a hyperbolic structure for a Haken 3-manifold. In both cases, either the relevant map on Teichmüller space has a fixed point, solving the geometrization problem, or there is an obstruction consisting of a multicurve.
10.5446/57319 (DOI)
Thank you very much for the opportunity to speak here. It's wonderful to meet so many friends. So in this talk, I will talk about the recent work with Lasse Rempe and James Waterman in which we construct a transcendental entire function which has some wandering domains that form these Lakes of Wada. So for the people that are not familiar with what Lakes of Wada are, in this construction we start with a bounded domain in the plane, like this black set here, which you can think of as an island in the sea, right? And then this island has a certain number of lakes, in this case this dark gray lake and this light gray. These are just like subsets of your starting domain, which have disjoint closures. And the inhabitants of this island, they're not very happy because the ones living towards here in the center, if they want to reach the sea then they need to walk a long distance, right? So for their convenience, they decide that what they will do is they will dig some canals in this island with the property that these canals. So the inhabitants of this island are not being happy with this starting setup. They dig this canal that has the property that, as I was saying, it doesn't disconnect the island, but now from every point in the island, if you want to reach the sea, this white set, there is just some small distance that you need to travel. And then you do a similar thing with the other two lakes in the island so that these lakes, you build some canals around so that the water from these lakes reaches every point of the island, of the black set, within a smaller distance. And they do the same for the third one. And then what happens is that the next year, these inhabitants of the island, they get a bit more demanding. So they modify again each of the canals so that the water reaches every point of the island within a shorter distance. And they keep doing this again and again. And at the end, what happens is that you can see that the black set is getting thinner and thinner. And at the end of this construction, the island disappears and you just have the black set becomes the common boundary of all the complementary components. So the black set that used to be the island becomes the boundary of each of the two lakes and this white sea here. So you see, you end up with a set that looks a bit like this. So here you're seeing there are four complementary domains. So there is the red, the blue, and the green that are bounded domains. And the white one is the unbounded complementary component. And all these four domains in the plane, they have the common boundary, which is this compact subset. And we call this type of set a Lakes of Wada continuum. So this is just a continuum, which has the property that it's the common boundary of at least three sets, three domains in the plane. The possibility for this continuum to exist was proved by Brouwer in 1910. And you can see here in the right a figure of, that is from the original paper of Brouwer, of how this Lakes of Wada looked. Actually, as a curiosity, this figure was the first picture in color that appeared in the Mathematische Annalen. So also, I should mention that this works not only for a finite number of lakes, but throughout the construction at each step, you can add a new lake so that at the end, you have that this Lakes of Wada continuum is the boundary of infinitely many domains in the plane. Also, okay. So this Lakes of Wada construction appeared in a paper by Yoneyama, 1917, who described this iterative process.
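In symbols, the definition just given can be stated as follows; this is a standard formulation rather than a verbatim quote from the talk. A Lakes of Wada continuum is a compact connected set K \subset \mathbb{C} for which there exist at least three pairwise disjoint domains U_1, U_2, U_3, \dots in the plane with
\[
\partial U_1 = \partial U_2 = \partial U_3 = \cdots = K .
\]
Yoneyama's 1917 paper describes exactly such an iterative construction.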
And these are the original figures from this paper where you can see that it started with an island and the lake and then drew this sketch of a canal that goes around. Although by our definition, this case would not be a Lakes of Wada continuum because there is only like the sea and the lake. We require to have at least three complementary components of the island, so you would have to have at least one other lake. But this is the idea. In this construction, in the paper of Yoneyama, he quoted that this was communicated to him by his advisor, who was this person called Mr. Wada, who was a mathematician from Kyoto University. And if you try to search online some information about him, actually, there is not much known about him. But we were happy to hear from Yutaka Ishii, who told us that he did some research about this and he found in the library of Kyoto University, he found this picture of Wada. And you can see the figure in this paper here. So it's this person. OK. So these Lakes of Wada, they look a bit strange the first time you see them. But actually, they appear naturally in the theory of dynamical systems. So there are several places where they show up. So for example, these pictures that I showed before, they come from a diffeomorphism of the torus. So this is actually the projection to the plane of the set that lives in the torus. And I should also mention that Hubbard and Oberste-Vorth obtained also that these Lakes of Wada appear in the Hénon maps of R2. But all these are real dynamical systems. And until now, there was no appearance of these sets in complex dynamics. I think it was also a question by Walter Bergweiler, asked in several lectures, I think, whether these Lakes of Wada appear as a dynamically natural subset of the Julia set for some functions, such as the boundaries of Fatou components, et cetera. And also, there is this question by Fatou from 1920, where he asked, this is in the context of rational functions. He asked, you have a rational function with at least more than two Fatou components, because otherwise you can just have z squared and the circle. If you have a rational function with more than two Fatou components, then obviously this function has infinitely many Fatou components. And is it possible to have two of these Fatou components share the same boundary? So this is still an open question for rational functions. But then six years later, Fatou introduced and initiated this study of transcendental entire functions. And what we actually do is we answer this question in the context of transcendental entire functions. So our result says that there exists a transcendental entire function with a bounded Fatou component, whose boundary is a Lakes of Wada continuum. So it means that the boundary of this Fatou component is the boundary of at least three domains. So these Fatou components, in our case, are wandering domains. And we can choose this Lakes of Wada continuum to be any Lakes of Wada continuum for any number of components. So we can obtain transcendental entire functions, which have infinitely many wandering domains, all of them sharing the same boundary. That's the result. So there are also a number of related questions which concern invariant Fatou components. So as I said, in our case, these are wandering domains. The question for invariant Fatou components is much, much harder. But just because I was mentioning this problem, I thought I should mention these things.
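In symbols, the main result announced above reads roughly as follows (a paraphrase of the statement), before turning to the related questions about invariant Fatou components. There exist a transcendental entire function f and wandering domains U_1, U_2, U_3, \dots of f (finitely or infinitely many, as one wishes) such that
\[
\partial U_1 = \partial U_2 = \partial U_3 = \cdots ,
\]
and this common boundary is a Lakes of Wada continuum, which may moreover be prescribed in advance.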
So for example, if U is a completely invariant Fatou component, then it follows that the boundary of U equals the Julia set. And Makienko's conjecture asks whether the opposite is true. And the connection with the Lakes of Wada is that, in fact, any counterexample to Makienko's conjecture would need to be a Lakes of Wada continuum. And then I also wanted to mention that recently, Dudko and Lyubich have talked about how they are proving that the boundary of a Siegel disc of a quadratic polynomial is a Jordan curve, which means that for quadratic polynomials, if this result goes through, then it would mean that there are no Lakes of Wada boundaries for quadratic polynomials. And this suggests that perhaps this cannot happen at all for polynomials, right? But the claim, I think, their claim is for quadratic polynomials only. So we pose this question in the paper, which says that perhaps if f is an entire function and U is a bounded invariant Fatou component, then perhaps the boundary of U is a simple closed curve. So this would mean that these Lakes of Wada boundaries would be more or less associated to wandering domains. Okay. So I think Anna also did a nice introduction to wandering domains in the morning. I probably don't need to do too much. So wandering domains come in two flavors. There are the multiply connected wandering domains and simply connected wandering domains. So for the purposes of this talk, we are interested in simply connected wandering domains. I just put two cases here because they are very different in terms of the dynamics and in terms of their geometry. If you have a multiply connected wandering domain, then you know that it's contained in the fast escaping set, and when you iterate, the iterates of it, they become some sets that contain very large annuli, and the dynamics are very well understood, while in the case of simply connected wandering domains, they are much more flexible. I think in the sense like there are oscillating cases and also their geometry can be very different. This is part of what we do in this talk. We study what are the shapes and the boundaries of these. Okay. So I want to mention also that our work is very much inspired by this recent paper of Luca Boc Thaler, which appeared in January this year, I guess, which we read and we really liked it. It has very nice ideas. So in this paper, he shows that, if you have a bounded simply connected domain, the question is when can a bounded simply connected domain be a wandering domain of a transcendental entire function? So he gave two conditions. One is that U needs to be regular, which means that the interior of the closure of U needs to be equal to U. So here in the left, you see a domain that is not regular because when you take the closure, it disappears, and then it's not equal to U again when you take the interior. And the second condition he gave is that the complement of the closure of U is connected. So for example, this thing in the right would not satisfy this condition. But if these two things are satisfied, then he showed that there is a transcendental entire function for which U is a wandering domain. Then the first condition, being regular, he showed that this is necessary. And in the paper, he asked whether the second one, the fact that the complement of the closure of U is connected, whether this is necessary or not.
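In symbols, the two conditions discussed here can be written as follows, with U the bounded simply connected domain (this is a paraphrase of the conditions as described in the talk):
\[
\text{(i)}\quad U = \operatorname{int}\bigl(\overline{U}\bigr), \qquad \text{(ii)}\quad \mathbb{C}\setminus\overline{U}\ \text{is connected}.
\]
Under (i) and (ii) there is a transcendental entire function for which U is a wandering domain, and (i) is always necessary; the open question is about (ii).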
And our theorem that I just showed answers this question in the negative, because in our case, all these other bounded complementary components make the complement of the closure disconnected. So our theorem follows from this more technical result where, when you start with a full compact set, then you can obtain a transcendental entire function for which the iterates tend to infinity on this set, and the boundary of the set is in the Julia set, and components of the interior of this full compact set, they become wandering domains, escaping wandering domains, although you can make them also oscillating. So roughly we have a different point of view, in the sense that Luca was looking at the open set with these two previous conditions, and we look at a full continuum, and then the wandering domains are the interior components of this continuum. Okay, and then it's still an open question whether, so our examples satisfy this for the fill of the closure. So it's still not known. So if U is a simply connected Fatou component and you take the fill of the closure of U, it's still not known if the boundary of U is always equal to the boundary of K. This happens in our Lakes of Wada examples, but we don't know if this is true in general. So this is the refinement of Luca's question applied after knowing our results. Okay, so one nice corollary that we can have is that if you start with the filled-in Julia set of a quadratic polynomial and you apply our theorem, then you obtain that there is a transcendental entire function for which exactly this filled-in Julia set satisfies that the boundary is in the Julia set and each of the components of the interior are wandering domains that escape to infinity. It's nice that we can have exactly this shape as a subset of the Julia set. Okay, also one other result that we have is that we can use this construction result to obtain new counterexamples to the strong Eremenko conjecture. So let me mention some of these things before. The original counterexample to the strong Eremenko conjecture was given in the RRRS paper, and then there were subsequent improvements to this example, one by Bishop in class S, then Lasse made another one in the arc-like paper. Recently there is this one of Tanya and Lasse, and I think Andrew Brown is also looking at constructing new counterexamples to the strong Eremenko conjecture. So we obtained a counterexample that looks like this. So what we do is like we take the filled-in Julia set of a quadratic polynomial and then we add a spiral towards it, around this filled-in Julia set, and then we apply our result to the continuum which is the union of these two things, the spiral and this filled-in Julia set. So we obtain the function for which the interior components become wandering domains in the escaping set, and this spiral and the boundary of this is in the Julia set, and everything escapes to infinity. So this is a counterexample to the strong Eremenko conjecture because if you take a point from one of the interior components, then this point cannot be joined to infinity by a curve of points that is escaping, because if you had such a curve then it would need to cross here, and you would obtain like a domain that is bounded by a curve of points in the escaping set, and there is a point in the Julia set, and basically you would need that the escaping set needs to be a spider's web.
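In symbols, the construction result described above reads roughly as follows; this is a paraphrase, and the precise hypotheses are those of the paper. For every full compact set K \subset \mathbb{C} (compact, with connected complement) there is a transcendental entire function f with
\[
f^{n} \to \infty \ \text{uniformly on } K, \qquad \partial K \subset J(f),
\]
such that every component of the interior of K is a wandering domain of f; and, as mentioned in the talk, the escaping behaviour can also be made oscillating.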
We can construct this counterexample function in such a way that it omits certain points. So there is like an asymptotic curve where the function is bounded. So this prevents the escaping set of the function from being a spider's web. So this is a completely new example; the earlier examples are a bit more similar to each other in the construction. This is just like a simple way to obtain a counterexample to the strong Eremenko conjecture. Just to finish, I wanted to give some ideas about the proof of this result. So it's very similar. So the ideas follow from Luca's construction; there is just a small change I want to point out. Luca used a more refined version of Runge's theorem, but it turns out that it was not necessary; we can do it with the basic Runge's theorem. So given this full compact set, what we do is we first consider a nested sequence of compact sets that approximate this set, such that the intersection of these compact sets is our set K, and then in each of the levels we put a point. So we obtain the sequence, we use a sequence pj of points such that each level has one point, and the sequence is chosen with the property that their accumulation set is the boundary of the set K. So even though you just put one point at every level, you can still make the accumulation set be the full boundary of K. And then what happens is that we have some results like this, so these points that accumulate on K. So we obtain a sequence of functions so that for the nth function in this sequence, the set at the nth level moves n steps forward. So let me try to draw. Can people see the board? You know if people online can see this board? Yes, because I think they can. Great. It's a little bit dark, but also I should point out that if the slides are still shared then the board won't be recorded; for the recording it might be best to stop sharing the slides, and then the board will also be recorded. Okay. I'm not sure how do I do that. Maybe let me try to explain by words. Maybe it will be easy. So you should imagine a sequence of disks of radius one that are centered along the real line. Okay. And then, so these are the disks Dj here, right, and then the nth level of this nested sequence of compact sets, it moves n steps, jumping from one disk to the next every time. Okay. And the point that is outside of it, it also follows it, but then jumps to a basin of attraction. So there is a basin of attraction somewhere on the negative real line, and this sequence of points pj that are accumulating on the set, they follow the set for n steps and then they jump to this basin of attraction. So the compact set K is approximated by points that are eventually mapped to this basin of attraction, and the closer the point is to the compact set, the longer it spends following the compact set throughout this sequence of disks and eventually maps to the basin. Okay. And this is why, at the end of the construction, for the function that you get at the end, only the points that were in the compact set move always to the right, and the other points eventually map to the basin of attraction that is on the negative real line. And this is basically Luca's idea, which we just do a little bit differently, but yeah. Yeah. Perhaps I'll stop here. Thank you. Are there questions for David? I have a stupid question. So it seems to me that there are many examples when you take a nasty continuum and then you say, well, it appears naturally in transcendental dynamics.
Is there a kind of a general theorem that says that you can essentially construct, for a given continuum, however nasty it is, you can construct a function using some Runge theorem, or? Well, I don't know in general. So Lasse has this result about which arc-like continua would appear, or maybe there are kind of negative results, when you say that this particular continuum is never part of one, except for examples when the Julia set is the whole plane. Well, what our result says is that whenever you have any full compact set, then you can obtain it as a subset of the Julia set of a function, and the dynamics there is that it escapes. Which means that the answer is yes. Yes. Provided it's a full compact set. So for example, we cannot, we still don't know about this type of things, like in this situation here we don't know whether this gray set can be a wandering domain or not. Any other questions? I just have one question. So you have said, you said there's a, you have a refinement of my question on the. Yes. This one. Yes. So what's your opinion? I was wrong with my opinion. So I thought that the answer to my question would be positive in some sense, but. So. Do you also have a wrong opinion? Yes, so I've been thinking about this question actually. But I don't have. I think I'm scared to say. Yeah, whether this can happen or not. I should hope. My hope is that it doesn't, actually, that it cannot happen, right, because then it would be that between what you did and what we did we cover all the cases. Which would be nice. But I cannot really prove this at the moment, but I'm working on it. I don't know, maybe we can talk about it. So, more questions online or here? We have questions online. Okay, then let's thank David again. Thank you. Thank you.
We construct a transcendental entire function for which infinitely many Fatou components share the same boundary. This solves the long-standing open problem whether Lakes of Wada continua can arise in complex dynamics, and answers the analogue of a question of Fatou from 1920 concerning Fatou components of rational functions. Our theorem also provides the first example of an entire function having a simply connected Fatou component whose closure has a disconnected complement, answering a recent question of Boc Thaler. Using the same techniques, we give new counterexamples to a conjecture of Eremenko concerning curves in the escaping set of an entire function. This is joint work with Lasse Rempe and James Waterman.
10.5446/57321 (DOI)
So, thank you very much for inviting me. I'm thrilled to be at a conference for the first time in almost two years, since actually the end of March of 2020. So, I'll talk today about, basically it's a project, more than, I'll formulate several theorems, but it's more like a project, an attempt to study Ahlfors-regular conformal dimension using group theory. So, how could that be achieved? So, let me start from the setting. So, the setting is a covering map, where, so this is a covering map, where M is a compact metrizable space, or metric space, though the metric is not very important here. Yeah, it will be. And such that it's locally expanding, in the sense that there exist constants epsilon greater than zero and L greater than one, such that the distance from f of x to f of y is greater than L times the distance from x to y for all x, y in M such that the distance between them is less than epsilon. So, an example of this would be a hyperbolic complex rational map restricted to the Julia set. You will have then this expansion condition for the restriction of the corresponding metric. And many other examples, say a class of examples where M is a manifold, are precisely when M is an infranilmanifold, so when it's a quotient of a nilpotent Lie group by a discrete group of affine transformations, and f is induced by an expanding automorphism of the Lie group. So, now I want to associate an algebraic object with this situation. And for that I will assume that M is path connected, though it's not really needed. It is just easier notationally, but everything that I say will be true for any expanding covering map. So, if you have a point in M, two points in M, t1, t2, and a path L from t1 to t2, then you can lift this path using the map f. And so, t1 will have some pre-images, t2 will have some pre-images. So, if you take this path L, you can lift it and you will get a bijection between the pre-images of t1 and the pre-images of t2. Then these pre-images will have pre-images. You can lift the lifts and you'll get a bijection between these pre-images, and so on. If you take the union of all of this, you will get a bijection between the backward iterates of t1 and the backward iterates of t2 under f. So, this is arranged naturally into a tree, which I will denote T t1. These also form a tree, T t2, and we get an isomorphism of these trees, which I will denote SL. And you can look at the boundary of these trees. So, the boundaries are Cantor sets of all paths, infinite paths going to infinity, and this isomorphism of trees will define a homeomorphism of the boundaries of the trees. And we obviously have actually an action of the fundamental groupoid on these boundaries of the trees. So, if you take one path and then continue it, the corresponding, the isomorphism induced by concatenation of the paths is the same as the composition of the isomorphisms induced by each of the paths. In particular, if we fix a point, if we take one point t, then we have a semi-group of partial isomorphisms of sub-trees of this tree defined by paths connecting different vertices of this tree. So, we take two vertices of this tree. So, we have the tree, let's take the case when the degree is 2. So, we have a binary tree, a rooted tree, and then for any path inside M from one point to another, when we lift these paths, we'll get an isomorphism from this tree to this tree. So, we get, on the boundary of the tree, we'll get a homeomorphism between a clopen subset of that boundary and another clopen subset of that boundary.
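In symbols, the setting and the tree of preimages can be summarized as follows, using standard notation from the self-similar group literature to fill in what was described on the board. The expansion condition is
\[
d\bigl(f(x), f(y)\bigr) \ge L\, d(x, y) \quad \text{whenever } d(x, y) < \varepsilon, \ L > 1,
\]
and for a basepoint t \in M the tree of preimages T_t has vertex set \bigsqcup_{n \ge 0} f^{-n}(t), with an edge joining each z to f(z). A path \ell from t_1 to t_2 lifts, level by level, to an isomorphism S_\ell \colon T_{t_1} \to T_{t_2}, and hence to a homeomorphism of the boundaries of these trees.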
And the set of such homomorphisms will be what is called an inverse semi-group. So, it's not a group because homomorphism is not defined on the whole campus set, the whole boundary of the tree, but it's a set of homomorphisms between clope and sets and this set of homomorphisms is closed undertaking inverse and composition. The composition is a composition of fractal maps, so sometimes you can get empty map and so on, but it's something that algebra is called an inverse semi-group. And this inverse semi-group is finely generated. So, I didn't claim that m is locally simply connected, but it does matter. Well, let's pretend that it is for a minute, then the fundamental group will be finely generated. So, this semi-group will be generated by the generating set of the fundamental group and any collection of paths connecting the base point to the immediate parameters. If you take these paths, this inverse semi-group will be generated by these paths, by the corresponding of the morphisms of the tree. So, there will be some of the morphisms of the tree and then partial of the morphisms which map the whole tree to this sub-tree and whole tree to this sub-tree. The semi-group will be generated by these. Even if the space is not locally simply connected, the set of automorphisms that you get from the loops will be still finely generated because of the expansion. The fact is that any loop which is small, efficiently small, when lifted, will be definitely a loop even smaller and so on. So, all small loops act trivially on this tree. So, even if the space is not locally simply connected, the group is still finely generated. The group defined by loops, the sub-group defined by loops in this semi-group will be still finely generated. So, the group defined by the loops is called the iterated monodromy group of the map F. But in addition to iterated monodromy group, we have these maps which map the whole tree to sub-trees. Now, this group, the iterated monodromy group, so we have the group, the iterated monodromy group, the sub-group of the semi-group generated by the loops. And as I said, there are additional generators which in the case of degree 2, there are two additional generators which map the whole tree to the sub-tree growing from the first level. So, let's denote the set of those additional generators by x. And then for every element of the iterated monodromy group, so for every loop, and for every element of this additional generating set, so for every path connecting the base point to a pre-image, we'll have the following relation that if you apply Sx and then, I compose them this way and sorry, if you apply Sg and then Sx, you can rewrite it as applying, sorry, I confuse the sides again. If you apply Sx and then Sg, that will be the same as applying sum Sy and then Sh. So, for all g and x, there will exist h and g and y and x, so I get the following relation holds in our semi-group, namely what are these? So, in the semi-group that I just mentioned, the semi-group of all these homomorphs, so the question was if where this relation holds in the semi-group of those partial isomorphism between sub-trees defined by paths. Let me explain it. Since I see inverse semi-group, I can rewrite it in a different way. So, if you take, so this is our path g, this is our path x and so what are these h and y? 
So, this is a path in the base point, this is a pre-image of the base point, so we can lift this path g in a unique way and this by the map f, by the covering, so the map fx like this, it maps both these points to the base point. So, this is a lift and this is one of the pre-images, immediate pre-images, so there is an element of x connecting it to the base point and that element is our y and our h will be this triangle. So, you go along x, then along this lift, less than or the lift by gx, so the relation will be that, yes, so h will be the concatenation of the paths x, gx and then y. Why this is so? Because just from the definitions, if you take these three paths, the corresponding automorphism, the corresponding element of the semi-group, so if you take this path, the corresponding element of the semi-group will be sx, sgx and then sy inverse and by definition of gx, by definition of g, g is the sg, sg is the automorphism of the tree defined by g, so it's defined by lifts of g everywhere, so restriction of this sg to the subtree starting in this vertex is exactly, so if you restrict it to the subtree starting in this vertex, let's call it tx, then this is exactly sgx, because by definition, sg is defined by taking lifts of the path g everywhere, so it's the isomorphism between the trees defined by these lifts of sg, one of these lifts is gx, so if you start lifting gx, you get the same isomorphism, but you started only from this vertex, you didn't start from the root of the tree, so you have your binary tree describing the action of f and sg is the automorphism of this tree induced by lift, defined by lifting the path g, but now if you restrict it to this subtree, it will be, this map is defined by lifting this frame image of g to this subtree, and so restriction of sg to this subtree is exactly sgx, and you can check that this is exactly what it should be, yes, essentially this is what I wrote here, so you can hear what we do, we map the whole tree to the subtree and then apply sg, but applying sg is the same in this case would be as applying restriction of sg to this vertex, so it's applying gx, and then we come back using y, and so you can check, and you can check that you have this relation. 
Okay, so what we get here is, so let me give you an example of such computation, so for example if you take z square minus one, so the basilica, you will have two generators of the iterated mod dummy group A and B, I just want to show you how these relations here, and then there will be two elements of the set x, so if you have a base point it will have two pre-images, so let's take the base point, so these elements A and B will be defined by loops going around two post-critical points, zero and minus one, and if you take the base point it will have two pre-images, let's say take the base point, the fixed point between zero and one, so it will have two pre-images, and let's take one connecting path, the trivial one, denoted x0, and another one, the non-trivial one, denoted x1, and then one can show that, so if you take A to be the loop around critical value and B a loop around zero, in that semi-group you will get that SA, if you do Sx0 and then xA it will be Sx1 and identity, because if you lift A you will get two curves like this, and so if you do X0 then you take the lift of A will be this path and then you come back in the triangle using the connecting path you will get a trivial path, so it will give you this relation and then you can check that this relation and then for B you will get these relations. It's annoying to write these S's, so we'll just write it this way, Ax0 is x1 Ax1 is x0B, Bx0 is x0, Bx1 is x1A, and this is something that was known in group theory as a self-similar group for a biset, mainly if you take any product of these X's, X1, X2, XIN and multiply by A, you can use these formulas to rewrite it, so you have A times x0, x1, you can rewrite it now as xj1 times some group element, time another group element, well a generator A or B or identity, then again you can rewrite this using the same formulas, again you can switch letter and group element, this x and a generator of the group and rewrite it again as some other letter and some other element of the group. Could I ask a quick question, can you hear me? Yeah, yeah. Just to clarify, so the x0, so let's say your base point is the alpha fixed point, then your x0. In this case is the trivial path, yeah. No, no, just the x1 and the x0 to begin with, so you're connecting one, you're connecting your base point to the preimage of the alpha fixed point, one's on the top and one's on the bottom of something, that's your x0 and your x1. So the whole semi-group is generated by two sets, by union of two sets, one set is the generating set of the fundamental group for example, and the other set is the set of connecting paths from the base point to immediate pre-images of the base point. Yes, did I answer the question? Yes, thank you, sorry, I just realized my video was off, sorry. Yeah, we can take the fixed point between 0 and minus 1 of Basilica, which is what I forgot. But this square minus 1 will have two pre-images, alpha and beta, and that's the better one. This is alpha 1, okay, not the landing point of ray 0, but the other one of landing point. No. Yeah, so I take this, this is called alpha, the one which is just two rays, yeah, so you take alpha and minus alpha, sorry, I thought that you say that minus alpha is a fixed point, no, no, no, no, correct. Alpha and minus alpha, yeah. So X0 is the trivial path at alpha because alpha is the ray measure itself, so we can take the trivial path as the connector, connecting path, and then you take X1 is this path in the upper half plane from alpha to minus alpha. Did I? 
In these formulas? Okay, I'm sorry for that. Yeah, I'll change this. Thank you. Okay, so we get these, these can be seen as some kind of rewriting rules that allow us to write any product, an element of the fundamental group times a product of these Xs, two product of these Xs times an element of the fundamental group. So and this is exactly what is a similar group. Now, this inverse semi-group, which I mentioned at the beginning, it's a semi-group of partial homomorphisms of the boundary of the tree, so it's a semi-group of homomorphisms between clop and sets of the boundary. In fact, it's more natural, so this semi-group is nice, it's given by finite number of kind of defining relations which I wrote above there. But okay, sorry. So let me repeat once more, one more time what I wrote at the end there. We have self-similar group, which means that, so there is the iterated-mordemy group, the subgroup of the semi-group, which is defined by elements, consisting of elements defined by loops. Then there is this set of paths connecting the base point to the, to the pre-images. And in this semi-group, we have relations of the form that if you take any element G in the, in this subgroup, and you take any element V in the semi-group generated by this set of connecting paths, so all products of these elements xi, then there will exist a unique element g of, unique element h of the group and a unique element u of this semi-group generated by x such that g times V is u times h. So you can rewrite, and actually it's easy to write it explicitly what it is. So this V is a product of, of these isomorphisms of the whole tree with a sub-tree of the first level. And so when you compose two of such, you will get an isomorphism of the whole tree with a sub-tree of the second level, and so on, if you compose n such, you will get an isomorphism of the whole tree with a sub-tree of the nth level. So it will be defined by a path connecting the base point to that point of the nth level. So there will be some path l1 connecting the base point to that point of the nth level, and then g will be a loop. So you can lift this loop to that point of the nth level, so to nth preimage by nth iterative f. So that will be your lift of g. And then there will be one path corresponding to V, and there will be a unique path corresponding, unique path such that the corresponding isomorphism of trees will map the whole tree to the sub-tree of nth level. So that's your l2, and that l2 will correspond to u, and then h will correspond to concatenation of these paths. In particular, you see that because we assume that our map is expanding, this lift will become shorter and shorter. And this can be used to prove that eventually these elements h, these triangles, these concatenations, will belong to one fixed finite subset of our iterated modeming group. So there will exist a finite subset of the iterated modeming group of the subgroup of our semi-group generated by loops such that when you do this rewriting, so when you're lifting paths and completing these paths by the connecting paths, you will reach the set n. So in the group element, there will exist n such that for every V in x star of length n or more, let me denote this element h by g with index V. This g with index V, so this element which you get over here belongs to this finite set. And such groups are called contracting self-similar groups, such iterated modeming groups, contracting self-similar groups. 
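In symbols, the recursion written above for z squared minus one and the contraction property can be stated as follows; this uses one common convention, and conventions differ between sources. The rules a x_0 = x_1, a x_1 = x_0 b, b x_0 = x_0, b x_1 = x_1 a amount to the wreath recursion
\[
a = \sigma\,(1, b), \qquad b = (1, a),
\]
where \sigma swaps the two first-level subtrees. The group is contracting if there is a finite set \mathcal{N} \subset G (the nucleus) such that for every g \in G there is an n with
\[
g|_{v} \in \mathcal{N} \quad \text{for all words } v \in X^{*} \text{ with } |v| \ge n .
\]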
And this is an algebraic counter part of expanding covering maps, that conversely every contracting self-similar group, so every group given by relations like above there, which satisfies this condition, appears in this construction. So it is iterated modeming group of some expanding map with one caveat, you have to consider orbit spaces, like you have to include subhypropolic cases here. And that's a little bit technical in this case. But modulo this technicality, you have a complete like a bijection between expanding covering maps, modulo topological conjugacy, and contracting self-similar groups, modulo some type of algebraic equivalence, which is also a little bit non-trivial, but very can be easily formulated. So we have an algebraic theory in this way, we get an algebraic theory of expanding covering maps. In particular, this implies that there are only countably many of them, up to topological conjugacy. That's one of the proofs that you can use. So how you can reconstruct from this semi-group this expanding map. So the idea is very similar to the idea of hyperbolic, gromov hyperbolic groups and boundaries of gromov hyperbolic groups. In a sense, this semi-group should be thought of as a gromov hyperbolic group, even though it's not hyperbolic, and not a group. I mean, it is not a group, but it is hyperbolic. So let me explain how it is hyperbolic. So if you have a group, you draw the kilograph, and it's negatively curved, and then the boundary is the space that you're interested in. Here, the kilograph of the semi-group is not what you should think about. What you should think about is the kilograph of the associated groupoid of germs. So you have this semi-group acts on the boundary of the tree. So it's a group of partial homomorphisms of the boundary of the tree, of the boundary of the pre-images. So you can look at germs of this semi-group, germs of its action on the boundary. So you take a point on the boundary. So you have your tree of pre-images, it has your boundary, it has the boundary. And then you pick some point of that boundary, just pick some backward orbit. So let's call it xi. And then take your generating set of this semi-group, and then for every element of the semi-group, you have the corresponding germ, if that element is defined on xi, you have your element as defined by some path L, it acts on some clop and set, and if that clop and set contains xi, you have the germ of the action. So you identify two semi-group elements if they act the same way around this point xi. And then you draw the calligraph. Among these germs, if you have one germ and another germ, so the germs will be vertices of your calligraph. And you connect one germ to another if you have a generator such that this germ is composition of this germ and the germ of the generator. So it's exactly the same as for calligraphs of groups. The only difference is that you have to fix a point and look at germs of this group at a point. If you have a group action, you will get every element will be defined on this point. So these germs will come from group elements and every element will have the germ at that point and it will be closely related to the calligraph of the group. In this case, there are many elements of the semi-group which are not defined on xi, you ignore them. So how will this graph look like in our case? So in our case, our generating set is split into two parts. There is the generating set coming from loops and the generating set coming from connecting base point to immediate frame edges. 
And so the edges of the calligraph will also split into two parts. There will be edges which correspond to loops and edges corresponding to the connecting paths and it's natural to draw edges corresponding to groups kind of horizontally, edges corresponding to connecting paths vertically. If you take the vertical part of the calligraph, you will get a regular tree. So for example, in degree two, you will get a tree which has one edge going down and two edges going up. So these are our x's. X0, X1, X0, X1, X0, X1 and so on. The edges corresponding to loops, so generators of the iterated modem group, they will connect these vertices horizontally. So in this tree, they will, because the loops will preserve the levels on the tree, you will get some connections horizontally. So in each level of this tree, there will be some horizontal edges. You'll get this infinite graph which is naturally graded by this subtree corresponding to the set x, these connecting paths. And edges either these connecting paths arranged into a tree or some horizontal paths connecting these vertices of the same level. And the fact is that this graph, so let me denote this graph by gamma, it depends on xi. This graph is grommar hyperbolic. So that's what I meant when I said that this is a hyperbolic semi-group in a way. But it's not just this, graph has a special point on the boundary corresponding to going down in this tree. So the boundary has a favorite point. Let me call it omega xi. And then we have the grommar boundary and it's natural to remove that point. Because then the boundary will correspond to going up in this tree. And it will be a non-compact space kind of completing these three up stairs. So it will be like a half plane model of the hyperbolic plane. So there is a special point and then there is the real kind of real line. And so what is that space? What is that boundary? So remember xi is an inverse or is a point of the boundary. So it's an inverse orbit of t. You choose each time one pre-image. So this boundary will be a leaf of the natural extension of the covering. So if you have our covering map f, we can look at the inverse image, the projective limit, the inverse limit of iterations of f. And then it is naturally a fiber bundle where fibers are the boundaries of these trees that I was talking about. And base is m. And so in that fiber bundle horizontally you get leaves. And the case when m is path connected, those leaves are exactly the path connected components of the inverse limit. And xi corresponds to one point of the inverse limit. And then this boundary is exactly the connected component, the component corresponding to that point. And more precisely, let's m hat will be the inverse limit of these spaces. Then f induces a homomorphism of this inverse limit. It's called the natural extension. And this natural extension is a hyperbolic dynamical system. So there are stable and unstable equivalence classes. And this boundary of this chromo-hyprobolic graph is the unstable equivalence class of the point xi. So it's, yeah. OK. So now, yeah. Yes. Yes. Yes. Yes. So the cantor sets, if you take the basilica, natural extension of basilica, the cantor sets are the stable equivalence classes. So if you take two points in that cantor set and iterate your natural extension, they will converge. The distances will go to zero. Now if you take the inverse map, then the set of sets, you have the unstable equivalence relation. So it's the pairs of points where distances go to zero if you take the inverse map. 
So these equivalence classes, the unstable equivalence classes, are exactly the boundaries of these graphs. And they are exactly the path-connected components of that bundle. The path-connected components with a natural inductive limit topology. So when you unwrap it. OK. So let me go, OK. Sorry. So this is a particular case of a more general setting of so-called hyperbolic group points. So that includes many more examples, not only coming from expanding maps. And in all these cases, the calligraphs have this, the calligraph of these hyperbolic group points, they have this special point on the boundary. And so they are also stratified like this by horocycles corresponding to that special point. So these levels of the trees are the horoscales corresponding to the, sorry, horospheres corresponding to this point on the boundary. And we have the levels. Levels of the levels can be described using the Buseman cos-cycle. The Buseman cos-cycle, in this case, it just measures the difference between the levels of this graph. So we have, if you take two germs, then you can close this diagram. So you get another germ closing the diagram. And the value of this cos-cycle on this germ, let this germ be gamma, the Buseman cos-cycle beta of gamma is just the difference of levels. The level of the end of the germ minus the level, well, it's positive if you go up and take the difference between these levels. So that's the classical Buseman cos-cycle for a hyperbolic graph associated with a point on the boundary. But that's not the only one Buseman cos-cycle, because if you forget where this graph came from, you look only at quasi-ozometry class of this graph, you just remember this graph up to quasi-ozometry, and you remember this special point. There will be many Buseman cos-cycles associated with each realization of this graph in the quasi-ozometry class. So let me give you a general definition. And then I will, in the remaining 10 minutes, talk about how it is related to the dimension. So let G be a topological groupoid of germs of some semi-group. So in our case, we have a semi-group. It consists of homomorphisms. You take germs of these homomorphisms, and the topology is natural. If you have some partial homomorphism, then germs of these partial homomorphisms, the set of germs of these partial homomorphisms, is homomorphic to the domain. So you have your homomorphism, it's a bunch of germs, and this bunch of germs has the same topology as by the domain of rain. So you get, if you have a semi-group of local homomorphisms, that defines a topological groupoid, so the set of these germs. And then a quasi-co-cycle on G is a map better from G to R, such that there exists epsilon or not epsilon, eta greater than 0, such that first, this is almost continuous in the sense that for every germ gamma, there exists a neighborhood U of gamma such that the values of beta vary at most by eta for all elements of this neighborhood. And the co-cycle property is that for all composable germs, gamma 1, gamma 2, you have the product gamma, gamma 2 of gamma 1, beta of gamma 2, gamma 1 minus beta of gamma 2 plus beta of gamma 1 is less than eta. So it's almost additive, and it's almost continuous. So an example is, well, where eta is 0, actually, is that co-cycle coming from those graphs. So for the semi-group, which I was talking about today, you look how much you change the level, from which level you go to which level, that's an integer. 
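In symbols, the definition just given is, as usually stated: a quasi-cocycle on the groupoid of germs \mathfrak{G} is a map \beta \colon \mathfrak{G} \to \mathbb{R} for which there is \eta > 0 such that every germ \gamma has a neighbourhood U with |\beta(\gamma') - \beta(\gamma)| \le \eta for all \gamma' \in U, and
\[
\bigl|\beta(\gamma_2 \gamma_1) - \beta(\gamma_2) - \beta(\gamma_1)\bigr| \le \eta
\]
for all composable germs \gamma_1, \gamma_2. The level-difference example just mentioned satisfies this with \eta = 0, so it is an honest continuous cocycle.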
And it's continuous, it's locally constant, actually, and if you go up or down, you add the differences. So that's an honest continuous co-cycle. Here's another example, which is kind of dual. Yeah? Sorry? Yeah, yeah, I know. So here's another example, you take F to be the groupoid of germs defined, so generated by F. F is a local homomorphism. So its germs are invertible. So you can generate a semi-group of local homomorphisms by F. That will give you a groupoid of germs, and there are two natural choices. So every element of this group point looks like that, like this. You take a point, you apply F some number of times, and then you apply branches of F inverse some number of times. So if you compose F and then F inverse, that will be a germ in this groupoid, and all elements of this groupoid are like that. So you go N times here, and say M times here, so you can define the value of the germ of the Bussmann co-cycle as say M minus N, for example. And this will be a co-cycle, if I did it right order. If you draw the kilograms of the kilograms, same way as we did over there, they will be exactly trees without any horizontal edges. And this Bussmann co-cycle is just the difference of the levels. But suppose that M, F is an expanding map on, say, for example, it is a rational function. Then another choice would be to take logarithm, negative logarithm of the absolute value of the derivative at your point, at your base point. So this is point Z, at point Z. And this is also a co-cycle. And it is also a Bussmann co-cycle for the same graph, but for a different realization, for different lengths of edges. So in the same quasi-zometry class, you have these two co-cycles. Okay, unfortunately, I don't have time to tell what I wanted to say, all what I wanted to say. I'm going to finish now by formulating some statements, and that will be it. So the point is that these two pictures are dual to each other. This groupoid and this groupoid, in the sense that, okay, let me, so if you have a Bussmann quasi-co-cycle, so we have two groupoys. One groupoid is what I was talking, what I defined at the beginning of the lecture, the groupoid defined by path. Another groupoid is groupoid of germs generated by the expanding maps. And if you have better Bussmann co-cycle on G, then it can be in a unique way up to bounded additive constant associated with a Bussmann co-cycle on the natural extension. So it induces a Bussmann co-cycle, sorry, eco-cycle on the groupoid generated by the natural extension. And then this co-cycle induces a Bussmann co-cycle on the other groupoid. So if you start from a Bussmann co-cycle on a Kili graph or one groupoid, through the natural extension you get a dual Bussmann co-cycle on the other groupoid. So if you start from the derivative, for example, rational function, you get a co-cycle on the groupoid for the self-similar group. So the project which I mentioned is to study Bussmann co-cycles on self-similar groups. And each Bussmann co-cycle on this self-similar group will define a measure on the Julia set, so on the gram of a hyperbolic boundary, and a class of metrics. So if you have a Bussmann co-cycle beta, then it associates the visual metric, so this visual metric exists starting from some exponent, and you have the measure, basically the Patterson-Salivan measure associated with this Bussmann co-cycle. 
So this Bussmann co-cycle will be, for the measure, the Radon-Nikodem derivative will be exponent of this Bussmann co-cycle with some coefficient, and the visual metric will be the map, the expanding map will be a similarity with some coefficient, with some derivative, and the logarithm of the derivative will be this Bussmann co-cycle with some coefficient. So you can associate with each Bussmann co-cycle its Hausdorff dimension, so that's the minimum of the Hausdorff dimensions of the visual metric, and one can associate then the smallest such Hausdorff dimension, which will be the isomorphous regular conformal dimension. And one last quick theorem is the following. So I mentioned that these self-similar iterative modem groups are contracting, and contracting means that, as I probably raised that definition already, that these elements belong to a finite set, and that can be in a different way formulated as contraction of lengths. So if you have a group element, and you take its length in the generating set, if you look at how this group element acts on the subtree, so you look at these gv for some fixed level n, you take all of them, you take all these elements, if you remember they were defined by triangles or paths, and you take the vector in of lengths of this, of all these elements, so you take lengths of all these restrictions on that level, x to the n, this is a vector in r to x to the n, actually in z, and you look how much the LP length, so you take the norm, the LP norm of this vector, and you look how much the LP norm is contracted, and you take the exponential contraction rate of this LP norm. So being contracting is equivalent for L infinity norm to be contracted, and then there will be a critical value of p for which you still have contraction, and the theorem is that this critical value is very interesting, by the way I include here also p less than 1, so theorem is that the alphos regular conformal dimension of m is greater or equal to the critical value of p for which g is LP contracting. Okay, so the project is to study these Busseman co-cycles and LP contractions also similar groups which are purely algebraic object and apply it to the corresponding expanding method. Thank you very much, let's thank the speaker. So we have time for some quick questions. I think your measures by this graph are somehow related to geometry coding, your graphs are related to geometry coding trees, I developed a theory and transport of measure from symbolic space to… Yes, so this… There's another graph, but maybe it's very similar to yours by pilgrim and… It is the same graph, I mean, the kind of… my graph is a covering graph of the graph. Another visual measure… Yeah, yeah, it is related to the work, yes. I should have mentioned that, of course. So in the case of the rational map, so we have this three-dimensional hyperbolic spaces attached to the leaves of the natural extension and I guess your heligraph or the group point is some discrete object whether it's a metric… Yes. …to this hyperbolic space. Yes, that's right, yeah. And so if you consider general situation to expanding map, do you have… well, you can… You mentioned that you can also consider one-dimensional extensions of the unstable leaves, you have a natural hyperbolic structure, the two-pitches… Yeah, but it's only discrete. 
Only… I mean, I have only graphs, I don't have any three-dimensional… So there is no sort of intrinsic good structure… Well, it is up to quasi-zometry, it's a unique structure, but it's like a general… case of general hyperbolic groups that you don't have any manifolds or things like that. So it's the only way to get it through the calligraph? Yeah. Yeah. What about the upper bound for the alpha-cellular guillotine, for what I mentioned? Do you have anything to say about this? So it's not… So this can be less than one. So in the cases when M is topologically one-dimensional. So the inequality can be strict, but the only case is when it's strict is sub-hyperbolic cases with really weird orbit spaces. Like for the Grigorychou group, this number is exactly the number which appears in the growth of the Grigorychou group. So it's really interesting when do we have a stricter inequality here? And so this is like maybe in the… actually in the expanding case, not sub-hyperbolic, but when you really have an expansion the way I defined it, maybe we have equality here, but I don't know. I had a question. Yeah. So you defined, I think you defined what is a quasi-co-cycle. Yeah. But then you are also speaking about a Boozeman co-cycle. Yes. So in these four hyperbolic group-oids, if you have a hyperbolic group-oid like these two examples, G and F, for each of these hyperbolic group-oids there in the calligraph, there is a special point on the boundary. And then you define one has to consider a special class of quasi-co-cycles which I call Boozeman quasi-co-cycles related to that point on the boundary. For example, in the case of the gromov hyperbolic groups, Boozeman co-cycle is a quasi-co-cycle always. There is also always this error. And so inside this class of quasi-co-cycles, yeah. So I give definition what is a quasi-co-cycle, but I didn't give a definition which is a Boozeman quasi-co-cycle. So it's some special type of co-cycles associated with hyperbolic group-oids. Okay, thank you. Any more questions in the room or remote? Okay, so then let's thank the speaker again.
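In symbols, the contraction coefficients and the inequality formulated at the end of the talk read roughly as follows; this is a paraphrase rather than the precise definitions. For g in the group G and a level n, consider the vector of word lengths of the restrictions of g to the vertices of level n, and let the \ell^p-contraction coefficient be the exponential rate governing the decay of its \ell^p-norm as n \to \infty; the group is \ell^p-contracting when this coefficient is less than 1. The critical exponent is
\[
p_{c}(G) = \inf\{\, p > 0 : G \text{ is } \ell^{p}\text{-contracting} \,\},
\]
and the theorem asserts
\[
\dim_{AR}(M) \ge p_{c}(G),
\]
where \dim_{AR}(M) denotes the Ahlfors-regular conformal dimension of the limit space M.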
One can associate with every finitely generated contracting self-similar group (for example, with the iterated monodromy group of a sub-hyperbolic rational function) and every positive p the associated \ell_{p}-contraction coefficient. The critical exponent of the group is the infimum of the set of values of p for which the \ell_{p}-contraction coefficient is less than 1. Another number associated with a contracting self-similar group is the Ahlfors-regular conformal dimension of its limit space. One can show that the critical exponent is not greater than the conformal dimension. However, the inequality may be strict. For example, the critical exponent is less than 1 for many groups of intermediate growth (while the corresponding conformal dimension is equal to 1). We will also discuss a related notion of the degree of complexity of an action of a group on a set.
10.5446/57322 (DOI)
Thank you very much, Anna. And thanks to the organizers for giving me the opportunity to speak here today. Works? Excellent. Okay. So yes, I'll talk about docile transcendental entire functions — just to make sure the audio is okay. Anyway, this is joint work with Lasse Rempe. As a brief outline, to begin with I'll just give some really basic definitions that I'm sure you're all aware of. Then I'll talk about local connectivity of the Julia set, specifically first local connectivity of the Julia set for polynomials and what this means, and then I'll move on to talking about local connectivity in the context of transcendental entire functions and what we have there. After that, I'll talk a little bit about two classes of transcendental entire functions: the strongly geometrically finite functions and the docile functions, which have various properties relating back to properties of polynomials. And finally, I'll give an example, specifically an example of a docile function which isn't strongly geometrically finite, which we'll find out more about later in the talk. So to begin with, we're going to start with a function from C to C, which is going to be analytic. At the beginning of the talk we're going to just focus on polynomials, and from there we'll move on to talking basically just about transcendental entire functions. As usual, we're going to denote by F to the n the nth iterate of F; what we mean by this is just that F to the n is F composed with itself n times. Then, as you all know, the plane splits into our two dynamically interesting sets: we have the Fatou set, which is where the iterates form a normal family, where the behavior is stable, and we have its complement, the Julia set. Okay. So for a polynomial we can look at the points that go off to infinity, because infinity is just a nice superattracting fixed point. So we can define it, give it a name, and say it's I of F, the set of points that iterate off to infinity, and it's actually very nice: it's a nice neighborhood of infinity, it's in the Fatou set, and importantly for us, for the rest of this talk, the boundary of this basin of infinity is the Julia set. So this is important for the rest of the talk. And so, concerning the basin of infinity, if we have two polynomials, p1 and p2 — here we have pictures of the filled-in Julia sets, I suppose, for two quadratic maps, the cauliflower and the rabbit — what Böttcher's theorem tells us is that these maps, p1 and p2, are conformally conjugate in some neighborhood of infinity. And in particular, if all the critical points stay nice and bounded, then we get, well, pictures that kind of look like this, and this conformal conjugacy actually extends to the whole basin of infinity that we have. So the natural question to ask is, well, what happens on the boundary of this basin of infinity, which in this case is our Julia set. And the answer comes from the Carathéodory–Torhorst theorem and links with local connectivity of the Julia set. So we're now going to restrict to the case where we just compare one monic polynomial p with z to the d, and then we nicely have the unit disk here, and we have some conformal isomorphism that takes us from the complement of the disk here over to the complement of the filled Julia set over here.
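As a restatement in formulas of the setup just described (the symbols \varphi, K(p), J(p) are ours): for a monic polynomial p of degree d \ge 2 all of whose critical points have bounded orbits, the filled Julia set K(p) is connected and the Böttcher coordinate gives a conformal isomorphism
\[
  \varphi \colon \mathbb{C}\setminus\overline{\mathbb{D}} \;\longrightarrow\; \mathbb{C}\setminus K(p),
  \qquad
  p\bigl(\varphi(z)\bigr) \;=\; \varphi\bigl(z^{d}\bigr),
\]
so \varphi conjugates z \mapsto z^{d} outside the closed unit disk to p on the basin of infinity, whose boundary is the Julia set J(p) = \partial K(p).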
And then the Carathéodory–Torhorst theorem tells us that this Riemann map that we have extends nicely and continuously to the boundary of our disk if and only if the boundary of this set over here that we had is locally connected. And so if, instead of an arbitrary domain U, we're looking at exactly these pictures here, then what this tells us is that the Julia set of this polynomial here is locally connected basically if and only if this Riemann map here extends continuously and nicely. So in particular, what this allows us to do is understand the dynamics of what's happening over here in terms of a quotient of what's happening over here. We're describing the dynamics of this very complicated system — this arbitrary, say quadratic, polynomial on its Julia set — in terms of a much simpler system, just the angle d-tupling map. And this is the main reason why this question of local connectivity of Julia sets is important: not so much because it's a nice topological property, but because we can understand the dynamics here in terms of this much simpler system. Now, what we're going to want to do is talk about this in the context of transcendental entire functions. In order to do this, what we need to do is describe the escaping set. The escaping set was talked about earlier by Anna and many others, and it's again, you know, I of F, the set of points which escape to infinity. And because we now have an essential singularity, the escaping set is not a neighborhood of infinity; it's no longer just a subset of the Fatou set, it can meet both the Fatou set and the Julia set. And it has several nice properties that Eremenko showed. In particular, for us, what we want is that the boundary of the escaping set is still the Julia set — this is importantly what we need for kind of the rest of the talk. As well, we have that the closure of the escaping set has no bounded components, and this leads to Eremenko's conjecture. So as a quick example of the escaping set of a transcendental entire function, we have here a quarter times the exponential function. In black here is what we're saying is the escaping set slash Julia set, and white here is the Fatou set. This picture is periodic, and the white region here is basically an attracting basin. So we want to know, well, what about local connectivity? Okay, so in general, even for polynomials, the Julia set doesn't actually need to be locally connected. So what about for transcendental entire functions? Well, the same is true: especially for transcendental entire functions, the Julia set need not be locally connected. For example, in this picture that we saw before here, the Julia set is not locally connected. Okay, so what we want to do is ask, well, what happens if the Julia set is in fact locally connected? Well, then, even though it has this property of being locally connected — unlike in the polynomial case, where being locally connected basically allowed us to describe the topological dynamics very, very accurately.
But now this is really no longer the case that if we have say the Julia set of the exponential function, it's the entire plane. And this doesn't really give us anywhere near as complete description of the topological dynamics. It doesn't give us anywhere near as much information. So what we're going to want to do is talk about, well, what can we say about describing topological dynamics of basically this map on the Julia set. And in particular, what class of functions can we look at for which we can do this? And for that, we're going to need to introduce some various, well, terms, first one being the singular values of a transcendental entire function. So we talked a little bit, others have talked a little bit about them before. So we have the asymptotic values and the critical values of that. So remember, critical values are just the points where the derivative is zero. And asymptotic values means that we have some curve on which, so you have some curve that tends to infinity. And basically on it, the function tends to some finite value. That's what we mean by asymptotic value in general, or at least a direct asymptotic value. And so this allows us to define the Aminkel-Lewitz class, this class B, which are the class of transcendental entire functions for which the singular set is bound. So we can describe it alternatively in terms of these tracks that, for instance, Chris talked about earlier, that we have, say, here, I don't know if people online can see, but we have, say, the disk on which all of our singular values are based. And we look at the complement of this. Then we'll have some nice, simply connected, unbounded domains here, which these guys are our tracks. And if this guy has some radius r, then on here we have the modulus of f of z is equal to r, and inside it's going to be greater than r. And in fact, here, what we have is a nice cover. Okay? So, and if you want, for example, the exponential function, the track for it is a nice right half, here, this guy. If you stop the screen sharing on Zoom, then it'll also record what you do on the board, I think. I can try. Maybe I shouldn't. Well, the next time, the next time I use the board, I can ask Castile to, I press red. Okay, sounds good. Okay. Perfect. Okay. Thank you, Losson. So, next, we want to talk about functions that are disjoint type. So, functions of disjoint type are functions for which, well, they're in class B, and we have some joined domain, which we can think of, say, as just this disk that we already had, for which the singular values are all nicely contained in there, and this domain nicely maps back into itself. So, now let's see if I can do this, do that. And basically, what we have is that here we have our tract, it's nicely mapping over to the complement of our disk, and then we have, if we look at what, where the tract actually is, it's nicely in this complement is what we want. And in particular, of note here is that, oh, there's no slides anymore. There we are. But you can't see them. Zoom, there's green. Okay. Anyway, so, yes. So, basically, what we wanted to do is say that this, what we can do is we can basically shrink this disk in order to basically always obtain that we get a function of disjoint type. And, well, for people here, we have, say, here's our class B, and inside of it, we have functions of disjoint type, which is living in some larger thing of functions of their transmittance entire. 
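A compact way to record the two definitions just sketched (standard formulations; the notation S(f) is ours): writing S(f) for the set of singular values of f, the class just drawn — the Eremenko–Lyubich class — is
\[
  \mathcal{B} \;=\; \{\, f \ \text{transcendental entire} \;:\; S(f) \ \text{is bounded} \,\},
\]
and f is said to be of disjoint type if f \in \mathcal{B} and there is a bounded Jordan domain D \supset S(f) with f(\overline{D}) \subset D. The "shrinking the disk" remark above corresponds to the fact that one can always pass to a rescaling f(\lambda z), with \lambda small, which is of disjoint type.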
And so, one more thing is that if we're in class B, then we call functions that are post-singularly bounded, which is where the post-singular set, which is the union of forward orbits of the singular set, is bounded. We call these this, if this set's bounded, then we call it post-singularly bounded. Thank you very much. And, in particular, for people here, this is post-singularly bounded. Contains functions of disjoint type, but is contained in class B. So, these are various subsets of class B. So, we can call functions that are trans-sintel entire functions are strongly geometrically finite now if, well, the two-set intersected with the singular set is compact, and the Julia set intersected with the post-singular set is finite. So, having that the Julia set intersected with the post-singular set is finite is, I believe, an important property for polynomials, when studying some classes of polynomials, and in particular, this is basically an analog of this. And also, what we want is that the Julia set contains no asymptotic values of F, and also this requirement on the local degree of points in the Julia set. So, what's a particular example that you should think of when you look at these functions? Well, the sine function. The sine function nicely just has our two singular values, minus one and one, and yes, it'll satisfy these properties. So, we've introduced these various quantities. So, what are we going to do with them? Well, what we want to do is talk about basically the analog of this Riemann map that we had before that nicely described the topological, well, gave us a nice description of the topological dynamics of what's going on. In order to do that, we have, well, in particular, to get started with, we have this nice result of Rempa from 2009. So, he states this for more general functions g and f, but if we just have f in the class b, then we can take some g, which is f of lambda z, as just a disjoint type function. So, we take lambda as sufficiently small, say, so that g is actually of disjoint type. And what you can think of with taking this lambda is basically we're moving around within the parameter space of f, is basically the idea here. And so, if we have this, then there exists some quasi-conformal map that we had, such that, well, restricted, well, so we have this conjugacy restricted to the set, this j greater than r, or equal to r, which is the set of points in a Julia set for which the modulus of the iterates is always greater than or r. So, you can think, you know, you have your Julia set, which is some say, question of terms, and you're looking at all of those guys that are always stay greater than or equal to some value, r. For instance. Correct. So, this is not necessarily in this escaping set, correct. It's, yeah, just, just, okay. So, now, slide. Okay. So, from this result, what we get is that if we have two functions that are posting rebounded, then this map theta hat extends to some nice natural projection on the escaping set. So, we had our theta, which is a quasi-conformal map, we'll define on the plane, but in particular, we have this conjugacy on this j greater than or equal to r. And from it, we can get this map theta, which is slightly different off of this j greater than or, but we nicely have that it's a bi-adjection from the escaping set of f1 to the escaping set of f2. And so, this we can think of basically as an analog to our Riemann map for our polynomial. 
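One way to write the statement just invoked (our paraphrase of the 2009 rigidity result described above): if f \in \mathcal{B} and g(z) = f(\lambda z) is of disjoint type for a suitable small \lambda, then there exist R > 0 and a quasiconformal map \theta \colon \mathbb{C} \to \mathbb{C} such that
\[
  \theta \circ f \;=\; g \circ \theta
  \quad \text{on} \quad
  J_{\ge R}(f) \;:=\; \{\, z \in J(f) \;:\; |f^{n}(z)| \ge R \ \text{for all} \ n \ge 0 \,\}.
\]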
We had this nice Riemann map from, well, basically the escaping set of one polynomial to the escaping set of another polynomial, of another polynomial. And this was basically the analog. And so, what we do is we define docile functions as being functions that are posting rebounded. And again, we want g to be a disjoint type, so we move around within our parameter space to make lambda sufficiently small so that we obtain something of disjoint type. And then, this, if this bijection extends to a continuous function on the Julia set, then we call this map docile. So, what we're really doing is we're saying, okay, we have this analog of this map that looks like the Riemann map for polynomial. And what we do is we call these functions docile if, as for polynomials, this extends to the boundary, if this also basically extends in a sense to the boundary. And so, related back to these strongly geometrically finite functions, so Alamed, Rempa, and Sixth Miss show that if you have a strongly geometrically finite function, then it is automatically docile. And so, we'll see later as our example to show that if we have a docile function, it isn't automatically strongly geometrically finite. So, again, to reiterate, this is basically an analog of what we have for this Riemann map extending to the boundary in this polynomial. So, if we have our function is just a disjoint type, then I guess it's automatically docile. Is that what you're asking? That if lambda is equal to one? Sorry, I don't understand. So, here, we start with a function which is postingly bounded in class B. And from it, we get a G by taking this disk, basically, to be small enough that we get a disjoint type function that we shrink down this disk. So, we force it by taking a particular lambda. And so, from that, we automatically get this map theta that we talked about before. And we ask whether or not this map theta actually nicely extends. Yes. I believe so. Yes. Yes. So, it's for some lambda. So, you take G of z to be some lambda for which it's a disjoint type. Yes, that is correct. So, you're just taking, so you're just choosing G to be that we have some lambda that's small enough so it's a disjoint type. And then, we call the function docile if we have this theta here extends to a map on the Julia set, just conduciate to the map. So, we can do that. No, maybe we can talk about after. Yes. So, we want to show various properties of these docile functions. So, in particular, one obvious one is that, well, if we're looking at this Riemann map that we have, well, the Riemann map is going to extend to the boundary independent of whether or not we're looking at f or f2 or f3 or f to the n. But this isn't immediately obvious if we're looking at the nth iterate of f. So, in fact, it's true that if some function f is docile, if and only if f to the n is docile. And our main result is, so, if we're given a post-singularly bounded docile function and some compact connected for invariant set, subset of the Julia set, so you can think like for instance the boundary of this cauliflower or you can think of basically polynomial like mappings, then this compact for invariant set K is locally connected. So, basically, what this tells us is that we do have some relation between these docile functions and local connectivity. However, of course, just because the fact that function is docile doesn't mean that the function itself is locally connected. 
For example, we'll see in the next slide, or if you have a function for which the f2 set is unbounded, then the Julia set definitely won't be locally connected. Yes. Well, as a set, yes. So, your set K. No, just from the point of view of K. Yes. So, here we have, for instance, here's say, cauliflower and the rabbit. And what we're saying is that we have the boundary here, which is going to be our set K, and we're saying that that is locally connected if it's docile and the same for the rabbit. And so, a very brief sketch of the proof. So, basically, what we have is we're given some compact set K here and which is for our function f, which is docile. And what we want to do is we want to show that in fact K is locally connected. And so, what we can do is we can take, basically, the Julia set is landing at various points on our set K. And we look at our map theta that takes us from our disjoint type function G. And we can identify each of these points that land here, which each of these points that land over here. And so, using the properties of our nice bijection theta, we can basically use topological properties of what we know over here in order to get that we have a nice ordering. And so, in fact, this map, or this set K is locally connected. Don't worry. I'll change back to the slides just as soon as you get it done. Don't worry. Okay. And just finally, as an example of a function that's docile but not electronically finite, we have this one that was studied by Bergweil before who showed that this black part here, which is the ptoset, is completely invariant. Thank you. And what we do is we say, well, it has an indirect asymptotic value, which indirect asymptotic value is a type of asymptotic value in the Julia set. So it's not strongly geometrically finite. However, it is docile. In order to do this, what we basically do is construct some basically uniform expansion with respect to symmetric that we construct in a neighborhood of the Julia set. So there we go. Thank you very much. So questions for James. More questions because there were a lot during the talk. So just for curiosity, when you mentioned that this conjugacy on the Julia set on the escaping set, excuse me, was the analog to the Riemann map. Yes. So if you took a sequence of polynomials converging to the exponential, for example, and you looked at all those Riemann maps from the basin of infinity, so from the escaping set, would you get convergence to this map? This is a good question. I do not know. Perhaps losses in the audience, he might know. But I am in the audience, but I frankly never thought about that. But that's a good point. I mean, there are these, you know, there's this work of the Van Gogh, but how about while the original preprinted then later published with some additional people, where they were looking at this convergence of these hares to the exponential. So now you'd be looking at these Riemann maps and they would kind of, you know, the compliments should converge to the to the cantilever K. It's a good question. It's a very good question. Okay. There are some sense in which these converge quite possibly. Okay, okay. Thank you. So any more questions? And let's thank James again and let's get ready for the next talk.
Several important problems in complex dynamics are centered around the local connectivity of Julia sets of polynomials and of the Mandelbrot set. Importantly, when the Julia set of a polynomial is locally connected, the topological dynamics of the map can be completely described as a quotient of a power map on the circle. Local connectivity of the Julia set is less significant for transcendental entire functions. Nevertheless, by restricting to a class of transcendental entire functions, known as docile functions, we obtain a similar concept by describing the topological dynamics as a quotient of a simpler disjoint-type map. We will discuss the notion of docile functions, as well as some of their properties. This is joint work with Lasse Rempe.
10.5446/57323 (DOI)
What I wanted to tell you about is really iterated monodromy groups, which is a topic that we already had in Volodia's talk yesterday. And I wanted to focus very much on the particular situation of quadratic polynomials with one positive sign that many things become much simpler when you concentrate on this special case. And I will try to give many examples rather than two general theories. And the other is that we could push the theory in the direction which really interests people in dynamics, namely not the critically finite maps, but critically infinite. So how do you take care of a critically infinite map, which corresponds to infinitely many punctures on the sphere, as we saw in Hamel's talk, is much more subtle in terms of the parameter space, and also from the group theory perspective, where we have to deal with more complicated groups. So first, very quickly, the general setting. You will recall from yesterday that we have a self-covering of a manifold map. Sorry, self-covering of topological space M, which you can assume is metric. And we can assume also the map F is expanding locally for the metric. But I won't write the hypothesis yet. If we take T in M, we produce the tree of pre-images. And this is a tree varying continuously. Dating in progress. So this is really a disjoint union of all the pre-images. And you must think about this as some sort of an abstract tree. So we have M here in the covering. We have our base point T. And then there are two pre-images. And then there are four pre-images. And so on. So this is really the tree that you should keep in mind. Written in this way, it's a set of vertices. And I connect by F. So I connect to Y to F of Y. Okay. So as I move my point T, as I said, these points will move too. And if the map is expanding, then these points will move more slowly than the way T moved. In all cases, if I drag T around the loop in this space, then these points will also be dragged, maybe along a loop and maybe along a path, which is not a loop. But this is the monodromy action, which is that the fundamental group of my space, base at T, acts on this way. Right? So for now, I'm just repeating what we saw before. But the important thing is that there is just one. So let's say that this is degree D. This is going to be a deregular root of tree. There's just one deregular root of tree. Vertex and then d ancestors and d squared, double ancestors and so on. But there are many ways of writing it, even though there's only one. We would like to identify this tree with something symbolic. The natural symbolic object to associate to it is the tree of words. Words over an alphabet with a d symbols. And here we connect, say a word w, say w x1 xn to x1. We connect two sequences by an edge if we agree up to position n minus 1 and then there's an extra letter for one of them. And one way of doing this identification is to give some labels. Maybe I should start using colors. So call this one 0, call this one 1, for example. And by lifting them, I could say that these would be 0s, these would be 1. But a very nice way of doing it topologically, let me use a different color for this, is that we can copy the pre-images of t in here. This will be a image. And I can choose some paths in here and label these paths 0 and 1. This gives me another way of identifying the vertices of the tree and the edges and to do some symbolic identification. So the point is that this path here is the same as this 0. And I can take the lift of this little squiggle here that is based at here. 
There would be a little squiggle here and a little squiggle here. And from here there will be a little squiggle here and a little squiggle here. And now this I can use to give some symbolic encoding to the edge here, to the edge here and here and here. This is not the same thing as the encoding that you see by following these arrows. But it's a very good way of doing an encoding. And in fact, this is the one that we will use. I call it geometry coding tree unless your original was dynamical tree. Yes. So using this we can describe the monodrome action. This is also a picture that Volodya explained. It would take x to be these paths. And now there is a map. From x cross g into g cross x. And I will write this map in the form x i hash g equals h hash xj. And this hash symbol means just concatenation of appropriate lifts of paths. So here I say I take x i which will be maybe this path here. And then I have an element g which is maybe this one in the fundamental group. I want to write the concatenation of this x i with g. But that's not possible because x i stops here and g starts here. But there's a unique lift by f of g that starts here. And well, it's in fact going to be this thing here. You have to always remember that the picture takes place in some other image. It will be this arc here. And now this is a path from t to a preimage of t. And there's a unique way of writing it as some loop at t followed by a path from t to that preimage. So the choice of j here is dictated by where this lift of g stops. It has to be this one. And then there's a unique element h that gives this equality as an equality of paths up to homotopy. So this is the data that we have just from looking at paths and seeing how they are lifted. And this data is sufficient to reconstruct the action on the tree of words so that it's up a space of words. Now, unfortunately, there's a terrible confusion in this area which comes from left and right choices. There's just no way to get out of it. I wrote this map in the natural order because this is how you see concatenations. I could have written these paths in a reverse direction. Then you would have seen everything in the other order. I think the only important thing is to stick to the same convention between the first and the last page of an article. And apart from that, there's just no way out. One of the reasons is, in fact, that you shouldn't think about left and right, but you should write things on a square. So there would be an up-down in another direction. Maybe I will explain this. In all cases, if you're given g in the group and x1 up to xn star, well, now I'm going to act from this side onwards just because I don't want to introduce any extra artificial reversion. Maybe I should write my words in this way. Well, you just write x1 up to xn dot g. And then this map phi tells you that when you have a g, you can switch it past the x and get a new element of x and a new element of g. This will be some g1 y1. And then there's an x2 here. So we'll switch these two. Then this will give you g2 y2 and so on. And I can rewrite it using my rule phi in a unique way as y1 yn. And then there will be some extra element h. This is the result of the action. Okay, so there are many convenient ways of doing these calculations. And I certainly don't want to harass you with all the different formalisms. But it's probably a good idea, nevertheless, to know that they exist. So, maybe I should give an example. Maybe the example even fits here. 
So, Volodya, give the example of the basilica. I want to do something simpler. Z squared, seen as a covering. And c minus 0. So in that situation, the fundamental group g has, oh, he's already used. Hey. Gamma. And this is a point that's sandwiched by a loop. Around zero. There's the puncture. I must choose a base point somewhere. So let's choose it here, for example. Then t will have two pre-images, the square root of t somewhere here, and minus the square root of t. And now a choice of the basis will be an x0 here, and maybe an x1 here. OK, to compute phi, I must say that I have to say x0 and gamma and x1 and gamma. What are they? Well, if I do x0 and then gamma, so gamma is in fact rather, it's a loop-based phi. It would rather be something like this. So one lift, and again I should use, so not to start on my picture, one lift will be like this, and the other lift will be like that. Now if I do x0 and gamma, this is really up to homotopy. The same thing is x1. And if I do x1 and then gamma, this is the same thing as x0. OK, so this is the algebraic description of the map z squared. In that, well, that's one. That's what you wanted? Yes, you have a map to g cross x, and oh, oh, oh, oh, oh. So no, no, no, no, it's not. Thank you very much. Yes. So if I do x1 followed by gamma, I stop at x0, but it really is gamma followed by x0. Yes, yes, thank you very much. Yes, it's not just the endpoint, but this x1 followed here is really going once around. OK, right, so this is one way of writing the information. And in some sense, the best way of writing the map y is by a square. If you put an x0 here in the gamma here, then you'll put an x1. I put a gamma and x0. I have an x1 in the 1. And if you put gamma, x1 will get x0 and gamma. So these things come with a solution. And in this way, if you want to compute the action of gamma on some word, well, you write gamma here. And you write the word here. It will be x0. x1. x1. x0. x1. x0. x1. x0. x0. x1. x1. x1. x0. x1. x1. And then there's a unique way of completing this picture with the squares. And here I have to use this square. So I have to put an x0 in the gamma here. And then I have to put again the same square. Again the same square. Now I have to put that square, which gives me an x1 and a 1. And then the 1, I didn't write the formula for 1, because obviously it's enough to write it for generators. 1 will just give me, again, a 1. And this is the result of the action on that word. So if you do things this way, you never have to worry about left and right. Unique natural way of working. Another convenient language is to say that you want to focus really on the action of the elements. So rather than writing a map from x cross g to g cross x, you write a map from the maps from x into g cross x. So, ultimately, we can say that gamma gives, well, now I have to write the pair of elements that I get here. This is called x0. This is x1. And I have to remember what happens to the symbols themselves. I will say that x0 and x1 are exchanged. And the number. This is exactly the same data just written with different symbols. And finally, one more notation for this is if you are so lucky that there exists S finite for which phi of x cross s is contained in s cross x, then you can write a graph. With vertices S and edges, it will go from some S to some T with label x goes to y. And this you do each time phi of x s is equal to T1. These are called automata. So in this situation, the automaton would have one vertex. They're called states. 
They're called gamma, one for the identity, then 0 becomes a 1. And 1. Now reading paths in these automaton is the same thing as reading words. And quite clearly, if I follow the path with x1, x1, x1, x001, that will be x1, x1, x1, x0, x0, x1. By looking at the input symbols on every edge, I look at the corresponding output symbols on the same edges. And this gives me exactly the result of the action. I can't really say that one notation is better than the other. Still about this example here. I want to do just a little bit more, which is that we're allowed to mark other points than just 0. I could also have decided to mark 1 if I wanted. And I could also have added one generator, which does a loop around 1, delta. This is perfectly legal, even though it doesn't seem very smart. And then the recursion for delta will be that if, I do x0 and then the lift of delta, well, one lift of delta will be delta itself. This will be that. On the other hand, if I do first x1 and then the lift, this lift surrounds nothing. So it's invisible. And it just gives me again x1. Good. So one question I have not answered is, why is this in any way useful? So one reason I believe it's useful is that you can use it for calculations that involve some extra homotopical data. So for example, I talked here about the map F. And I described the recursion for the map F. We could also have considered the map F of z equals 1 minus z. It doesn't seem to be a very exciting map from the dynamical perspective. This is a degree 1 map. So the basis x used to describe it will have a single element. That's where it's y0. And if I go back to my picture here, I have a loop around 0, loop around 1, and it will just be exchanged. So the recursions that I have will be that y0 hash delta is gamma hash y0. And y0 hash gamma is delta hash y0. Now I can compose. So if I look at the composition of 1 minus z with z squared, this is essentially the basilica. To change of sign, this is the basilica. And you can compose the operations. The basis will naturally be, well, I first do x and then y, or I first do x1 and then y. And if you go through this, you will recover the recursions that Volodia gave us yesterday. So there's an important message behind this, which is that you can construct more complicated objects. Such as the basilica, starting by something you understand well, like z squared. And here you can put, in fact, any Mobius transformation. So this lets you describe all quadratic polynomials as soon as you understand z squared, and you understand the recursion associated to a Mobius transformation, a degree 1 map. In fact, you can do all rational maps in this manner. So you don't have to do this complicated path tracing all the time. You have to do it just for the map z squared, which we could do without too much difficulty, and then use compositions. Now it's quite natural to describe Mobius transformation, or mapping classes in terms of group theory. Here, this map 1 minus z is just exchanging from truth, so it's action on the fundamental group is just a switch to generate. And we want to combine this efficient notation for mapping classes with the dynamics coming from z. And this is the way to do it. OK, so yes, well, how do you describe a mapping class? See if a punctured sphere. A finitely punctured sphere. So you can see that the map 1 minus z is just a switch to generate a map. And you can see that the map 1 minus z is just a switch to generate a map. And you can see that the map 1 minus z is just a switch to generate a map. 
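Before moving on to mapping classes, here is a minimal executable sketch (ours, not from the talk) of the recursion for z squared written above, with the convention gamma(0w) = 1w and gamma(1w) = 0·gamma(w); reading the first letter of a word as the least significant binary digit, gamma acts as the binary odometer, i.e. the "adding machine".

# Illustration only: the wreath recursion of gamma for z^2 on C \ {0},
#   gamma(0 w) = 1 w          (trivial restriction)
#   gamma(1 w) = 0 gamma(w)   (restriction gamma, i.e. a carry)
def gamma(word):
    """Act on a finite word over {0, 1}; the first letter is the level-1 letter."""
    if not word:
        return []
    if word[0] == 0:
        return [1] + word[1:]
    return [0] + gamma(word[1:])

def value(word):
    """Read the word as a binary integer, least significant digit first."""
    return sum(bit << i for i, bit in enumerate(word))

w = [0, 0, 0, 0]                      # the level-4 vertex 0000
for k in range(20):
    assert value(w) == k % 16         # gamma acts as +1 modulo 2^4
    w = gamma(w)
print("gamma acts on level 4 as a single 16-cycle: the binary odometer")

The same bookkeeping, applied to the recursion for 1 minus z and to compositions as above, reproduces the recursions for quadratic polynomials such as the basilica.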
Well, in general, what is a mapping class? So a good way of representing one is saying that it's an automorphism of the fundamental group. And if it's an automorphism of the fundamental group, this gives me directly the recursion here, which says that y0, my single basis element, hg, is y of g, the image under the automorphism, hash y0. This is just a way of copying the automorphism of the free group. Aha. That's an excellent question. I don't want to go too far into this, but the whole point about having this flexibility of choosing a y0 is dealing with base points. So once the y0 has been fixed, I have made a choice of a base point. And if you look at this object up to a choice of y0, then this is base point independent. But this is really the whole beauty of the theory. OK. So. Is the method that the free class is to put out a loop? Yes. Exactly. And the inner automorphism precisely corresponds to a change of base point. A change of base point. And replacing the y0, this basis element, which maybe you thought should be just a constant path at the base point, into some loop expressing this change of base point. I'll say something that is probably very useful in this respect when I get to more details on quadratic polynomials. There is maybe just one thing I wanted to add about the automata picture. Maybe I can respond. Oh, yes. Good. So we can be so lucky that there is a finite s. And then we draw a finite graph. In general, we could have done this for any s, even the whole group itself, if we wanted. But when we are in this favorable situation of the existence of finite set s, containing, say, the group's generators, we do have a complete encoding of the data by this finite. Now, I didn't explain this maybe sufficiently. But if you want to act by gamma squared, well, you will just put two gammas next to each other. And you will complete this picture. And here we have to use just once this tile here, which gives us some x1. And then all of these will be once we've got these x0. What we see in particular is acting by gamma squared, lifts to an action by gamma each time, and then to the identity. So in the automaton picture, gamma squared could have been added, but it directly leads gamma. And gamma cubed and so on would also lead to here. There is another element, gamma inverse, which does the opposite as that one. Now we are here in this very favorable situation that there exists a finite attractor in this automaton, in that wherever you start from, any element g, it will follow arrows, and eventually it will hit this set. This is exactly what Volodya had defined as the nucleus. And then there are very strong results that say, for example, that if the map f, the self-covering we started with is expanding, then there will exist a finite nucleus. And this nucleus, well, the automaton that you see here, not just the set, but the graph structure with a group, is really the invariant characterizing the map. OK, so this finite. If it's going to hit. Good. So now I want really to move to the situation of quadratic polynomials and see as much as possible about them. We can raise. I don't think I can move two things at the same time, right? No, I cannot. So my goal is to give you something like a iterated monodromy group dictionary. 
In that, the study of quadratic polynomials has been pushed so far in the last 50 years, since the birth or so of Hamal, that there are many concepts that have proven useful, and many of them have a translation to the algebraic world of these iterated monodromy groups. And I will not give many new results, but I will hopefully show that concepts on both sides are tightly related, with the hope that if we want to study cubic polynomials, higher degree polynomials, rational maps, or other such objects, they would help in leading to useful generalizations. So the example I will mostly concentrate on is the rabbit, probably the nicest one, on which we can describe things. So this is the map, whose Julia set will look like this. The postcritical set consists of these points. And I can consider the map f as a map on the Julia set, or just on the sphere punctured at these three points. Remembering that it's not exactly a covering, because, well, there are the pre-images here, which don't know where to map to. But I still have this unique path lifting property on the sphere punctured at these three points. So there will be a natural group generated by three elements going around these punctures. Now how do we write the recursion, or the automaton, or the squares, or any of these pieces of data? So one thing we can do, and this takes directly inspiration from what complex dynamics has always done, is we cut here at angle 1 over 14. And that's 1 over 14 plus 1 half, which, well, OK, I could take one of the seven. Yes, sir. So I can cut in this way, and I can use this as a basis for my bison. I will put my base point t close to infinity, and then there will be some pre-images of t. Well, one pre-image would be close to infinity also. Let's not bother about where, and the other pre-image will be minus that one. I can take one connection, let's say x0, which is this one, and another connection, which is this one. Same point. So this will be x1. And now I would like the nicest possible formulas to in particular this situation of an automaton. Well, for this, the solution also is already well known. I have to find what is called an invariant spider. So these are the legs of the spider. The body of the spider is here at infinity. And the spider has legs going to the spider. So saying it's an invariant spider means that it's a collection of curves such that if I lift them by the map f, either they become trivial or they are part of my collection of curves. And in particular, there is an algorithm, you can dirk, oh, hello dirk, and hamal, that says that you can construct, well, you can reconstruct maps out of topological data by a spider algorithm, which says you start by any spider, and you lift, and you lift, and you lift, and you straighten the legs, and you will converge to an invariant spider. An algorithm that actually works in practice in quite a few cases. It can be very much improved using iterated monotomy groups, precisely because we can decouple the lifting operation, which is analytic and the group theory is very much in the information. It's like in a mapping class group. We don't have to construct complicated edges as segments. We can construct them as words in a group. In all cases, if we have an invariant spider, such as this one, we can write a recursion. Here the recursion will be, let me write it in this form with the arrows. And here exchange x0 and x1. So this is also something that you can write as a graph. A goes to C, goes to B, goes to H with the labels 1, 0, 0. And here 0. 
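For reference, one consistent way to write the recursion just stated on the board (the assignment of the two cases to the symbols 0 and 1 of the kneading sequence depends on the labelling convention, so this is a fixed choice rather than a canonical one): for a quadratic polynomial whose critical point is periodic of period n, with kneading sequence u = u_1 \dots u_{n-1}, the group K(u) is generated by a_1, \dots, a_n subject to the wreath recursion
\[
  a_1 = \sigma\,(1,\, a_n), \qquad
  a_{i+1} =
  \begin{cases}
    (1,\, a_i) & \text{if } u_i = 1,\\[2pt]
    (a_i,\, 1) & \text{if } u_i = 0,
  \end{cases}
  \qquad 1 \le i \le n-1,
\]
where \sigma exchanges the two subtrees and (g, h) acts as g on the 0-subtree and as h on the 1-subtree.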
So this is the beginning of the dictionary problem, which is basis, cut along the critical value. And then what I would call automaton, so existence of an automaton would correspond in my dictionary to the existence of an invariant spider. And we can do a little bit more. And that is a sequence here, 1, 0, 0. But this one, it's both a 1 and a 0, in fact. If I had written the inverses of the elements, I would have switched these. So what you see here is something like star, 0, 0. And in fact, this is the general situation. So maybe I can call this a theorem, which is that for periodic critical point, the recursion has generators a1 to an. And the formula is that a1 is 1an. And then this exchange, 1. And ai plus 1 is 1ai, or ai1. And this is in some case where sequence u1 is 1. I remember this position. This is ui is 0. So this defines a group, k of u. And u is the sequence u1, un minus 1. And u is the needing sequence. And then this is label on cycle. Now, I do want to check the time. This is maybe not the best choice of basis for the bicep, but the choice of connection. And in fact, there's another choice that I want to move to now. Ok. So, again, the rabbit, again the base point, here. And I can think that a polynomial is a deformation of the monomial z to the d, z squared in our case. And we want to give a special status to this. So, we could want the connections between t and its free mergers to be close to infinity. So, I could choose this path. This is x0 and this is x1. Now, this will have the advantage that if I consider this generator here, g infinity, the formula for g infinity is exactly the one that we computed in the beginning for the matrix squared. So, g infinity will have this form or in automata language. So, this will be 1, 0, 0 goes to 1, 1. Now, having infinity on the picture is not very convenient. So, let me draw the same infinity as the way we think about it on the plane rather than on the sphere. This will be my... And in fact, the t would be rather here. And the free mergers, which would be square roots, would be rather here. This is x0. This is x1. Now, remember that we had the tree of free mergers, which was a completely abstract tree constructed from the map f. But it also has an actual representation here. So, let me draw the lift of this. And this tree will approach the Julia set. If you remember that the map is expanding, this means that these lifts get shorter and shorter. And they will approach and they will collide with some. So, this is a place where they collide with exactly correspond to some group elements that constantly keep some activity. So, I'm going to connect this to the nucleus. But in all cases, by making this choice, this finite choice of an x0 here and an x1 there, I can again write the bisect. Now, my natural generators would be external rays. External rays and maybe an internal ray. So, I can write... Let me give them names. So, this is 1 over 7. This is 2 over 7. And this is 4 over 7. So, g4 over 7 is easy to write because one lift would be 2 over 7 in taking a square root and the other lift would be inexistent. 2 over 7 is also easy to lift. And finally, the 1 over 7 is not so easy to lift. Well, it becomes 1 over 14. I think I had done before. And then it crosses here and then returns that way. On the first side, in fact, it's encircling these two in reverse order. That would be g1 over 7 inverse. G2 over 7 inverse. In the second coordinate, it's doing these two and then also the 4 over 7. So, this looks a little bit messy. 
And the reason it's messy is that we didn't choose exactly the right generators. So, a much more clever choice of generators is by considering the arbitrary or more precisely the extended arbitrary. So, if I want to have a state closed formula, a finite set S with labels to itself, I should consider this element as part of my set S and I should take this one and then I should write the formulas for these and keep working with them. And in fact, S can be conveniently written as those paths crossing once. If you take something that crosses once, when it's lift, will cross it most once. So, it's automatically a set having this property that it gives you a finite. And in fact, you can draw the automaton directly on the picture by putting your states here for crossing here, here, here, here, here, here, and here. These are the eight states of the automaton and I won't do it now for lack of time. But if you choose these states and you take this to represent the path that crosses once, across the arbitrary, I'm connecting to infinity, take extension. There's a unique way of coming back. Then you will have an automaton. So, I want to keep here basis will be connected to infinity. So, we're thinking about deformation. And then automaton, the existence of an automaton is really the existence of a arbitrary. And then the recursions themselves, what you see here, are the itineraries. So, yeah, it seems magical to write formulas and lift them and so on. But if you know the arbitrary, you just draw the automaton next to it and you have it. Okay, so, now let me write. Three minutes. Well, well, well. Sorry. So, you mean less than three minutes. One minute more. Sorry. Four minutes. Four minutes more. Yes, very good. So, one thing I want to say is that, four, so if the postcritical set is infinite. Possibly. Then sort of the same picture will hold. If you look at these formulas, you can multiply four over seven times two over seven times one over seven. This is really multiplying them, coordinates by coordinates. You see it cancels here and here. You get again the same element four over seven, two over seven, one over seven. So, this product here is the same thing as G infinity, the loop that we had. So, the general situation is the following, that we have G infinity expressed as a product of elements. And this could be infinite product. To take an infinite product, you need some topology. The topology of, for example, the automorphism group of the tree. And then you have formulas of the form G i is G i minus one one or one G i minus one except for the first one G zero, which will be equal to some product G i on some index. And. And there's in fact a straightforward way of constructing such expressions. So I here, this I is countable. It's ordered. And we can assume that it's a subset of the circle. That's the way it's always given. But in fact, here's the fundamental example. If you fix an angle phi. Then you can construct generators. And then you can construct a generator. So, G theta. So theta in the circle. There will be, of course, continuously many, but all will be trivial except countably many of them. So G theta is G theta over two, G theta plus one over two. Not equal to phi. And G phi will be equal to here, the product on all theta in the interval phi, phi over two, phi plus one over two of G theta inverse times the product. So, the product on all theta, now in the closed interval. And here exchange x zero. Now these are the invariance that are associated to the angle phi. 
More precisely, the group which is considered is G sub phi, which is the group generated by all intervals. The product. So this lets me just the second I need for a theorem. Now I have to insert somebody here. Which will be, which is that the collection of G phi, up to isomorphism, well, up to subgroups, backing on the tree, as subgroups of the automorphism group of the tree, is the same thing as the Mandelbrot set over combinatorial equivalence. So here I identify two points in the Mandelbrot set if they have the same hyperbolic components between z squared and themselves. So, if the same hyperbolic components between them. So Mandelbrot set here, up to combinatorial equivalence, corresponds here to a family of groups. And I think, the binary. There's a natural relation on the Mandelbrot set, which is being a limb of one the other. There's a natural relation on groups, which I will not explain because direct forbid me to explain it. And then this is an isomorphism of posets. Thank you very much. Thank you very much. Thank you very much. There is no time for question, but one very short question I would allow. No question, no questions from online participants. So again, thank you. Thank you very much.
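To experiment with the groups K(u) and the recursions described in this talk, here is a small computational sketch (ours): it builds the generators a_1, …, a_n from a kneading sequence u, with the same convention as in the displayed recursion earlier, and lets them act on binary words. The example kneading sequence is chosen purely for illustration.

from itertools import product

def kneading_generators(u):
    """Wreath recursion of K(u): each generator is (swap, restriction_at_0, restriction_at_1)."""
    n = len(u) + 1
    gens = {'a1': (True, None, f'a{n}')}              # a_1 = sigma (1, a_n)
    for i, ui in enumerate(u, start=1):               # a_{i+1} is determined by u_i
        gens[f'a{i+1}'] = (False, f'a{i}', None) if ui == 0 else (False, None, f'a{i}')
    return gens

def act(gens, name, word):
    """Apply the generator `name` (or the identity, name=None) to a binary word."""
    if name is None or not word:
        return list(word)
    swap, r0, r1 = gens[name]
    x, rest = word[0], list(word[1:])
    return [1 - x if swap else x] + act(gens, r0 if x == 0 else r1, rest)

gens = kneading_generators([1, 0])                    # an illustrative period-3 kneading sequence
for name in gens:
    level3 = {w: tuple(act(gens, name, list(w))) for w in product((0, 1), repeat=3)}
    moved = sum(1 for w, v in level3.items() if v != w)
    print(name, "moves", moved, "of the 8 level-3 vertices")

Printing such level-by-level data for various kneading sequences gives a hands-on feel for how the generators act; it is this family of groups, over all angles, that organizes the "Mandelbrot set of groups" in the theorem above.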
Quadratic polynomials have been investigated since the beginnings of complex dynamics, and are often approached through combinatorial theories such as laminations or Hubbard trees. I will explain how both of these approaches fit in a more algebraic framework: that of iterated monodromy groups. The invariant associated with a quadratic polynomial is a group acting on the infinite binary tree, these groups are interesting in their own right, and provide insight and structure to complex dynamics: I will explain in particular how the conversion between Hubbard trees and external angles amounts to a change of basis, how the limbs and wakes may be defined in the language of group theory, and present a model of the Mandelbrot set consisting of groups. This is joint work with Dzmitry Dudko and Volodymyr Nekrashevych.
10.5446/57326 (DOI)
All right, so today I want to talk about iterated monodromy groups and transcendental dynamics, which was also the title of my thesis. What I will be focusing on today is basically how points in a backward orbit of a point move. We have F, an entire function, and we have some base point T. We can look at its forward orbit, but we can also look at its backward orbit. And the basic slogan behind iterated monodromy groups is: how do points in a backward orbit move if we move our base point around along a loop? The possible ways to permute the backward orbit can be described by the iterated monodromy group. Iterated monodromy groups are examples of self-similar groups, and self-similar groups often have very exotic geometric group properties. For example, one of the first examples of groups of intermediate growth, the very celebrated Grigorchuk group, is a self-similar group. And similarly, the Basilica group, which you have already seen in the talks by Laurent and Volodya, is an example of an amenable group which is not elementary amenable. In my thesis, I initiated the study of iterated monodromy groups for entire functions, and one of the main results of the thesis is the following theorem: if you have a postsingularly finite transcendental entire function, then the iterated monodromy group of F is amenable if and only if its monodromy group is. So in my talk, I will first do a brief recap on monodromy. Then I will go over to iterated monodromy groups and in particular show how to combinatorially conceptualize iterated monodromy groups of entire functions as groups determined by certain automata. And then we will see how we can prove the result that I stated in the beginning. So in order to talk about monodromy, I want to do a brief recap on singular values. I think most of you have seen this in Anna's talk. What's important for me is this function, (1 minus z) times the exponential function. This function has two singular values, 0 and 1. 0 is an asymptotic value, which you can see by going off to negative infinity: if you apply the function, you will get close to 0. And it's easy to check that its only critical value is 1, which is the image of 0, which is the only critical point. I will work with this function during my talk most of the time, so I wanted to mention it here. And why are we interested in singular values? I think this lemma was also seen multiple times already at the conference: if we have an entire function, then it restricts to an unbranched covering away from the singular set. And an unbranched covering, so a classical covering from topology, has the unique path-lifting property. So if we have some path here going around 0, then we can lift it, and loops might not close up: if you go once around 0, we may only do a half turn among the preimages. And this is where the interesting properties of monodromy come into play. So what we are doing for transcendental entire functions is the following. We consider functions in the Speiser class, so we are only interested in functions that have finitely many singular values. And we want to give a combinatorial description of the monodromy action, that is, of the action of the fundamental group of C without the singular set on the set of preimages of our base point. And the way I like to describe this is using a preferred generating set which is dual to a spider.
For me, a spider is a collection of disjoint arcs connecting every singular value to infinity. And here on the right, you can see such a spider for the example function we are considering. So what's the dual generating set? We just take loops that each cross one of the spider legs once, in the positive direction. For the spider we have seen here, we have two generators, g and h. Then we take the preimages under our entire function and we obtain the Schreier graph of the monodromy action of our function. If we take this preferred generating set, then we have some nice properties relating the orbits of the generators to preimages of the singular set. Here we can look at the colored regions and look at their preimages. From this we see that, with this generating set, trivial orbits correspond to regular preimages — so the little violet curves correspond to regular preimages of the value 1. Finite non-trivial orbits correspond to critical points: the fact that 0 is a critical point at which the map is 2-to-1 is reflected by the fact that we have an orbit of size 2 around 0. And infinite orbits correspond to logarithmic singularities: the fact that 0 is an asymptotic value is reflected by the fact that we have an infinite orbit for this generator. If we consider the action, then this also defines a group homomorphism from the fundamental group of C without the singular set to the permutation group of the preimages, and the monodromy group is the image of this morphism. For structurally finite maps — these are maps with only finitely many singularities — the monodromy group is nice enough to handle. For the exponential function, you all know that it's just Z. And for the example which I've given here, it's also not too complicated, and one way to make this precise is the following statement: monodromy groups of structurally finite maps are elementary amenable. The idea is that they all have a similar description, as an extension with groups that are roughly similar to what's happening for (1 minus z) times the exponential. So this is what I wanted to say about monodromy, and now I would go over to iterated monodromy, if there aren't any questions. Okay. So for iterated monodromy, we are going now to deal with dynamics. So — elementary amenable means that you can build your group out of finite groups and abelian groups via a list of operations such as taking extensions and taking direct limits. So here, this is an abelian group, and this is a locally finite group, so it's also elementary amenable, because it's a direct limit of finite groups; and since this is an extension of these two groups, it's also elementary amenable. It's called elementary amenable because there's a notion called amenable, which I might get to later, and these groups are amenable for trivial reasons. So this is the set of finitely supported permutations of the integers: you have permutations which only change finitely many points. So this is not a finite group, but it is locally finite. Okay. Yeah. The translation is the Z part — that's not finite — and you have the finitely supported permutations, which come from the fact that you can basically move points over here and make finite changes. So you can move up and down and you can do finite changes, and this is why you have this extension. So the monodromy group of this function is the group here in the middle: it's an extension of Z by a locally finite group. Okay.
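As a toy illustration (ours — a model of the group-theoretic phrase, not a computation of the actual monodromy group of (1 − z)e^z): inside Sym(Z), the shift s(x) = x + 1 together with the single transposition t = (0 1) generate an extension of Z by a locally finite group, because the conjugates s^k t s^{-k} are exactly the adjacent transpositions (k, k+1), and those generate the finitely supported permutations.

# Toy model of "an extension of Z by a locally finite group" inside Sym(Z).
def t(x):
    """The transposition (0 1)."""
    return {0: 1, 1: 0}.get(x, x)

def conj(k, x):
    """(s^k t s^{-k})(x), where s is the shift x -> x + 1."""
    return t(x - k) + k

for k in range(-3, 4):
    for x in range(-10, 11):
        expected = x + 1 if x == k else x - 1 if x == k + 1 else x
        assert conj(k, x) == expected     # s^k t s^{-k} is the transposition (k, k+1)
print("conjugating (0 1) by powers of the shift yields all adjacent transpositions")

Adjacent transpositions generate the locally finite group of finitely supported permutations of Z, and the quotient of the group generated by s and t by that normal subgroup is Z — the kind of extension structure described above.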
So for iterated monodromy groups, I first want to emphasize that I am only working with postsingularly finite maps. The postsingular set I assume you are all familiar with. What's important for me is that the singular set of every iterate is contained in the postsingular set; in particular, every iterate also restricts to an unbranched covering over C without the postsingular set. Then we can let the fundamental group of C without the postsingular set act on every level of preimages via monodromy. A way to organize this action is via the dynamical preimage tree, which was already drawn by both Valeria and Laura, so here I draw it again. It has our base point as root, and its vertices are the preimages of the base point. What is important to emphasize is that for a transcendental entire function, every point of this dynamical preimage tree has infinitely many children, because every point away from the postsingular set has infinitely many preimages; so it is a regular tree of countably infinite degree. Now, as I mentioned, the fundamental group of C without the postsingular set acts on every level via monodromy, and we organize these actions together into the iterated monodromy action. In fact, this action preserves the edge structure, which is easy to see once you note that edges are given by taking preimages and we are taking iterated lifts, so the property of being a preimage, and hence the edges, are preserved. So we have a group homomorphism from the fundamental group of C without the postsingular set to the automorphism group of this tree. The iterated monodromy group can then be described either as the image of this homomorphism or, by the first isomorphism theorem, as the quotient of the fundamental group by its kernel; different people have different preferences, so I mention both versions. Now that we have defined iterated monodromy groups for entire functions, I want to compare them with iterated monodromy groups of polynomials. There is the result by Laurent, Volodya and Vadim Kaimanovich that if you take a postsingularly finite polynomial, then its iterated monodromy group is amenable. They showed it by proving that groups generated by bounded activity automata on finite alphabets are amenable; by the work of Volodya and Laurent, iterated monodromy groups of polynomials are of this form, so in particular this applies. We do something similar: we realize iterated monodromy groups of entire functions via bounded activity automata on infinite alphabets. So what do I mean by this? Now I go to the language of self-similar groups. Similar to what Laurent did this morning, I am interested in automata, and for me an automaton is a map that takes a state from some state set Q and a letter of some alphabet and spits out a letter of the alphabet and a new state. You might have seen the Moore diagram of the adding machine this morning; for me it is more convenient to take the dual diagram, where the nodes are the letters of the alphabet, the colors of the edges correspond to the states, and the label of an edge is the resulting new state.
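(Aside: to fix notation for the automata just mentioned, here is a hedged LaTeX sketch. The symbols Q, X and the restriction notation g|_x are the standard ones for self-similar groups and are my choice, not necessarily the notation of the slides.)

```latex
% An automaton over a state set Q and an alphabet X is a map
\[
  \tau : Q \times X \to X \times Q, \qquad \tau(g,x) = \bigl(g(x),\, g|_x\bigr),
\]
% which extends recursively to finite words:
\[
  g(xw) = g(x)\,\bigl(g|_x\bigr)(w), \qquad x \in X,\ w \in X^{*}.
\]
% In this setting Q is finite, while X may be countably infinite.
% "Bounded activity" in the sense used in the talk: for every level n,
\[
  \#\{\, (q,w) \in Q \times X^{n} \;:\; q|_{w} \neq \mathrm{id} \,\} \;<\; \infty .
\]
```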
And again we can extend the automaton to act on finite words in the alphabet. In my setting I only consider state sets which are finite, but alphabets which may be infinite. The automata I am interested in are group automata: automata where the induced action on the alphabet is a permutation for every state, and where we keep an identity state which does nothing. If we do this, then we actually have an action not only on the first level but on every level of the standard regular tree; here is an example for the alphabet {0, 1}. These give examples of self-similar groups. And what does bounded activity mean? It means that for every length n there are only finitely many pairs of a state and a word of that length such that the restriction of the state to the word is not the identity; so for every length n, only finitely many pairs restrict non-trivially. With this in mind I can give the definition which is central to the talk, namely that of dendroid automata. We call an automaton a dendroid automaton if three conditions are satisfied. The first one I won't go into in much detail, but it says something about the topology: if you fill in the loops of the cycle diagram, the result should be contractible; that's the short version. More importantly, I want that every non-trivial state arises as a restriction at a unique pair of a state and a letter; in the language of dual Moore diagrams, every state appears exactly once as the label of an edge. And thirdly, I want that all infinite orbits have only trivial restrictions, and that every finite orbit has exactly one edge with a non-trivial restriction. So what is not allowed, for example: if I have g on the upper part of a two-cycle, there cannot also be an h on the lower part of that two-cycle; this would violate the condition. With this definition, the main structural result is the following: if we have a postsingularly finite entire function, then the iterated monodromy group can be given by such a dendroid automaton. In particular, looking at the combinatorial conditions, especially the second and third one, it is easy to convince yourself that such an automaton must have bounded activity growth. As for polynomials, for exponential functions there is a very explicit description of the resulting automaton based on the kneading sequence of the function in the exponential family. The way you arrive at these results is by using a nice labeling of the dynamical preimage tree: for exponentials we can use an explicit dynamical partition given by dynamic rays, based on the work of Schleicher and Zimmer, and for general entire functions we have to use periodic spiders, periodic only up to homotopy, which is good enough for the purposes of iterated monodromy groups. So here in this example, one preimage of this spider leg gamma 1 is gamma 0 prime, and one preimage component of gamma 0 is gamma 1 prime, and from this we get the labeling of these edges. One can say a lot more; for iterated monodromy groups in particular you can try to use them as a topological model of entire functions, which might be an algebraic approach to transcendental Thurston theory. You can also use these automata to relate postsingularly finite transcendental functions to postsingularly finite polynomials, and one can say something interesting in
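(Aside: here is my attempt to record the three dendroid conditions in LaTeX, following the verbal description above; the exact formulation in the thesis may differ.)

```latex
% A group automaton (Q, X, tau) is *dendroid* if, roughly:
\begin{enumerate}
  \item the cycle diagram of the states acting on $X$ becomes a
        contractible complex after filling in the loops
        (the topological condition only sketched in the talk);
  \item every non-trivial state $q \in Q$ occurs exactly once as a
        restriction, i.e.\ there is a unique pair $(p,x) \in Q \times X$
        with $p|_{x} = q$ (each state labels exactly one edge of the
        dual Moore diagram);
  \item every infinite orbit of a state on $X$ carries only trivial
        restrictions, and every finite orbit carries exactly one edge
        with a non-trivial restriction.
\end{enumerate}
% Conditions (2) and (3) immediately force bounded activity growth.
```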
this direction. But obviously the next step would be to look at iterated monodromy groups of meromorphic functions. So what we have done so far was going from complex dynamics to group theory by forming the iterated monodromy group, but we can also try to use the iterated monodromy group to say something about the dynamics; in particular I hope to be able to say something about landing relations of dynamic rays, or dreadlocks, with the help of iterated monodromy groups. And of course amenability is only one group-theoretical property; it is also interesting to investigate other group properties of iterated monodromy groups. So with this in mind, I hope I convinced you that we started the study of iterated monodromy groups of entire functions, and we answered the question of amenability for these functions, but there are still many interesting directions to continue. With this I thank you for your attention. Thank you very much. Are there any questions, live or online? Is it, as I imagine, very rare for these iterated monodromy groups to be amenable, so that they really correspond to just a couple of examples? If you look at the class of functions which are compositions of structurally finite maps, then the monodromy groups are amenable, and by this result the iterated monodromy groups are amenable as well. Structurally finite maps are also the maps whose parameter spaces people have studied. If you look at general entire functions, you can easily achieve virtually free groups as monodromy groups, which are basically the opposite of amenable; so it depends on what you want to look at, I would say. The fundamental groups that you are talking about are free groups almost always? Yes. They are the complements of finitely many points. You are saying that the kernel of the action is enormous, so that it usually reduces the size of that group drastically? For structurally finite maps yes, but in general the kernel can be quite small. You can construct examples of entire functions that have only three critical values, where every preimage is either regular or two-to-one, but you can put them in a line in a combinatorially interesting way such that the resulting monodromy action gives just the free product of three copies of Z/2, and then you have a virtually free group. So for general functions, if we allow complicated combinatorics far out, then of course the monodromy groups will also be complicated; but for structurally finite maps they are nice enough that the monodromy group itself is elementary amenable, whereas by the result the iterated monodromy group is amenable but in most cases not elementary amenable. If you have a rational function which is not a polynomial, is it known that the iterated monodromy group is amenable? I think the current state of the art is: if you know that it is not necessarily of bounded activity growth but of polynomial activity growth, then I think Valeria can show that it is amenable; but if you have something where, for example, the initial monodromy group is finite... The initial monodromy group is always finite for rational maps, and the question is whether you can expect the iterated monodromy group to be amenable. So for rational functions it is difficult, in particular in the case where you have a Sierpinski carpet; if you have a Sierpinski carpet, then people don't know at the moment.
For these examples: if you look at the exponential function, then you have Julia sets which are also the whole plane, but there you know that the group is amenable; so this is something which is not yet known in the rational case. More questions? Online? So let's thank the speaker again.
Iterated monodromy groups are self-similar groups associated to partial self-coverings. In my talk I will give an overview of iterated monodromy groups of post-singularly finite entire transcendental functions. These groups act self-similarly on a regular rooted tree, but in contrast to IMGs of rational functions, every vertex of the tree has countably infinite degree. I will discuss the similarities and differences of IMGs of entire transcendental functions and of polynomials, in particular in the direction of amenability.
10.5446/57327 (DOI)
I'd like to thank the organizers, especially for their patience, because I was not responsive, but they still gave me this opportunity to talk. I also have to apologize to the experts in the audience, because most of what I will talk about is like prehistory in renormalization; but since this is the first talk, I wasn't sure how much of the audience is familiar with renormalization, so let me start with the basics, and I'm sorry for those who will be bored. Okay, so what is renormalization? Do you see the top part of the window? Renormalization is a way to define a new dynamical system from a given one, via a first return map, or some sort of first return map, to a subset, and rescaling. Instead of speaking about the abstract setting, let me start with a well-known example, which is not exactly complex dynamics but real dynamics on the interval: the unimodal map and the Feigenbaum-Coullet-Tresser renormalization. Start with a real map on an interval whose graph has the shape of one hump, and look at the second iterate. Not always, but sometimes it has a shape like this, and under certain conditions you may find a subinterval such that the second iterate comes back to it as a two-to-one map. In this case, for the second iterate you draw this box, and its image will be here: the first map f sends this interval to that interval, and the second iterate comes back. You then restrict the map to the smaller interval and rescale; here the restricted map points in the opposite direction, so you may also switch the orientation, and you get a new map which again looks like a unimodal map, like the original one. The new map, coming from the restriction of the second iterate and conjugated by the rescaling map G, is called the renormalization of f defined on this subinterval J. In a sense this is an operation: given f, you get the new dynamics Rf, and later we will consider this operation as a map on a certain space. You don't always have such an invariant interval, but when you do, you are able to define Rf from f; this is called renormalization. So this is the idea of renormalization for unimodal maps: essentially you take this subinterval, iterate until the orbit comes back (in this case you need the second iterate), restrict the map there, and, to make the interval the unit interval, you rescale and possibly flip the orientation. This process is called renormalization; by renormalization I might mean the renormalized map Rf itself, but the operation is also called renormalization. In general, and this is a kind of abstract, schematic picture of renormalization, you have a dynamical system, a map defined on some space, and you take a subset of the phase space. Suppose that a certain orbit comes back if you start from there. Then you take this subset and rescale it to unit size; in the abstract setting you don't really have to rescale, and later this rescaling map may mean something different from just an affine scaling.
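(Aside: the period-doubling construction just described can be condensed into one formula. This is a sketch with my own notation for the rescaling; it is not taken from the slides.)

```latex
% Period-doubling renormalization of a unimodal map f : I -> I.
% Suppose J \subset I is an interval with f^{\circ 2}(J) \subset J and
% f^{\circ 2}|_J unimodal (two-to-one onto its image), and let
% h : I -> J be the (orientation-reversing) affine rescaling.
\[
  \mathcal{R}f \;=\; h^{-1} \circ f^{\circ 2} \circ h \;:\; I \to I .
\]
% Rf is again a unimodal map, defined whenever such a J exists;
% f is called infinitely renormalizable if R can be applied forever.
```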
And then forget about the intermediate orbit, and from for initial point, you correspond this final point of the return, and that will define a partial dynamic starting from here, the return point here, and that is renormalization in general. So from F, you construct the, in general, Rf is not the entire dynamical system on the whole subspace. You may have to restrict to some subset, but this is a kind of important idea to deal with the dynamics. So here, one construction is not a very deep thing, but I'd like to discuss about infinitely renormalizable maps. So by this, I mean that you can do this construction infinitely many times. So that means here, you have a dynamic to start with and take a subset, good subset so that some return map is defined. Then after rescaling, if it's necessary, you get a new phase space and dynamics or partial dynamics, and you take this certain subset again, and then from this, you construct a return map and so on. So from F, you construct Rf, then Rf and so on. So I'm interested in the case where you can define this for a suitable choice of these return subsets infinitely many times. So when this can be done infinitely many times, the map is called infinitely renormalizable. Of course, this construction depends on the choice of this subset, and each time you may have a different subset, but for the moment, I don't discuss too much about this choice, but when you have this construction, I say that it's infinitely renormalizable. So it doesn't always happen that the map is infinitely renormalizable. So when we assume that the map is infinitely renormalizable, it's already some special case, but there are a lot of interesting phenomenon, and there is also important meaning for infinitely renormalizable maps. So the goal is that you want to understand the original map, but still you construct this sequence of renormalization, and what you want to do is, given this sequence of renormalization, you recover some information about original map, or maybe a family of the maps that contains this F, or another map with the same type of infinite renormalization and so on. So this is the goal of this talk. And what kind of map has this infinite renormalization property I will discuss later. But so if you remember that when you restricted, say, small subset, it may take some time to come back. So it's possible that some high iterate of the original map respond to only one iterate for the return map or the renormalization. So that means the next level for the renormalization you're doing again, so you may have several iterates to come back. That means it takes even larger number of iterates for the original one. So essentially by doing this, this small number of iterates for the renormalized map respond to high iterate of the original map. So you can analyze this high iterate. So in general, when you put to the map, the high iterate depends very sensitively on the initial map. But if you assume that there is an infinite renormalization, there is a way to control a high iterate of the original map. At the same time, so this is blinding this rescaling and rescaling and so on. So small set is somehow a blow up. Like the previous slides, small interval is blown up to the unit interval and so on. So small subset is blown up to somehow a unit size or canonical size. And then in this set, you get another small set which is blown up to the larger scale and so on. 
So if you take, for example, some finite level of this structure, it corresponds to a fine, small-scale structure in the original map. This allows us to analyze the fine orbit structure of the initial dynamics f; this is the idea of how one uses a sequence of renormalizations to analyze the original map. So why do you want to study renormalization? The first motivation came from physics. Before the dynamics in mathematics, there was the theory of renormalization in statistical physics, with many studies of critical phenomena. Critical phenomena are more or less like our infinitely renormalizable case, and universal scaling phenomena were already observed there. That observation was transferred to dynamical systems, and the first case it was applied to was the cascade of period doubling bifurcations, which I mention on the next slide; it is due to Feigenbaum and to Coullet and Tresser. There are other reasons to study renormalization. One is the rigidity problem. Rigidity in this setting is like an automatic upgrade of conjugacy: suppose you have two dynamical systems and a topological conjugacy, or even a weaker combinatorial equivalence, between them; surprisingly, this sometimes automatically implies higher regularity of the conjugacy. So assuming only the weak statement, you get the stronger one. Higher regularity may mean quasiconformal or smooth or various other categories, which is much better than topological conjugacy. The rigidity phenomenon is not only a dynamical matter but a general motivation in mathematics, and it has been discussed in various settings. In complex dynamics, this rigidity question was very important above all because it is closely connected to the density of hyperbolicity of rational maps or polynomials. Hyperbolicity I will discuss later, but hyperbolic maps are the class of maps which are well understood and easier to handle, and density means there are plenty of these understandable maps. In the largest generality this is still an open question, but it was considered in special cases, especially for quadratic polynomials. Sullivan started to think about these questions and used an argument that rigidity implies density of hyperbolicity, and the positive result, density of hyperbolicity among real quadratic polynomials, was obtained by Lyubich and by Graczyk and Swiatek. This is related to the MLC conjecture, the local connectivity of the Mandelbrot set, where the rigidity question was also the key to attack the problem. For the MLC conjecture there was the work by Yoccoz: he dealt with the cases where the maps are not renormalizable, or only finitely renormalizable, and he showed basically the rigidity result, that is, the local connectivity of the Mandelbrot set at those parameters. Kahn, Lyubich and other people have worked out a large part of the local connectivity result in the complex setting, but for general parameters it is, I think, still open; I might be ignorant, but as far as I know this is the open question in the general case. For non-quadratic, higher degree real polynomials, there was big progress: hyperbolic real polynomials are dense, due to Kozlovski, Shen and van Strien, and this was also attacked via rigidity questions.
That means: suppose that, in a certain sense, two maps, say two real polynomials, are combinatorially equivalent, and ask whether they are quasiconformally conjugate. By solving these questions they obtained the positive answer for the density of hyperbolicity. The last line, which maybe I should have mentioned earlier: there is Sullivan's dictionary, mentioned in Hubbard's talk, between complex dynamics and the action of Kleinian groups on the two-sphere. The corresponding question on the other side of the dictionary is Mostow rigidity for Kleinian groups. Besides these reasons, renormalization itself is somehow a mysterious thing, because, as I am going to explain, it takes a complex one-dimensional or real one-dimensional dynamical system and produces a map on an infinite-dimensional space of dynamical systems, and it is amazing that you can say anything about such objects. As I mentioned, the renormalization operation constructing Rf from f can be thought of as a self-map of a space of dynamical systems, of some class of dynamical systems; for example, in the first example I mentioned, the space of unimodal maps: you create new unimodal maps, so renormalization is defined on a subset of this space. It is a self-map, or a partial self-map, of this infinite-dimensional space, and in a sense it can be considered as a meta-dynamics. Initially there is only a one-dimensional dynamical system, and somehow you pass to an infinite-dimensional space of dynamical systems and get a self-map by using first return maps. At first glance it looks hopeless to study such a thing, since suddenly you have an infinite-dimensional object, but it is amazing that one often finds nice structure for these maps. For example, for the Feigenbaum-Coullet-Tresser renormalization there is a fixed point, and not only a fixed point but a hyperbolic fixed point, whose unstable direction is one-dimensional and whose stable direction has codimension one; I will tell you how to use this fact to draw conclusions about bifurcations and so on. In more general cases it is not only a fixed point, but one finds an invariant set on which the renormalization, in a more general sense, acts; one obtains a horseshoe-like structure, still hyperbolic, and this has consequences for rigidity, bifurcations and so on. So let me continue with the most classical case, the unimodal map. One of the most studied families is ax(1-x), the logistic family, which is equivalent to x^2 + c. The picture shown is a bifurcation diagram: the horizontal axis is the parameter a, and the vertical axis shows the orbit of iterates after the first iterates are neglected. For small parameters you have an attracting fixed point; then it bifurcates, a period doubling bifurcation, into an attracting cycle of period two; that bifurcates into a cycle of period four, then period eight, and so on. So you can define the bifurcation parameters A1, A2 and so on, and they converge to a certain limit, A infinity.
It was observed, first conjecturally, that there is a limit: A infinity sits somewhere here, and if you compare the distances, the ratio between (A_n minus A infinity) and (A_{n+1} minus A infinity) has a limit. Not only is there a limit, but if you take different families which look like unimodal families, the limit is the same: it is universal for unimodal families, which was quite an amazing phenomenon. The explanation, using ideas from statistical physics, was the following. I told you that in the period doubling case, when you have this invariant interval, you take the second iterate, restrict and rescale, and get a new unimodal map; this gives a map from f to Rf. Suppose you can iterate this; as I mentioned, this is a meta-dynamics on the infinite-dimensional space of unimodal maps. The claim was that there is a hyperbolic fixed point with one expanding eigenvalue and codimension-one contracting directions. So the picture is as I showed before: this is the infinite-dimensional space of unimodal maps, there is a subset where the renormalization is defined, and the claim is that there is a fixed point F, meaning R of F equals F, with an expanding direction and contracting directions. If you take an arbitrary generic family in this space, it is expected to cross the codimension-one stable manifold transversally, and the intersection of the one-parameter family with the stable manifold corresponds exactly to A infinity. There is also a codimension-one submanifold where period doubling occurs; period doubling occurs when there is a fixed point with derivative minus one, so it is easy to characterize, and it is a codimension-one submanifold. If you renormalize, there is a preimage of this submanifold, mapped to the green one by R; on the preimage the second iterate has a period doubling bifurcation, which for the map itself is a period two to period four bifurcation, so that corresponds to here. The intersection here corresponds to A1, the next intersection to A2, and taking a further preimage, the intersection corresponds to A3, and so on. That means: if you look at this picture from the direction of the stable manifold, there is the stable manifold of the fixed point and these invariant manifolds of period doubling, of doubling from two to four, from four to eight, and your one-parameter family crosses them. The ratio measures the distance between A_n and A infinity and between A_{n+1} and A infinity; when you have this hyperbolic structure, the ratio should asymptotically be given by the expanding eigenvalue delta, which is greater than one. So if you look at just one family, it is an amazing fact that there is even a limit; but if you put the whole picture into this space of dynamical systems and consider the renormalization map, it is very natural that the ratio converges to a limit. And the limit is the same for other generic families, because different families see the same expanding eigenvalue. This explains the universality of this bifurcation ratio.
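(Aside: to make the universality statement concrete, here is the standard formula. The numerical value of delta is the well-known Feigenbaum constant and is my addition, not taken from the slides.)

```latex
% Period-doubling cascade in the logistic family f_a(x) = a x (1-x):
% a_n = parameter where the attracting cycle of period 2^{n-1}
% bifurcates into one of period 2^n, with a_n -> a_infinity.
\[
  \lim_{n\to\infty} \frac{a_{n}-a_{\infty}}{a_{n+1}-a_{\infty}}
  \;=\; \delta \;\approx\; 4.6692\ldots ,
\]
% equivalently a_n - a_infinity behaves like C\,\delta^{-n}.
% The same delta appears for any generic one-parameter unimodal
% family: it is the unstable eigenvalue of D\mathcal{R} at the
% renormalization fixed point.
```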
So this was the starting point of the Feigenbaum-Coullet-Tresser theory. The first claim of this type was proved by Oscar Lanford using a computer-assisted proof; the existence of the fixed point was shown by Epstein using complex-analytic techniques. Later there was work towards this claim, and more general claims, using a more theoretical or conceptual approach, started by Dennis Sullivan and carried out in the general case by Mikhail Lyubich. That was the starting point for renormalization theory. I mentioned the rigidity coming from the renormalization picture; what is it? Well, this is a complex dynamics conference, but it is easier to explain with unimodal map pictures, so let me continue with unimodal maps and period doubling renormalization. Suppose you have an infinitely renormalizable map; as I said, I am interested in infinitely renormalizable maps, and in this case let me assume it is renormalizable by the period doubling construction at every step. Then there is a first interval and its image to start with. After the first renormalization (you have to rescale, but forget about the rescaling for the moment) you get the invariant interval for the renormalized map and its image under the original dynamics, and you can continue this construction forever: each time you get two intervals inside the previous level of intervals, so each time a proper subinterval of the previous one. One can show that the sizes of the intervals shrink, and the intersection is an invariant Cantor set for this infinitely renormalizable map. Now suppose you have two such maps with the same combinatorial type. It is not necessary that you have period two at every level, but it is necessary to require the same type: for example, period two at the first level, then period three at the next level, then period four, then period two again, and so on. Say, for example, let me assume that the renormalizations have bounded periods. The rigidity, I said, is like an automatic upgrade of the conjugacy. The combinatorial type here is the combinatorics of these intervals, and under this assumption the two maps are combinatorially equivalent. But the conclusion is stronger. Here I have to assume that f and g are C^3 (one can weaken this assumption) and that the critical point is non-degenerate. Then there exists a conjugacy connecting the two maps on the limit Cantor set (sorry, I forgot to write that there is a limit Cantor set), and it is not just a topological conjugacy: the restriction to the Cantor set is smooth, with C^1 plus Hölder regularity. So this is, as I said, an automatic upgrade of the conjugacy. It follows from the convergence of the sequence of renormalizations: I am assuming that both maps are renormalizable with the same type, which implies that f and g lie, for example in the period doubling case, on the stable manifold of the fixed point. When you have a hyperbolic structure with a codimension-one stable manifold, the distance between the renormalizations, in a suitable metric, goes to zero exponentially fast, and knowing this fact you are able to conclude this type of regularity result.
That means the rigidity result. If you know this exponential convergence: the first renormalization compares the phase spaces of the two maps here, the second renormalization compares them here, and the structure in the image is related by the dynamics itself, so if the structures are similar here, the structures in the images are similar as well. That means that if you have a result of this type and you zoom into the fine structure of the Cantor set, the fine structures of the two maps are exponentially similar: at the initial stage f and g are not very close, just a bounded distance apart, at the next stage they are closer, and so on. Since the sizes shrink exponentially and the fine structures become exponentially similar, you combine these facts to obtain a C^{1+alpha} conjugacy. To go back from the nth level of renormalization to the original map you have to use the combinatorics: if the two maps are similar at small scale, you use the dynamics to transport this back to larger scales, so there is combinatorics hidden in concluding the rigidity result. For circle maps there is also a notion of renormalization. You take the circle, or the unit interval; if there is a critical point you mark it, otherwise you mark some point, and you partition the circle by its orbit or inverse orbit; repeating this subdivision by the inverse orbit of this point you get finer and finer partitions. In this case the common formulation uses, instead of the circle map itself, the so-called commuting pair: you consider the map in a larger class, where you have two branches, a map f1 and a map f2, defined on two intervals meeting at a cutting point, and at the cutting point the two compositions are defined and assumed to coincide. This is a commuting pair, and the renormalization is defined on the space of commuting pairs: from f1 and f2 you take either the composition together with f2 itself or, in the general case, f2 together with some iterate of f1 composed with f2. This again gives a commuting pair, possibly after rescaling, because the new pair is defined on subintervals. Whereas in the Feigenbaum case the combinatorics was like the adding machine, with a splitting into two pieces at each level, here the partition into subintervals is closely connected to the rotation number, to the continued fraction expansion of the rotation number. A lot of work has been done in this direction, especially for circle maps with one critical point, and also in this case, if you have convergence of the renormalizations in a suitable sense, you obtain the rigidity result: two such maps with the same rotation number, the same combinatorics, are conjugate by a smooth map. These are two examples just to give some idea of renormalization. Now let me discuss the two extremes in dynamics. On one side there is the very tame, zero-entropy minimal dynamics; on the other side the chaotic, or expanding, dynamics. One is very fragile, easy to destroy by perturbation; the other is robust, stable or structurally stable under perturbation. And one is rigid in the previous sense.
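(Aside: a hedged sketch of how the rotation number transforms under circle renormalization. Conventions for the commuting-pair renormalization vary between papers, so the formula for R below is one common normalization, not necessarily the one on the slides; the Gauss-map statement is the robust part.)

```latex
% Critical commuting pair (f_1, f_2): two interval branches meeting at
% the marked point, with f_1 \circ f_2 = f_2 \circ f_1 where defined.
% One common normalization of the renormalization (Lambda = rescaling):
\[
  \mathcal{R}(f_1,f_2) \;=\; \Lambda^{-1}\bigl(f_2,\; f_1^{\circ a}\!\circ f_2\bigr)\Lambda ,
\]
% where a is the first continued-fraction entry of the rotation number.
% On rotation numbers, R acts as the Gauss map:
\[
  \rho = [0; a_1, a_2, a_3, \dots]
  \;\longmapsto\;
  \rho(\mathcal{R}\zeta) = [0; a_2, a_3, \dots] = G(\rho),
  \qquad G(x) = \{1/x\}.
\]
```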
Within the tame class the conjugacies are automatically smooth; in the chaotic class the conjugacy is not smooth. In a sense the tame class has very tame dynamics but is very delicate to deal with; the chaotic class has chaotic dynamics, but it is easy to study via the expansion, the hyperbolicity, using Markov partitions or other methods. The typical example is an irrational rotation on one side and the angle doubling map on the other: this one is chaotic, this one is very tame. Another example: on the unit tangent bundle of a hyperbolic surface there are two, or even three, objects. One is the geodesic flow, which is chaotic; the others are the horocycle flows, namely the stable and unstable horocycle flows. They sit at the two extremes of dynamical systems, yet these two extremes are often connected to each other. For example, for the circle maps: if you semi-conjugate the irrational rotation by the chaotic doubling map, the result is the second iterate of the original irrational rotation. For the geodesic and horocycle flows on the unit tangent bundle of the surface you again have such a relation: on one side the geodesic flow, and on the other the horocycle flow and a time change of the horocycle flow. For symbolic dynamics, you have the well-known shift map, which forgets the first symbol and shifts, and on the other side the adding machine dynamics, the odometer, where you add one and carry to the right. Here again you have a relation: take the adding machine tau and semi-conjugate, or partially conjugate, it by the chaotic shift, and you get the second iterate; it is very similar to the circle case. And if you notice, for the Feigenbaum limit Cantor set the dynamics is conjugate to this adding machine, and the rescaling corresponds to this shift map. Also, to an Anosov diffeomorphism of the two-torus you can associate an irrational flow; this works directly when you have, for example, the golden mean rotation or quadratic irrational numbers, but in general you may have to change the maps so that you have a similar relation. In a sense, conjugation or semi-conjugation by the chaotic map acts like a time change of the tame map. Going back to renormalization: renormalization is exactly what connects these two extremes of dynamics. For renormalization, the target, as I said, is the infinitely renormalizable maps, which are very tame and delicate maps, like Feigenbaum maps or irrational rotations. Given such a map, you construct the sequence of renormalizations, which is, in a sense, an expanding map. It is not a dynamical system in the sense of a map from one space to itself; it is a map from one space to another space, from that space, or a subset of it, to yet another space, and so on. This picture is like a fixed point of the renormalization; in general you get different spaces at each level, but the sequence has an expanding nature, and the arguments used for chaotic dynamics can be carried over to this expanding composition of maps. So the expanding nature of the connecting maps helps us to study the target maps.
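(Aside: the "chaotic map acts like a time change or acceleration of the tame map" slogan can be checked in the two toy cases mentioned. A small LaTeX verification, with my own notation.)

```latex
% Rotation R_alpha(x) = x + alpha (mod 1) and doubling m(x) = 2x (mod 1):
\[
  m \circ R_{\alpha} \;=\; R_{2\alpha} \circ m \;=\; R_{\alpha}^{\circ 2} \circ m ,
\]
% so semi-conjugating the rotation by the doubling map yields its
% second iterate.  Similarly, for the dyadic odometer (adding machine)
% a on \{0,1\}^{\mathbb N} and the one-sided shift sigma:
\[
  \sigma \circ a^{\circ 2} \;=\; a \circ \sigma ,
\]
% i.e. the shift conjugates a^2 back to a: the renormalization (first
% return to a cylinder, read off by the shift) of the adding machine is
% the adding machine again, as on the Feigenbaum Cantor set.
```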
A single such map is very fragile and difficult to study, but with the help of these expanding maps, the renormalizations, which form a meta-dynamics, we are able to conclude something about the dynamics. Another feature: we saw this Cantor set; essentially, the renormalization sends this piece to unit scale and that piece to unit scale, and this rescaling is the expanding part. So if you can conclude something about the renormalizations, you can conclude something about the fine structure of the initial map, and this was the way to construct the smooth, C^{1+alpha}, conjugacy along the Cantor set. There is also another reason to study renormalization and rigidity; there is a lot to say in this direction, but let me go quickly. There is the result of Mané, Sad and Sullivan, and Lyubich, that in a holomorphic family of rational maps the structurally stable, or J-stable, parameters are dense. On the other hand, I mentioned that the density of hyperbolicity is one of the open questions in the general setting: the conjecture says that in the space of rational maps of degree d greater than one, or of polynomials, hyperbolic maps are dense. A theorem of Douady and Hubbard says that for quadratic polynomials, if you know the density of hyperbolicity, you can conclude that the Mandelbrot set is locally connected, and that helps to understand the whole picture of quadratic polynomials and their parameter space. On the other hand, Yoccoz's work allows us to study the maps which are not infinitely renormalizable (I see the "not" is missing on the slide, sorry); then the Mandelbrot set is locally connected at those parameters. So the remaining case is the infinitely renormalizable one, and in this direction one wants to know the rigidity of infinitely renormalizable maps; this is another reason to study rigidity and renormalization. For real maps I mentioned the result already, and the proof was given through rigidity; the general framework, the full renormalization conjecture for real maps, including many cases of complex renormalization, was established by Lyubich and his collaborators. But let me... yeah, keep this slide. Sorry. So there are various kinds of renormalizations. Sorry, me too, just a quick comment for anyone who sees this for the first time: on the previous slide, when you mentioned the theorem of Douady and Hubbard, I think it's the other way around, right? If the Mandelbrot set is locally connected, then hyperbolicity is dense. Oh, sorry, it was the opposite; I haven't done this for many years, sorry about that. So, I mentioned two types of renormalization, for which it was easy to go back to the original map once we know the sequence of renormalizations. Now I plan to discuss sector-type renormalizations, where such natural partitions of the phase space do not exist. This is the case where you have a complex map with a fixed point whose derivative is like an irrational rotation; then you cannot really divide, this dynamics does not give you a natural partition of the phase space. I will have to wrap up this talk soon. In this setting there was the work by Yoccoz on the Siegel and Brjuno theorems via renormalization, which is the following construction: starting with a fixed point of this type, take a straight segment starting from the fixed point, look at its image, and take smaller subsets so that it is nice.
Then you glue this first segment to its image under the dynamics and get the quotient space, which is conformally a punctured disk. Then again, for this sector region, you can define the first return map and obtain a new map. This defines the sequence of renormalizations, and once you know alpha_n, you know what alpha_{n+1} should be: it is like the inverse of alpha_n modulo the integers. You may have to cut off a certain part of the dynamics to be able to define the renormalization, and if you want to go back to the original map, you again have to add in some dynamics; so it is not always possible to understand the original map even if you know something about the sequence of renormalizations. Here comes the idea: basically the straight segment sits here and its image is there. To have some nice manifold-like structure, you have to take a neighborhood of this initial segment and of its image, so that you just cover the fundamental sector; the fundamental sector may look like this if you uniformize, and then you glue along the overlap region to obtain the renormalization. This is the picture in the irrationally indifferent fixed point case. Then you do the same again for the renormalization. To go back to the original dynamics it is always a problem that you have to glue: we did not start from a partition, the cutting was not canonical, and there was ambiguity from the start. Anyway, this forward construction was used by Yoccoz to study the Siegel and Brjuno theorems, and the backward construction was used for the converse of the Brjuno theorem. Here I wanted to introduce the idea of a dynamical chart, in the sense that you take a canonical space for each piece so that you have an overlap; you allow the overlap, like in the definition of a manifold: you take canonical domains and glue them along boundary regions. So you have, say, five canonical domains; you glue them together to obtain a punctured neighborhood, and there are canonical dynamics between these canonical coordinates, somehow canonical maps; gluing them together, you recover the original map. This corresponds to the first renormalization, because you are taking the first fundamental sector, the straight segment and its image, uniformized like a half strip of this shape. The next renormalization starts from here, after renormalizing: do not go back to the previous picture but stay with this one; the new renormalization gives a subset which is again a half strip, and gluing these together you obtain the piece corresponding to the smaller sectors in the original dynamics. To recover the original map with all the small sectors, you glue these smaller sectors, the half strips, and the dynamics on these half strips, and so on. This idea can be carried out. I wanted to mention the precise formulation, but if you know the definition of a manifold, it is a natural thing to define; there is the extra data of the index set, of how to glue the pieces together. Just as a manifold comes with the combinatorics of how its charts are glued, here you glue the charts and their dynamics to recover the original dynamics. And this idea can be used to understand the irrationally indifferent dynamics for a certain class of maps.
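(Aside: the rule "once you know alpha_n you know alpha_{n+1}" can be written down. The precise sign convention depends on the version of sector renormalization, Yoccoz versus near-parabolic, so the following is a sketch of the common form rather than the formula on the slides.)

```latex
% Rotation numbers under sector renormalization, alpha_0 = alpha in (0,1).
% One common convention is the Gauss map:
\[
  \alpha_{n+1} \;=\; \frac{1}{\alpha_n} \pmod{\mathbb Z},
  \qquad\text{i.e.}\quad \alpha_{n+1} = G(\alpha_n) = \{1/\alpha_n\},
\]
% while, in the convention I remember, the near-parabolic
% renormalization of Inou-Shishikura produces rotation number
% -1/alpha_n (mod Z).  In both cases the continued-fraction entries of
% alpha drive the combinatorics of the renormalization tower.
```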
Let me just say: not for the whole class, but for some subclass, which I will not define in detail today, of irrational rotation numbers with very large entries in the continued fraction, one is able to carry this out, including for the quadratic polynomial, and to conclude results about the local dynamics. So assume that in the continued fraction expansion of the irrational rotation number all entries are large. A typical conclusion you can derive is a rigidity-type result: when you have two maps in the same class with the same combinatorial type, which is the rotation number, of similar high type, then they are conjugate by a quasiconformal map on an invariant set, and slightly better than quasiconformal: asymptotically conformal at the critical point. So I wanted to say that I applied this idea of dynamical charts, but this is the only case to which it has been applied so far; I hope there will be more applications, in other renormalization settings where you cannot have a natural partition and have to allow some overlap to construct the sequence of renormalizations. And this is it. I am sorry that I had to go quickly at the end, and I am also sorry for those who had to listen to the prehistorical part of the talk. Thank you very much. Thank you. Do you have questions? No questions? Oh, there is a question, maybe. So I think there was a point you were making at the end about the domains of the renormalization: you were saying that with the sector renormalization you don't really have a canonical choice of the domains, and I think you were saying that with further refinements you do have some canonical way of choosing your domains, and that is what allows you to get stronger conclusions about the dynamics. Can you maybe say that again a little more slowly? That would be great. Yes, so it is about the Yoccoz renormalization. It was the construction where you start with a straight line and its image and take a region bounded by this straight line, its image, and a somewhat arbitrary segment. If the rotation number is very small, you are forced to cut very deep in order to define the gluing and have some control on the gluing and on the uniformization of the punctured disk. So somehow there is a missing part here. For the Siegel and Brjuno theorems this was fine: if you assume the rotation number is a Brjuno number, the cutting does not go too deep, it does not grow too fast, and there is still a region remaining which corresponds to the linearization domain for the germ of this type. The next question was: for a Cremer-type example you cannot hope for a linearization, but you still hope to say something, not just at the fixed point but on a larger domain, in particular when you start with the quadratic polynomial, where you have a critical point, and it is known that the critical orbit is very important for Cremer-type dynamics. So you want to include the critical point in this kind of analysis. For that, the Yoccoz renormalization cuts off too much; that is why we needed to introduce another type of renormalization. So that was the reason we introduced the near-parabolic renormalization. But you have to assume, unfortunately, that the rotation number is very close to parabolic.
That means that in the continued fraction expansion of the rotation number, all entries are larger than some big number N. So you have a fixed point and another fixed point, and they are very close to each other because they are close to the parabolic point. But in this case we are able to determine one class of maps which is closed under the renormalization, and we are able to deal with the critical point. So this is the difference between the domains of definition of the renormalizations. Anyway, thank you for asking these questions, because I wasn't really explaining this. Thank you. Okay, some more questions? Yes. I'd like to ask about this relation between chaotic dynamics and tame dynamics: in the case of the almost parabolic renormalization, what is the associated chaotic dynamics? Well, the limit case is very easy to define: instead of an irrational number you take, so to speak, a virtual rotation number like one over (infinity plus one over (infinity plus ...)). In that case the two fixed points collapse, and the renormalization is like a Fatou coordinate, passing to the quotient by Z, followed by the exponential. The Fatou coordinate is not a very drastic thing once you change coordinates so that the fixed point is at infinity; the main thing you do is take the exponential map, because here you have the bi-infinite cylinder C over Z, and to introduce the next generation of renormalizations you have to pass to a punctured disk. So the chaotic part of the dynamics, the gluing or connecting map, is the exponential map together with a very mild Fatou coordinate; the essential part is given by the exponential map. After bifurcation you have to modify this by something else, but typically it is very close to the exponential map in a suitable sense. So there is a hidden dynamics, which is not a dynamical system itself but a sequence of compositions of exponential-like maps; that is why you see the hedgehog-like structures or Cantor bouquets. That's all. I also have a question; maybe it's a simple one: is there a natural way to embed this real, unicritical renormalization into the framework of renormalization of quadratic polynomials in the complex plane? These days real renormalization is studied via complexification of the dynamics, so that one can study the renormalization results there; for example, Lyubich's work on the full renormalization conjecture actually studies the complex maps. But the real setting has an additional advantage: you have the real a priori bounds for interval maps, and that helps a lot with the conclusions. The framework itself, though, was set up in the complex setting; in the real setting you have additional information. Thank you. And the full complex renormalization, including the satellite type, is I think still open. Okay, Misha, a quick question, and then we are closing. I would like to just make a quick comment, not a question: Mitsuhiro formulated one application of the Inou-Shishikura theory and asked whether there are any other applications, and actually there is a whole theory developed. Due to the Inou-Shishikura a priori bounds there is an essentially complete theory of neutral maps with combinatorics, that is rotation number, of high type. For example, it is known under this assumption, by Shishikura and Yang, that the boundary of the Siegel disk is a Jordan curve.
There is essentially a full classification of the hedgehogs, and so on; so there are plenty of applications of this Inou-Shishikura theory, which were somehow omitted. But probably they will appear in some of the other lectures of this mini-course. My question is about the totally different types of renormalization: whether you can use this idea there. Okay. Thank you. Let's thank the speaker again. Thank you very much.
We discuss the idea of renormalization for complex dynamical systems. Various types of renormalizations, defined via a first return map, appear in complex dynamics: for unimodal maps, homeomorphisms of the circle, and germs of irrationally indifferent fixed points of holomorphic maps. The target of renormalization is usually tame and fragile dynamics, and the connecting maps are often expanding maps; the expanding property helps us to understand the rigid nature of the target maps. We propose the idea of dynamical charts for irrationally indifferent fixed points, in order to reconstruct the original map from the sequence of renormalizations.
10.5446/57329 (DOI)
So we consider the space of all quadratic-like maps up to affine conjugacy. This is a nice analytic space. The renormalization operator would be iteration and restriction; both are very analytic operations, so the renormalization operator itself is analytic, and it is contracting: if we have two maps that are hybrid conjugate, then the renormalizations will also be hybrid conjugate and the quasiconformal distance will be smaller. A caveat is that the dimension of this space is infinite. Our goal is again to find a fixed point, or more generally a renormalization horseshoe, and for that we want some compactness property, because since our space is infinite-dimensional we can't go far without compactness; in this subject this is often called a priori bounds. And what happens is that a priori bounds can often be established in the near-degenerate regime, and this is the main topic of my talk. I think I have some echo... okay, not anymore. I would first like to summarize what is known about the near-degenerate regime in the Thurston and quadratic-like renormalization theories, and then I will move to the neutral renormalization. A fundamental fact about Riemann surfaces is that they degenerate along thin annuli and wide rectangles. The more familiar case is the situation of compact Riemann surfaces, perhaps punctured at finitely many points: such a surface degenerates along thin annuli; this is the thin-thick decomposition. If, on the other hand, we have a Riemann surface with boundary, say a Jordan domain in the complex plane minus finitely many Jordan domains, then such an object degenerates along finitely many wide rectangles. It is more convenient to draw not the realistic picture but a schematic one: let's apply Photoshop and replace all wide rectangles with narrow rectangles. The convention from now on will be that narrow on pictures means wide in reality; this is because we are interested in homotopical properties of wide rectangles. In the Thurston theory, if under the iteration we develop long annuli, then there is a certain matrix, the Thurston transition matrix, that controls the degeneration. First of all, if degeneration develops, then it develops along an invariant multicurve, and applying the Grötzsch inequality we easily obtain an estimate from below, through this matrix, for the moduli of the annuli after the pullback in terms of the moduli of the surface before the pullback. In fact, in the degenerate situation this inequality can be effectively reversed and replaced by an equality, perhaps with an additive correction. This is because our map on annuli is a covering map, and we understand what happens with covering maps, while the map on the thick parts belongs to some compact set. This is the idea of how one establishes an invariant compact set under the Thurston iteration; of course, I skipped a lot of details. In the quadratic-like case, I am going to consider the primitive bounded case, and I will discuss the argument by Jeremy Kahn. Let's consider some quadratic polynomial and assume that we are in the degenerate situation, so the modulus between U and J is very small. Let's apply the renormalization and replace the Julia set by a cycle of small Julia sets. Our task, our goal, is to understand what the wide rectangles between the small Julia sets and the external boundary are, and then to try to reach a contradiction.
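(Aside: the "estimate from below through a certain matrix" is the standard Grötzsch-type inequality for pullbacks along an invariant multicurve. Here is a sketch in the usual Thurston-theory notation; the notation is mine, not the slides'.)

```latex
% Invariant multicurve Gamma = {gamma_1, ..., gamma_k}; let m_i^{(n)}
% denote the modulus of the maximal annulus homotopic to gamma_i at
% stage n of the pullback.  The Grötzsch inequality gives
\[
  m_i^{(n+1)} \;\ge\; \sum_{j} a_{ij}\, m_j^{(n)},
  \qquad
  a_{ij} \;=\; \sum_{\delta}\frac{1}{\deg\!\bigl(f\colon \delta \to \gamma_j\bigr)},
\]
% where the inner sum runs over the components delta of f^{-1}(gamma_j)
% that are homotopic to gamma_i.  A = (a_{ij}) is the Thurston
% transition matrix; in the near-degenerate regime the inequality is
% essentially an equality up to an additive constant.
```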
And as we can see on the picture, there are two types of wide rectangles: horizontal ones, which connect two small Julia sets, and vertical ones. Remember that in the Thurston case the degeneration develops along an invariant multicurve. Same story here: horizontal rectangles must develop along an invariant graph connecting the small Julia sets. But it is clear that the only invariant graph connecting the small Julia sets is the Hubbard tree; therefore horizontal rectangles must develop along the Hubbard tree, and we can say even more. If we consider the associated postcritically finite model, it has its Hubbard tree, and we can look at the corresponding matrix. Just like the Thurston matrix controls what happens with long annuli in the pullback iteration, this matrix controls what happens with the horizontal rectangles. In particular, if we apply a Grötzsch-type inequality, and if we assume that the core entropy of f restricted to the Hubbard tree is positive, then it tells us that the horizontal rectangles cannot dominate. So if we assume that there are no vertical rectangles, this already leads to a contradiction, more or less the same argument as in the Thurston case. The correct statement is that vertical rectangles must form a substantial part of all wide rectangles: horizontal rectangles cannot dominate. This is a key step in Jeremy's argument for MLC at bounded primitive parameters. Dima, you are proving something. What are you trying to prove? What is the statement? Well, I am trying to make an analogy between the quadratic-like renormalization and Thurston. You are making an argument that something is a contradiction of something; what is the statement you are trying to prove? Well, the statement is that the vertical rectangles form a substantial part of all wide rectangles: there will exist a constant C so that if we sum up, yes. What are these rectangles? Well, these are the rectangles in the thin-thick decomposition of the Riemann surface we get when we take U and remove from it the cycle of small Julia sets. This is a Riemann surface with boundary; it has a thin-thick decomposition, and the elements of the thin part of the decomposition are wide rectangles. The first statement is that the wide rectangles that are horizontal are aligned with the Hubbard tree, and the second statement is that if we sum up the widths of the vertical rectangles, then this forms a substantial part of all rectangles. So you said we have to Photoshop the picture you are drawing, and I am trying to do this in my brain; let me see whether I got it right. The blue rectangles, contrary to what you have drawn, mean that these little Julia sets are much closer to each other than their diameter. So this is the realistic picture; well, the blue ones are the horizontal ones. So the idea is that the little Julia sets are closer to each other than their diameter by some order of magnitude, and those are two boundary components; the other boundary components go from the little Julia sets to the outer boundary, the boundary of U, and here you are saying again that the little Julia sets are closer to the boundary of U than their diameter. Yeah, you can say it in this way. And I am trying to Photoshop my mental picture here.
And then you are saying that of all the area in U that we see minus the little Julia sets, this is essentially filled by these rectangles, and you want to say that a definite fraction of these rectangles is between the little Julia sets and the boundary, as opposed to the blue ones between the Julia sets. Correct. Yeah. So this is the statement that was asked for. I ought to be able to understand what it means; I do not know why things are being called rectangles, I have never seen this language before, and I just do not know what it is that is being talked about. Are the horizontal sides on these disks or on the boundary of the big disk? Assume that you have a disk with several holes, just a few holes. Your rectangle is a topological rectangle whose horizontal sides are on the boundary of this Riemann surface, embedded in a topologically nontrivial way. The statement is that degeneration of such Riemann surfaces happens when some of these rectangles become wide. And so the issue of renormalization theory, as Dima was trying to convey, is to prevent this kind of degeneration; it is in a way very much similar to how Thurston prevents degeneration along long tubes in the case of compact Riemann surfaces of finite type. Okay. Well, in fact, this is what I wanted to say about the quadratic-like renormalization: namely, I wanted to make the parallel that if you look at what is happening, the argument is indeed very close to what happens in the Thurston iteration, and that is one of the key steps, if not the key step, in Jeremy Kahn's proof of MLC in the primitive bounded case. An important tool is the covering lemma developed by Jeremy and Misha. In the follow-up work, Misha and Jeremy pushed the theory much farther, so now it handles quite well the primitive case, assuming that we are uniformly away from the molecule, roughly speaking, assuming that the core entropy is at least some epsilon. Roughly, what they did was translate the whole theory of primitive renormalization into the language of the near-degenerate regime and establish a priori bounds under this kind of condition. More details will be in Misha's talk tomorrow, just before the bouillabaisse; so please come to the bouillabaisse. Let me also make a comment about the Thurston theory. There is another approach to the realization problem, by Dylan; he was talking about it yesterday. In some sense the near-degenerate regime is even more explicit in his situation: he starts with a map between graphs, then he replaces the graphs with small neighborhoods, which gives a very degenerate Riemann surface, and he does the realization for these degenerate Riemann surfaces; after that he does the surgery and obtains a postcritically finite rational map. Kevin and Dylan will be giving a Zoom mini course in November in the renormalization and dynamics seminar, so everyone who is interested in the subject is welcome to join. Okay, good. So now I would like to go to the neutral renormalization theory. This is the theory associated with the boundary of the main hyperbolic component, the main cardioid. Here are some examples of maps: all of them have Siegel disks, but some of these Siegel disks are degenerate. This is the golden Siegel disk.
This Siegel disk is a quasidisk that rotates around the center. This Siegel disk has rotation number very close to zero: it is still a quasidisk, but the center of rotation is very close to the boundary. Here the rotation number is close to one half, and then the Siegel disk looks like this: it starts to develop parabolic fjords, but it is still a nice object, as I will discuss later. Here the rotation number is close to one quarter, and we can see that the Siegel disk has four parabolic fjords. So it rotates; it wants to degenerate, and it develops parabolic fjords. There are many reasons why we might be interested in the neutral renormalization. First, the neutral renormalization controls the quadratic-like renormalization near the cardioid; this is where the primitive renormalization theory fails and requires certain adjustments, and in particular there is no good theory of puzzles near the boundary of the cardioid. Also, near-neutral renormalization is very much related to the theory of circle maps, and such maps have been in the focus for a long time, perhaps even from the beginning of dynamical systems, when dynamical systems were motivated by celestial mechanics, planets moving around the sun. And since we like transcendental dynamics, at least at this conference, the good news is that transcendental dynamics also emerges on the unstable manifolds. This is one way to understand the combinatorics of neutral renormalization: the combinatorics can be described using this particular map. This is the Julia set of this cubic polynomial. It has a parabolic point here, which attracts one critical point, and the other critical point is here and maps to the fixed point under one iteration. This filled Julia set is very similar to the molecule, and in fact, if we endow the molecule with the Branner-Douady surgery, and I will refer to the resulting map as the molecule map, then this should be topologically conjugate to it. In fact, the statement that these two dynamical systems are topologically conjugate is equivalent to the satellite case of the MLC conjecture. Let me describe the idea of the Branner-Douady surgery in the case of rabbits. If we have a p/q-rabbit, the rabbit that rotates with this rotation number, then, assuming that p is smaller than q over 2, we can dynamically remove p ears from the rabbit; we do the surgery. After the Douady-Branner surgery we obtain another rabbit, which has rotation number p over (q minus p). If p is bigger than q over 2, then we symmetrize the construction. We also map the basilica component into the main component using the basilica straightening map. Everything assembles into a continuous map around the main molecule, and the satellite case of the MLC conjecture says that this must be topologically conjugate to the molecule map. Periodic points of the molecule map on the main hyperbolic component are exactly the Siegel maps of periodic type. Alternatively, we can describe the Siegel maps of periodic type using the continued fraction expansion. This is the periodic point of smallest period of the molecule map, here of period 2. This periodic cycle is formed by the golden Siegel parameters: the upper golden Siegel parameter and the lower golden Siegel parameter.
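Since rotation numbers of periodic and bounded type keep coming up, here is a small self-contained Python sketch (my own, not from the talk) of how they are recognized from the continued fraction expansion; the golden mean has expansion [0; 1, 1, 1, ...], so it is of periodic type and in particular of bounded type.

    from fractions import Fraction

    def continued_fraction(x, n_terms=12):
        """Return the first n_terms continued-fraction entries [a0; a1, a2, ...] of x."""
        entries = []
        for _ in range(n_terms):
            a = int(x)          # integer part
            entries.append(a)
            frac = x - a
            if frac == 0:
                break
            x = 1 / frac        # invert the fractional part and repeat
        return entries

    # The golden mean rotation number (sqrt(5)-1)/2 has continued fraction
    # [0; 1, 1, 1, ...]: eventually periodic entries, hence periodic type.
    golden = (5 ** 0.5 - 1) / 2
    print(continued_fraction(golden))          # -> [0, 1, 1, 1, ...] (up to floating-point error)

    # A rational angle has a finite expansion; it is not the rotation number of a Siegel disk.
    print(continued_fraction(Fraction(3, 7)))  # -> [0, 2, 3]

    def is_bounded_type(entries, bound):
        """Bounded type: all partial quotients a1, a2, ... stay below a fixed bound."""
        return all(a <= bound for a in entries[1:])

    print(is_bounded_type(continued_fraction(golden), bound=2))  # True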
Once we have a periodic point, we should expect self-similarity, because around periodic points we may hope to linearize everything. And this is indeed what happens with the Mandelbrot set: if we zoom in at the golden Siegel parameter, then we see self-similarity; this zoom is almost indistinguishable from the zoom at the next level. This is called universality. The combinatorial reason is that the golden Siegel parameter lies on a periodic cycle, but of course, in order to justify this self-similarity properly, we need to develop the renormalization theory. Let me also remark that if we consider some other parameter space, for example the space of quadratic rational maps where one of the critical points is periodic, and if we zoom in at the corresponding golden Siegel parameter, then again the picture becomes almost indistinguishable from the corresponding picture of the Mandelbrot set, and the scaling is with the same universal constant lambda star. In fact, both of these pictures converge to a limiting transcendental dynamics, and the limiting transcendental dynamics can be used to identify these pictures, at least partially; hopefully in the future there will be a complete identification. Okay, so this is a summary of the neutral renormalization theory. The first conjectures appeared in the 80s; that was the time when physicists discovered many interesting facts, which mathematicians call conjectures. Curt McMullen constructed renormalization Siegel periodic points associated with Siegel disks of periodic type; he also showed exponential contraction. Inou and Shishikura proved the hyperbolicity of the near-parabolic renormalization. This renormalization controls what happens in a neighborhood of a parabolic point, and this was important for the theory because hyperbolicity really allows one to study nearby maps: they proved hyperbolicity at the actual parabolic map, and since hyperbolicity is an open condition, it also holds in a neighborhood, which allows one to study maps that were out of reach before. For example, there are many applications to the area problem. There is now an almost complete topological understanding of the hedgehog of a Cremer parameter in the Inou-Shishikura class; Cremer polynomials turn out to be not that complicated after all. There are applications to the MLC conjecture in the satellite case, and also to ergodic properties of neutral polynomials. The original argument by Inou and Shishikura involved some computer assistance; a computer-free proof was presented later, which also works for unicritical maps of high type. Denis Gaidashev and Michael Yampolsky presented a computer-assisted proof for the actual golden-mean Siegel disk, namely for this main periodic cycle. Then, in joint work with Misha and Nikita, we proved hyperbolicity for bounded type parameters, using pacman renormalization. Again, hyperbolicity means that we can study nearby maps: we understand quite well what happens with Siegel disks of periodic type, and once we have hyperbolicity we can study nearby maps as well, and we obtain applications to scaling and area problems. We also constructed the first examples of MLC at satellite parameters of bounded type. Let me also mention briefly the Douady-Ghys surgery. This is the way we understand Siegel disks of bounded type; otherwise, we cannot control the critical orbit.
The idea is that we start with a Blaschke product; the Blaschke product restricts to a circle map, and it is a critical circle map. Since critical circle maps are quite well understood, in particular, if the rotation number is of bounded type, then the restriction to the circle is quasiconformally conjugate to the rotation, this allows us to apply a quasiconformal surgery and obtain, out of the Blaschke product, a Siegel polynomial of periodic type. We forget the dynamics inside the unit disk and do the surgery; this is the essential bit of the surgery. What is important is that we obtain a map where we control the postcritical orbit: the postcritical orbit is rotated on the boundary of the Siegel quasidisk. But there are limitations to this surgery, because the surgery cannot produce all neutral polynomials. Okay, so let me now state the theorem; it is joint work with Misha. The theorem states that there are uniform a priori bounds for all neutral quadratic polynomials on the boundary of the main hyperbolic component. There are different ways of stating it. Let me first state a rough version. The rough version claims that when Siegel disks of bounded type degenerate, they can degenerate only in a star-like way. As we can see on the picture, degenerations of this type occur, but we do not see degenerations like this; we do not see them on pictures. This is the statement; it can be made precise, but roughly speaking, when a Siegel disk degenerates, this kind of degeneration is allowed, and it can also happen on all scales: this is, say, scale one, and if we zoom in, we see more parabolic fjords, and still a star-like pattern. And this is not allowed. A slightly more precise way of stating it is that for any rotation number of bounded type, the corresponding Siegel disk, so this is our Siegel disk that rotates, can be put inside a quasidisk with a uniform dilatation, which we call a pseudo-Siegel disk, such that the degree of the map restricted to this enlarged quasidisk is one. As a corollary, because quasiconformal objects are compact, such a pseudo-Siegel disk exists for all neutral maps, even for Cremer polynomials: there is a pseudo-Siegel disk that is almost invariant and has an absolute bound on its dilatation. These pseudo-Siegel disks are obtained by adding the parabolic fjords on all scales; this is how we construct them. Dima, it is not really visible. Dima, could you please repeat your drawing, because it was not very visible; it is all dark. Is it better now? Are you talking about this? Yes. Maybe you can make it closer to us. Right. So is it better? Yes. Right. So what I am saying is that the pseudo-Siegel disk can be obtained by adding such parabolic fjords, similarly to other periodic Siegel parameters: whenever a parabolic fjord is developed, we can add it to the object. Saying it more schematically: if we have a star, then we can add the fjords between the arms of the star and obtain a uniform quasidisk. Right.
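As an illustration of the Blaschke-product starting point of the Douady-Ghys surgery, here is a minimal numerical sketch in Python. The family exp(2 pi i t) z^2 (z - 3)/(1 - 3z) is the standard degree-3 model whose restriction to the unit circle is a critical circle map; the choice of sample parameters and the naive averaging of angular displacements are my own, only meant to show how one would estimate the rotation number.

    import cmath

    def blaschke(t, z):
        """Degree-3 Blaschke product f_t(z) = e^{2 pi i t} z^2 (z - 3)/(1 - 3 z).
        It preserves the unit circle and has a critical point at z = 1."""
        return cmath.exp(2j * cmath.pi * t) * z * z * (z - 3) / (1 - 3 * z)

    def rotation_number(t, n_iter=20000):
        """Naive estimate of the rotation number of f_t restricted to the circle:
        average the angular displacement (taken in [0,1)) along a long orbit."""
        theta = 0.17                      # arbitrary starting angle, in turns
        total = 0.0
        for _ in range(n_iter):
            z = cmath.exp(2j * cmath.pi * theta)
            w = blaschke(t, z)
            new_theta = (cmath.phase(w) / (2 * cmath.pi)) % 1.0
            total += (new_theta - theta) % 1.0   # displacement, chosen in [0,1)
            theta = new_theta
        return total / n_iter

    # Scanning t, one can hunt numerically for the parameter whose circle map has
    # golden-mean rotation number, the starting point of the surgery.
    golden = (5 ** 0.5 - 1) / 2
    for t in (0.3, 0.4, 0.5, 0.6, 0.61):
        print(t, rotation_number(t))
    print("target:", golden)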
Another way of stating the a priori bounds is that there exists a fiberwise compact renormalization operator, the cylinder renormalization operator, around the closure of the McMullen periodic points. This is a renormalization horseshoe: we take all McMullen renormalization periodic points, take the closure, and the statement is that this is a compact set of maps and there is a renormalization operator around this compact set. Any questions about the statement? Any questions at this stage, before I try to give some ideas of the proofs? Okay, good. As I said, the proof is in the near-degenerate regime, and here is one basic idea. Suppose we have a Siegel disk that rotates, suppose we have two intervals, I and J, and suppose that there is a wide rectangle connecting them. Again, this is a Photoshopped picture; in reality this wide rectangle looks like this. Now, if combinatorially we can pull back these two intervals, I and J, to two intervals located like this, in between them, and if we assume that the critical value is not in the way, then we can pull back this wide rectangle and obtain a wide rectangle that crosses the original one. And this is impossible, because wide rectangles do not cross. This type of argument covers a lot of cases; a good half of all the arguments are based on this principle. In particular, this type of argument allows one to understand the geometry of the parabolic fjords. Let me now try to give a cartoon proof of the logic, a rough presentation of the main argument. Assume that we do not have uniform a priori bounds. This means that we have a Siegel disk that looks like this in reality: one peninsula becomes very close to another peninsula. Then there is a wide rectangle between these two peninsulas. As I said, we prefer schematic pictures, so if we apply Photoshop, we put our stuff back, but then we have a wide rectangle between these two peninsulas; geometrically, a wide rectangle looks like this. Now this degeneration can be spread around: the Siegel disk rotates, and the covering lemma allows us to spread this degeneration around to all the peninsulas. But if we try to spread this rectangle to a rectangle connecting these two peninsulas, that is impossible, because wide rectangles cannot cross. Therefore, when we apply the covering lemma and move the original rectangle to the rectangle connecting these two peninsulas, the new rectangle must go through the Siegel disk; it must be submerged. This submergence can be understood, because inside the unit disk we have the complex structure of the standard disk that rotates; in particular, the degeneration can be localized: most of the degeneration goes like this, and only a bit of it goes below. We call these techniques snakes, and we have several versions of the snake lemma. Right. Now, after localizing the degeneration, we can also observe that roughly this degeneration fits into two rectangles, and one of these rectangles should be twice as wide as the original rectangle. So we can say that either this rectangle or this rectangle is twice as wide as the original one.
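For reference, here is a compact restatement of the uniform a priori bounds as I understood them from the talk; the normalization of the quadratic polynomial and the name K for the constant are assumptions on my part.

    % Uniform a priori bounds (restated; constants and normalization assumed).
    % There is an absolute K >= 1 such that for every rotation number \theta of
    % bounded type, the Siegel disk Z_\theta of the neutral quadratic polynomial
    % P_\theta(z) = e^{2\pi i\theta} z + z^2 admits a pseudo-Siegel disk
    % \widehat{Z}_\theta with
    \[
      Z_\theta \subset \widehat{Z}_\theta, \qquad
      \widehat{Z}_\theta \ \text{a } K\text{-quasidisk}, \qquad
      \deg\!\bigl(P_\theta|_{\widehat{Z}_\theta}\bigr) = 1 .
    \]
    % By compactness of K-quasidisks, almost-invariant pseudo-Siegel disks with an
    % absolute dilatation bound then exist for all neutral parameters on the
    % boundary of the main hyperbolic component, including Cremer parameters.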
So from a degeneration we can obtain a bigger degeneration, and then this bigger degeneration can be put inside a fjord, inside a peninsula, at a deeper scale. And here comes the main logic: if we originally had a big degeneration, then we can find bigger and bigger degenerations at deeper and deeper scales, and we can construct a sequence of rectangles such that each rectangle I_n is 2 to the power n times wider than the original rectangle. But such a sequence cannot exist, because we do have non-uniform a priori bounds: our Siegel disk was of periodic type, so it might be very degenerate, but it is still of periodic type and has some a priori bounds. Therefore this is impossible, and it implies that we could not find such degenerations in the first place. That is the rough cartoon idea of the argument that allows us to promote non-uniform bounds to uniform bounds. Let me make a few final comments. First, as I said, we need to work with the pseudo-Siegel disk, so we add parabolic fjords to the Siegel disk, and we make this construction on all scales. Part of the technicality is to show that the different levels do not really interact much; the way we do it is by establishing certain coarse a priori bounds. Another remark, about applications: since the pseudo-Siegel disk exists for every theta, this implies that for every theta that is not rational, for every irrational theta, there is a hedgehog that contains the critical orbit and on which the degree is one. We cannot claim anything at this stage about the topology of that hedgehog, but at least it exists and the degree is one. In particular, the postcritical set is not everything, which was an open question outside of the Inou-Shishikura class. In the Inou-Shishikura class, as I said, there is now a full topological understanding of such objects, and hopefully in the future this topological understanding can be improved to cover all neutral polynomials. Let me stop here. Okay. Thank you. Are there any questions? You are online? Okay. Maybe I have a question. So Jeremy's philosophy was something like: if your life is bad today, then your life was even worse yesterday. And your philosophy is that if your life is bad today, then it will be even worse tomorrow. This is a joke, but do you have a dynamical or more serious explanation of why it goes the other way? I mean, the reason is this snake lemma. In Jeremy's situation, let me go back to Jeremy's situation, the Julia sets are disjoint, and he does not need to understand the geometry near the boundary of each small Julia set; he can think of each small Julia set as roughly a disk for this type of argument. Here the Siegel disk is one object, a connected object, so we need to understand the ideal boundary of the Siegel disk with respect to the geometry outside the Siegel disk. This leads to understanding the submergence of the wide rectangles inside the Siegel disk, and this forces the philosophy of going into the deep scales instead of going into the shallow scales. It is really because we need to understand the submergence of wide rectangles inside the Siegel disk.
Let me also mention that I gave a four-lecture mini course last December with more details on this type of argument; it is available in the Simons Center video collection. More questions? Can I ask a question? Sure. I am wondering if you are also able to control the geometry of the fjords within this pseudo-Siegel disk that you talked about; for example, whether the fjords start to spiral on the inside, spiral around the fixed point, or do other things. Right. We do not. Our arguments allow us to understand the external boundary of this pseudo-Siegel disk, but we cannot really say much at this stage about what happens around the fixed point; we cannot say how the fjords would spiral. Maybe a different question: if I understand correctly, the pseudo-Siegel disk contains, say, the first few iterates of the critical point; is this right? Could I restate your theorem as: for any finite number m, there exists a K such that for any theta of bounded type, there is this pseudo-Siegel disk which contains the first m iterates of the critical point? Yes, this is correct. We may need to enlarge the capital K, but by enlarging this absolute K we can make the pseudo-Siegel disk control much more of the postcritical orbit. Right. I ask because, without stating it like that, it seems to me that there could be a trivial example of a pseudo-Siegel disk: you just take a closed curve which passes through the critical point, and the map restricts to a degree one map. No; but your pseudo-Siegel disk is not invariant. Yeah, but the pseudo-Siegel disk does contain the original Siegel disk, and it was an open question why the boundary of the Siegel disk is not everything. I have a question about higher degree polynomials. For instance, in the family of all degree three polynomials, would you also have a uniform K such that your theorem holds? Right. What we really do is start with the Siegel disks that we understand, with non-uniform bounds, and then provide a priori bounds for them. Now for cubics: if we consider a Siegel disk of periodic type, perhaps non-uniform, that rotates, and if this pseudo-Siegel disk contains both critical points on the boundary, then I would expect that the technique should be applicable, perhaps with minor modifications; in particular, such objects should be compact, or at least I would anticipate it. But in the cubic case it may happen that, well, we know that one of the critical points must be on the boundary; this is thanks to the Douady-Ghys surgery, and it was shown how it can be applied to multicritical polynomials. So if we have a Siegel disk of periodic type, then it must contain one of the critical points, but the other critical point can be on the decorations, and this may be a challenge, because if you want to understand all neutral cubic polynomials, then you also want to understand the parameters obtained by taking limits of these objects. Okay, thank you. Okay, let us thank the speaker again. Thank you, Jeff. Okay, thank you.
A fundamental fact about Riemann surfaces is that they degenerate in a specific pattern — along thin annuli or wide rectangles. As it was demonstrated by W. Thurston in his realization theorem, we can often understand the dynamical system by establishing a priori bounds (a non-escaping property) in the near-degenerate regime. Similar ideas were proven to be successful in the Renormalization Theory of the Mandelbrot set. We will start the talk by discussing a dictionary between Thurston and Renormalization theories. Then we will proceed to neutral renormalization associated with the main cardioid of the Mandelbrot set. We will show how the Transcendental Dynamics naturally appear on the renormalization unstable manifolds. In conclusion, we will describe uniform a priori bounds for neutral renormalization — joint work with Misha Lyubich.
10.5446/57331 (DOI)
Thank you very much. It is a pleasure to be back here at CIRM in Luminy, live, for the first time in a long time. Lots of interesting talks. Looking at my slides I realized that somehow I will be sitting a little bit between two chairs, because I will not do all of the introductory stuff: I am going to assume the existence of Fatou coordinates, and that you know what Böttcher coordinates are, and so on. But I hope it will go through anyway. So how does it work? Okay, there it goes. It was mentioned that there have been no talks about the Mandelbrot set, so this will be a talk about the Mandelbrot set; but it is also about quadratic rational maps. Quadratic rational maps have three fixed points, with multipliers which we generically denote mu, lambda and gamma; they have two distinct critical points and two distinct critical values. We call M2 the space of quadratic rational maps modulo conjugation by Möbius transformations. This space M2 naturally has coordinates given by, say, the first two elementary symmetric functions of the multipliers, and these coordinates give an isomorphism to C2. Milnor set up the whole machinery to do this. It does not work again; what should I do here, just click? For instance, he defined these lines that he calls Per_1(mu), which are the sets of equivalence classes having a fixed point with multiplier mu. It turns out that in these natural coordinates these are straight lines, so we call them affine lines. If you have mu in the unit disk, or mu equal to one, which we include, there is a naturally defined connectedness locus, the set of parameters for which the Julia set is connected; we call it M_mu. So I am doing it with the pointer anyway. On Per_1(mu) we have a natural coordinate identifying Per_1(mu) with the complex numbers: the map which sends the equivalence class of f to the product of the two other fixed point multipliers. We fix one multiplier to be mu, and then we have two other fixed point multipliers, which can be almost whatever they want, and their product turns out to be a biholomorphic map from Per_1(mu) to the complex numbers. So it is a natural complex coordinate, and we can use it to identify Per_1(mu) with the complex numbers and its connectedness locus with a subset of the complex numbers. Where does the Mandelbrot set come into it? If you look at mu equal to zero, then the line Per_1(0) is actually parameterized by the quadratic polynomials q_c(z) = z^2 + c, and in this natural coordinate, sigma_0 of q_c is just four times c. Better than that, if you look at the connectedness loci M_mu for mu in the unit disk, then they are all quasiconformally homeomorphic: in fact, there is a holomorphic motion, parameterized by the unit disk and based at M_0, sending M_0 to M_mu and respecting dynamics. So the only question remaining, when you want to compare these connectedness loci, is what you can say about M_1 and its relation to the others. Milnor conjectured that, in fact, M_1 is homeomorphic to M. Pascale and I have been working on this for many years and made several publications along the way. It was actually announced a long time ago that we have a proof, but only this summer did we produce the manuscript; it is now available on arXiv. It is maybe a bit like Jurassic Park: now it appears. But it is finally there.
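A quick computation (my own, but it only verifies the claim above) of why sigma_0(q_c) = 4c in Milnor's coordinate on Per_1(0):

    % The finite fixed points of q_c(z) = z^2 + c solve z^2 - z + c = 0, so if
    % z_1, z_2 are these fixed points then
    \[
      z_1 + z_2 = 1, \qquad z_1 z_2 = c .
    \]
    % Their multipliers are q_c'(z_i) = 2 z_i, while the third fixed point (at
    % infinity) is superattracting with multiplier \mu = 0. Hence the product of
    % the two remaining fixed-point multipliers is
    \[
      \lambda \gamma = (2 z_1)(2 z_2) = 4 z_1 z_2 = 4 c ,
    \]
    % which is exactly the natural coordinate \sigma_0 on \mathrm{Per}_1(0)
    % described above.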
And that is what I want to talk about. If you sit down with one of these programs and try to do pictures, you look at this holomorphic motion, say along the segment from 0 to 1, then you get these Mandelbrot-like sets ending at M_1, and looking at the pictures it really looks like a deformation from the Mandelbrot set to M_1. I did not bring one of these movies, essentially because when I looked it up, my computer could no longer play it. But I hope you can see visually from these two pictures that there should be a correspondence. In the study of the Mandelbrot set, which was initiated by Douady and Hubbard, the principal tools were external rays. Then Yoccoz came up with his puzzles, so those became an essential tool also. What we are going to do is generalize these Yoccoz puzzles to maps in Per_1(1); we call them parabolic quadratic rational maps. So what can we say about rays? Rays naturally form a possibly singular foliation of the basin of attraction of infinity. You also have an orthogonal foliation given by the equipotentials of the Green's function. These dynamical foliations are invariant under the dynamics, and somehow this invariance is reflected in the Mandelbrot set, in that the co-landing pattern of external rays near the parameter c in dynamical space resembles what you see in parameter space. This is the general philosophy. For our quadratic rational maps we have this parabolic basin, and on the parabolic basin we have a Fatou coordinate conjugating the dynamics to translation by 1. This Fatou coordinate maps the entire basin onto C, and in C we have two natural foliations, by horizontal lines and by vertical lines, and we can pull them back by the Fatou coordinate to define a horizontal singular foliation and a vertical singular foliation. It turns out that most of the horizontal leaves, all those that are regular leaves, are sort of boring; the only interesting leaves are the critical leaves. When I talk about a singular leaf, I mean something which maps onto the horizontal line through the critical values of the Fatou coordinate. All the critical values of the Fatou coordinate, if you have a simply connected basin, lie on a line, so there is a unique such line. If both critical points are attracted to infinity, you will have two lines. So it turns out that only the horizontal singular leaves are interesting, and we are going to call these singular horizontal leaves parabolic rays. Let me just see if I can pick one. Here is an example of a parabolic map in Per_1(1). You see in green the tree formed by all the singular leaves, and a ray is a path descending that tree, going in there. What does it represent? You see something that looks like a rabbit, except that it is a parabolic rabbit for a Per_1(1) map: the parabolic fixed point, of multiplier 1, a degenerate fixed point, is here, and the other fixed point, the alpha fixed point, is here. You can sort of see the Fatou coordinate via the graph; here I have to draw something. Is it okay if I use the board? When you have a parabolic basin, let me just draw it as if it were the cauliflower, you have your Fatou coordinate, and you have a critical point with its critical orbit. Let us normalize so that the Fatou coordinate sends the critical point to zero. Then the critical values of phi are precisely the negative integers.
And the green tree which has been drawn here is exactly the preimage of the negative real line. What it looks like is: first there is a critical value, so you have two branches branching off, then two more branches branching off, two branches at each end, and you can continue. That is what you see here. What we see to the left, now can we turn off the light please, is a Blaschke model of the basin of attraction of the parabolic fixed point, in the case that this basin is simply connected. So let me go back. Here is my Blaschke model: it is the Blaschke product (z^2 + 1/3)/(1 + z^2/3), which is the Blaschke model for the dynamics on a simply connected parabolic basin. On this graph I have drawn the tree which corresponds to the rays, except that for visualization purposes we made it inside the unit disk. Everything is reflection symmetric in the unit circle because it is a Blaschke product, so we really should have drawn everything outside, and then you would see the complete correspondence with the picture that you see. I said that these singular leaves give us a notion of rays; what do I mean by that? It is enough to discuss this on the Blaschke model, and then we can transfer it. You start from infinity and go along one of these branches; if you go left you put a zero, if you go right you put a one, and that gives you a coding of the segments of the graph with zeros and ones. You read off this coding along the way, and eventually you make your way to a unique point on the unit circle, which you can then code by the binary sequence of the path leading to it. It should be said that this Blaschke product is conjugate, by a unique orientation-preserving homeomorphism of the circle, to the quadratic polynomial z squared, so we can think of the action of the Blaschke product on the circle as just the familiar z squared, and we are going to use that later via this homeomorphism. Here we tried to mark it with just the points, but here is one of these rays, and another ray. You see that rays here are in general not disjoint; they are part of a tree, and that gives a little complication later when you want to make puzzles, but it is not insurmountable. Here are some more examples. We see, you could say, the one-seventh ray coming in, and the two-sevenths ray, and the four-sevenths; and when I say one seventh, I mean the binary expansion of one seventh, which is 001 repeated. We see here the ray with all zeros and the ray with all ones to the right. Okay, let me just leave that. It turns out that we have a natural parametrization of the complement of the connectedness locus M_1, which resembles the way we parameterize the complement of the Mandelbrot set, by the position of the second critical value in the parabolic basin relative to the first, and that also gives a notion of parabolic rays in the complement of the connectedness locus. Here is an example of rays coming down; you cannot quite see that it is two rays, but it is the one-seventh ray and the two-sevenths ray coming down, delimiting what we call a wake of rotation number one third, similar to what appeared in Alex's talk on Monday.
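The angle arithmetic behind this coding can be made concrete with a few lines of Python (a sketch of my own): under the doubling map, the model for z to z^2 on the circle, the angles 1/7, 2/7, 4/7 form a period-3 cycle with combinatorial rotation number 1/3, and their binary expansions are the repeating blocks 001, 010, 100 read off by the left/right coding.

    from fractions import Fraction

    def doubling_orbit(theta, n):
        """Orbit of an angle (in turns) under the doubling map t -> 2t (mod 1),
        which models z -> z^2 on the unit circle."""
        orbit = [theta]
        for _ in range(n):
            theta = (2 * theta) % 1
            orbit.append(theta)
        return orbit

    def binary_expansion(theta, digits):
        """First binary digits of an angle in [0,1): digit k records whether the
        k-th image under doubling lies in [1/2, 1)."""
        bits = []
        for _ in range(digits):
            theta = 2 * theta
            bits.append(int(theta >= 1))
            theta = theta % 1
        return "".join(str(b) for b in bits)

    # The cycle {1/7, 2/7, 4/7} of the doubling map, with its binary codings.
    for p in (1, 2, 4):
        theta = Fraction(p, 7)
        print(theta, doubling_orbit(theta, 3), binary_expansion(theta, 9))

    # Output:
    #   1/7 [1/7, 2/7, 4/7, 1/7] 001001001
    #   2/7 [2/7, 4/7, 1/7, 2/7] 010010010
    #   4/7 [4/7, 1/7, 2/7, 4/7] 100100100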
Let me return to quadratic polynomials, because we want to compare quadratic polynomials to quadratic rational maps; a quick recap for those who are familiar and not familiar with this. If you have a quadratic polynomial, then it has two fixed points, well, except for c equal to one quarter; we call them alpha_c and beta_c. If c is not a real number greater than or equal to one quarter, then the fixed point beta is the landing point of the unique fixed ray. If c is in the Mandelbrot set but not in the main cardioid, meaning that the fixed point alpha is repelling, then by a theorem of Douady it is the landing point of at least one periodic ray, and in fact it is necessarily the landing point of a cycle of periodic rays whose arguments on the unit circle have a rotation number: they coincide combinatorially with a rigid rotation, and for each rational p/q there is a unique such orbit. So alpha_c is the landing point of a p/q cycle of rays whenever c is not in the main cardioid, that is, whenever alpha_c is repelling. When you have this p/q cycle of rays co-landing at your fixed point alpha_c, you say it has combinatorial rotation number p/q, and we can define the p/q wake as the set of parameters c such that alpha_c has combinatorial rotation number p/q, and the p/q limb as the intersection of the Mandelbrot set with the p/q wake. I showed you this picture before. We have a similar situation for quadratic rational maps: for quadratic rational maps the one-third wake is the disk bounded by this red curve here, coming down with the one-seventh ray and the two-sevenths ray, and when they bifurcate they start to surround a little disk, which is the corresponding one-third wake for the parabolic maps. When we study quadratic polynomials, a good tool, an essential tool introduced to us by Yoccoz, is the so-called Yoccoz puzzle, which is a way of defining a Markov partition for the Julia set; the idea of Markov partitions is of course to give a combinatorial understanding of the dynamics. For those who have never seen them before: a Yoccoz puzzle is defined by a certain graph, and then you pull back this graph iteratively under the dynamics to form more and more refined graphs. The puzzle pieces are the bounded connected components of the complement of the graph, and, as in a usual Markov partition, these puzzle pieces form nests that shrink down to points or to sets, depending on the dynamics. In many cases this leads to fundamental systems of connected neighborhoods of points and to local connectivity of Julia sets, so they are interesting objects. To be more concrete, and this is where I want to skip forward: we fix an equipotential level, say L equal to one or something like that, and we take a parameter c in the p/q limb. Being in the p/q limb, alpha is the co-landing point of the p/q cycle of rays, say 1/7, 2/7, 4/7. We take the equipotential of the chosen level, we take the rays in the p/q cycle from alpha, including alpha, all the way up to level one, our chosen level, and we also take the fixed rays, and we call that the base puzzle. And then what we do is just pull that back.
Depending on what c we choose here, these puzzles look very different. But if we apply the Fatou coordinate, no, not that one, the Böttcher coordinate, which is only defined on the basin, then the level zero puzzles are all the same: they always look like this for rotation number one third, and in fact any level puzzle here will be the same in the Böttcher coordinate. So we could just as well have started with this graph, pulled it back under z squared, and then pushed it over using the Böttcher coordinate, and then we get whatever landing patterns we get. And of course we will only use the one-third pattern if we are in the one-third limb. What this tells us is that the interesting part is the co-landing pattern of rays when we are in the one-third limb. The one-third cycle of rays always co-lands at the alpha point, and its preimages land at the preimage of alpha; but apart from pulling back a few times, you start to have things that look very different depending on which value of c you take. When you want to do puzzles for quadratic rational maps with a parabolic fixed point, we do essentially the same, except that we start from the universal puzzle and then push it over, using the uniformizing coordinate, to the complement of the Julia set, and then we get whatever landing pattern of rays we get. Here is an example. I do not really want to go into it, because there is a whole little discussion of how to do these puzzle pieces in the right way. For the Yoccoz puzzle you can just pull back; in the parabolic basin, because the dynamics is attracting in this direction, when you pull back, the preimage of a set like this will be bigger, at least in the middle, and out here it will be smaller. So you have to do some adjustments by hand, but you can do them in this universal picture, and then, because the Fatou coordinates move holomorphically with the parameter, you get modifications that move holomorphically in parameter space as well. So here is an example of a Julia set of a quadratic polynomial in the one-third limb. I have drawn the rays of the puzzles up to level three, so the initial rays are the one-third cycle and its preimage. Pulling back once, I get a set here and rays here; these two are both preimages of that one, and then this guy has two preimages, one here and one here, whereas this one has two preimages, one here and one there. So, as I was saying, the interesting part is really how they co-land, and this is encoded as follows. We take E_0 to be the p/q cycle. We take Z_0 to be the p/q cycle union its negative, the preimage, and then we define Z_n to be the pullback of Z_0 under the n-th iterate; for technical reasons we want to start at n equal to minus q, and then we also take the union of all of these. There are variants of this, for instance where one always takes the pair of rays with the smallest opening argument surrounding the critical value, but Yoccoz's idea is that you do not need that: you should just stick with only those rays that come from the p/q cycle, and that already carries the essential information.
When rays co-land, we can say that this defines an equivalence relation on our set. For instance, on Z_0, which consists only of the points corresponding to the p/q cycle and its preimage, you have two equivalence classes, namely E_0, the landing arguments of the alpha fixed point, and its negative. To mark an equivalence class, we simply mark it on the unit circle and take the hyperbolic convex hull of that equivalence class; that gives us hyperbolic polygons in the hyperbolic disk, where we include the boundary points, and because these landing patterns do not intersect, when you look at the polygons they are all disjoint. So, to give a combinatorial description of an element c in our limb, we take a tower of equivalence relations, one equivalence relation on Z_n for each n, and these equivalence relations satisfy certain properties: the level zero one always has only the two equivalence classes E_0 and minus E_0; if I have an equivalence class E at some level n, then its image is an equivalence class at the previous level; an equivalence relation at a given level n restricts to the ones at all lower levels, so if you have a level n equivalence relation and restrict it to a lower level m, you just throw away all the higher level equivalence classes; and the equivalence classes have disjoint hulls. If you use this as an axiomatic system for creating towers of equivalence relations, you can in this way create a tree of towers, where each node is an equivalence relation on some Z_n. Now, we have looked at the equivalence classes and their convex hulls; what is really interesting is not so much the equivalence classes as the complement. In the drawing we had before, we see that there are lots of white connected components; in our terminology we call such a component a gap, because there is no black there, and either zero, which is in the middle here, is in a gap, or it is in an equivalence class. Equivalence classes have a dynamics: you take an equivalence class at level n, you apply q_0 to it, and you get an equivalence class at the previous level. For gaps it is not quite that easy, but if you look at the boundary of a gap, arcs on the circle, then it is always mapped onto the boundary of a gap of level n minus 1, so in this way you can define a dynamics of gaps, simply by looking at what q_0 does to the boundary. Then you can ask, looking at this dynamics on gaps: if you look at the critical gap of some level, when does it come back to become critical? We call that the critical period of the gap. Of course, there is something to be said here: if you do not have a critical gap, then there is a unique way to continue your equivalence relation, whereas if you have a critical gap, then there are several ways to have an equivalence relation one level higher. So in this sense, once you have a critical class, you are stuck, and these correspond to unique parameters, in fact, which are Misiurewicz parameters where the critical value eventually maps to alpha.
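For readability, here is a compact restatement (my paraphrase) of the axioms just listed for a tower of equivalence relations on the angle sets Z_n:

    % Axioms for a tower (\sim_n)_{n \ge 0}, where \sim_n is an equivalence
    % relation on the finite angle set Z_n (paraphrase of the list above):
    \begin{itemize}
      \item[(i)]   $\sim_0$ has exactly the two classes $E_0$ and $-E_0$;
      \item[(ii)]  if $E$ is a class of $\sim_n$, its image under angle doubling
                   is a class of $\sim_{n-1}$;
      \item[(iii)] $\sim_n$ restricted to $Z_m$, for $m \le n$, equals $\sim_m$;
      \item[(iv)]  distinct classes have disjoint hyperbolic convex hulls in the disk.
    \end{itemize}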
However, the interesting ones are those where you have a critical gap, which then has a critical period: if you have a critical gap at all levels, then you have a critical period at all levels, and this critical period function is a non-decreasing function. It can stabilize; it can be that at some capital N it becomes constant, and if it becomes constant, then we say that the tower is renormalizable. So basically we have two types of towers: renormalizable towers and non-renormalizable towers. The basic statement is that the tower essentially codes who you are, except if the tower is renormalizable. If you take any parameter c in the p/q limb, and form the tower coming from the co-landing patterns of rays of puzzles of all levels, then it is a true statement that this tower is k-renormalizable, with stabilization at level N, if and only if your quadratic polynomial has a top level renormalization of period k. If k is larger than q, you can use the puzzles to define your renormalization; if k equals q, you are in a special case where you have to do some modification. Using our parabolic puzzles, and using the conjugacy of the Blaschke product to z squared, we can take equivalence classes coming from parabolic puzzles and model them on the same space, namely on the Z_n's. Then we have a similar statement: given any parabolic map g in the p/q limb, if its tower is renormalizable, then the map is renormalizable. Also, if an infinite tower is realized for quadratic polynomials, then it is also realized for quadratic rational maps, and vice versa. This gives us a way of identifying parameters on the two sides: essentially you want to say that if you have the same tower, you should be identified. Let me come to renormalizations later. So here is the statement. I previously said that the parameter for a parabolic rational map is the multiplier of the non-parabolic fixed point, the one whose multiplier is not one. We define a map Psi_1 from M_1 to M as follows. If A is in the closed unit disk, so the alpha fixed point of the parabolic map is non-repelling, we have a unique c such that alpha_c has the same multiplier, and that is the correspondence: the parabolic map and q_c have a fixed point of the same non-repelling multiplier, and except when the multiplier has modulus one, they are dynamically conjugate. If A is not in the closed disk, then, similarly to quadratic polynomials, your map is in some p/q limb of M_1, and so it has a well-defined tower of equivalence relations. Then there are two cases: either this tower is renormalizable or it is not. If it is renormalizable of period k, so g is renormalizable, then we prove that there is a whole copy of the Mandelbrot set with the same combinatorics; it is already known for M that there is precisely one copy of the Mandelbrot set with the same combinatorics; these two Mandelbrot copies are homeomorphic, so we just send corresponding parameters to each other. And finally, if you are not renormalizable, then you just take the unique c which has the same tower, and you send your map to that c. A priori we could not know whether this map is injective or not; it could be that the Yoccoz parameter theorem does not hold for parabolic maps, so we prove that it does hold, and that gives injectivity, and it also gives continuity at those parameters. So we want to show that this map is a homeomorphism.
So we are going to prove a parabolic Yoccoz theorem for parameters that are not renormalizable and which do not have a non-repelling alpha fixed point. From this you get injectivity, and you also get continuity at all such points, by the parapuzzle construction. You get continuity on the boundary of the unit disk because you have the Yoccoz inequality for the limbs of the Mandelbrot set and the Yoccoz inequality for the limbs of M_1, so that gives you continuity there. Then, on the little copies: on each little copy itself the map is continuous because the identification with the Mandelbrot set is continuous, and the question was whether the values that come from the outside glue on in a way consistent with continuity. It turns out that there is this theorem of shrinking carrots, which ensures that the map is in fact continuous on the little copies. So the main exercise in this work has been to actually prove the Yoccoz theorem for M_1, and we have mimicked the proof that is already available for M. There are a few slight differences, and I will try to point them out. So the puzzle: it could have been a puzzle for a quadratic polynomial; the boundary would be rougher, you would have little points where you have angles, but let us just draw it nicely; the rays also have angles, but that is not really essential. We define V_0 to be the interior of the union of the closures of the puzzle pieces of level 0. Then, when you go to the higher level, you pull back the whole thing, or, depending on how you look at it, the whole puzzle: if you want to go to V_1, you pull back V_0 minus the beta puzzle piece, and then you glue in a new, by-hand-defined puzzle piece at beta and at beta prime. By construction, at each level V_{m+1} is the interior of the union of the closures of the puzzle pieces of that level. We have rotation number p over q. Take the critical puzzle piece of level 0, where I write C for the critical point and V for the critical value, intersect it with V_q, and look at g to the q of that: it is essentially a 2-to-1 covering of the disk coming down here. So I want to say that, using thickening just as for quadratic polynomials, we always have a quadratic-like map, for any parameter in the limb. It is just that this quadratic-like map might not be a renormalization, because we have built into the definition of renormalization that the little filled Julia set should be connected. If this little filled Julia set is connected, then we say that our parameter belongs to the satellite copy M_{p/q}; if it is not connected, then it is some Cantor set. Sorry, Carsten, if you are pointing at the slide, you might use the mouse, because we cannot see it. Actually, at that time I was not really pointing at the slide. So I am saying, if you see my hand here, that I made a thickening like this, which I have denoted U, and which is mapped properly 2-to-1 onto a larger thickening like this. This is a polynomial-like map of degree 2, always, and it has as its beta fixed point the old alpha fixed point, et cetera, and its Julia set lives in the closure of this little puzzle piece. So now we have a basic dichotomy when we have a parameter c in the limb.
Either the iterates come back, after q iterates, to the critical puzzle piece of level 0, or eventually they escape. Escaping means that at the q-th iterate it goes here; I call this set X_0 tilde, and it has a twin on the other side, X_0. And if it escapes at the mq-th iterate, that means that at the (mq minus 1)-th iterate I am in the set here, consisting of the puzzle pieces of level 1 that are the preimages of those puzzle pieces here. It is the same in both places; the picture is the same. The puzzles have been set up so that the pictures are the same; the only difference is that some pieces are put in by hand. When you pull back, you have at each level to define by hand the level m plus 1 puzzle pieces surrounding beta and beta prime, but this is done once and for all, for everybody, using this universal puzzle piece construction, so those boundaries move holomorphically. So I was just saying that you have this dichotomy: either you are captured, you are in the satellite copy and you keep returning to the central puzzle piece under the q-th iterate all the time, or you escape, and you do that by going through X_1 tilde. The good thing about X_1 tilde is that it is relatively compact in the puzzle piece at the preimage of beta, so you have a nice annulus there. And if you come in there, then necessarily you are in one of these dyadic decorations of your little copy M_{p/q}, the satellite copy, and these are naturally indexed by dyadics r over 2 to the power m. These pictures are similar to the ones you draw for quadratic polynomials. So either you keep returning, or at the first time you fail to return your critical value is outside, in this part like this, so that at the next step it will be here and then come out, or it can go to the other side. So here is the unit disk, here is your copy M_{p/q}, and then you have these dyadic decorations: this is the one-half, and these are the one-quarter and the three-quarters; this corresponds to m equal to 1, this corresponds to m equal to 2, et cetera. The main point I want to get to is that when you are, for instance, here in this one-half decoration, then the level mq puzzle pieces adjacent to alpha and to its preimage map, under the q-th iterate, exactly onto the full critical puzzle piece q levels down, so that these puzzle pieces have the property that both of them map one-to-one onto a larger disk containing them. And because you have this enlargement, as with a quadratic-like map, this gives you the usual Cantor set construction: they will just shrink down to the points of the Cantor set when you take preimages. And because the critical value does not intersect them, you have something that moves holomorphically in parameter space, and it moves holomorphically all over this dyadic wake. So when you have a point in the Julia set, you have essentially three possibilities: either it gets captured by beta, or it gets captured by the little Julia set of the renormalization of the q-th iterate, or it returns infinitely often to the set X_1. And similarly when you look at the critical value: either the critical value is eventually captured by beta, or it is captured by this little Cantor set.
So when you are out in one of these dyadic decorations, you have your little filled Julia set that has become a Cantor set. You can be captured by that Cantor set. Or you come infinitely often to this union of puzzle pieces which are relatively compact in P0 of beta prime, so that you can apply the classical puzzle construction to these. And you can just use holomorphic motion arguments to transfer nests — well, convergence of the nested puzzles in dynamical space — to convergence of the nest in parameter space. So it's a way to avoid these arguments about hyperbolic distance and diameters and stuff, and just use holomorphic motion directly to pass from dynamical space to parameter space for these parameters. That was sort of brief. So, any questions? So Carsten, is it right to say from your proof that, sort of a posteriori, the lesson is that somehow all the cases that are kind of tricky in the quadratic polynomial case are actually the same in the parabolic one? The renormalizable ones, if I understood you correctly, are in the parabolic case actually also just renormalizable in the usual sense. And in the Yoccoz argument, apart from these cases that you were talking about, where you either end up inside this quadratic-like map or you end up on the beta fixed point, you also have an argument similar to Yoccoz's original argument. Yes. So in these two you're right, completely right, that the arguments are completely analogous in these cases, in the renormalizable case and the Yoccoz parameter case. Except that when you have this capture situation, we just do holomorphic motion directly to transfer from dynamical space to parameter space, because we have something which persists in parameter space. And so you can just transfer by standard holomorphic motion arguments to parameter space. So is it right that your maps are also conjugate? Yes, except that we don't know if they are conjugate if the fixed point is indifferent, with a Cremer cycle or even a bad Siegel cycle. Because we don't know anything about local connectivity of Julia sets. And it could be that the critical point is recurrent to the beta fixed point, so that for Peter's surgery, which could have given us a conjugacy, we don't know how to prove that what comes out is actually a homeomorphism conjugating the dynamics. Okay, that makes sense. Thank you. So does this homeomorphism coincide with the limit of the holomorphic motion of the parameter space along the segment 01? Yes, but we didn't prove it. But there's a way to set up puzzle pieces in the dynamical setting, or in the hyperbolic setting different from zero, where these puzzle pieces completely resemble the parabolic ones, and these puzzle pieces converge to the parabolic puzzle pieces. Alex? Sorry. No. Just keep in mind that we have to catch the lunch eventually. Just in this hyperbolic setting from the previous question, do you have to do radial convergence in order for the puzzle pieces to converge, or can you somehow control them? Yes, well, sub-horocyclically. Yeah. So, yeah, you need to... Can you understand what happens in the horocyclic case? I never tried. So I should say maybe. Thank you. So thank you very much. Let's thank the speaker again. Thank you.
In a recently completed paper Pascale Roesch and I have given a complete proof that the connectedness locus M_{1} in the moduli space of quadratic rational maps with a parabolic fixed point of multiplier 1 is homeomorphic to the Mandelbrot set. In this talk I will outline and discuss the proof, which in an essential way involves puzzles and a theorem on local connectivity of M_{1} at any parameter which is neither renormalizable nor has all fixed points non-repelling, similar to Yoccoz's celebrated theorem on local connectivity of M at the corresponding parameters.
10.5446/57333 (DOI)
So, yeah, as Dirk mentioned, my title is the visual sphere of an expanding Thurston map. And to start my talk, I first have to make some basic definitions here. I mean, what is a Thurston map? Well, I think for this audience, I don't have to say much about it. I will define what an expanding Thurston map is. You know, then of course, I get to the visual sphere of this. And the main theme of my talk will actually be the conformal dimension of the visual sphere of an expanding Thurston map. Again, conformal dimension, I think, was mentioned in some talks before, but I will remind you of the definition. Okay, so let's get started here. First, some references and credits. Many of the basic definitions that you will see in my talk can be looked up in my book with Daniel Meyer about expanding Thurston maps. This book is actually available on arXiv, so you can find many of the basic things in this book. And towards the end of my talk, I will mention some more recent results. This is work in progress with Misha Hlushchanka and also with Daniel. Okay, so all right, let's get to the basic definitions here. Quick reminder: what is a Thurston map? A Thurston map is a branched covering map on the topological two-sphere that I like to call S2, with the following property: if you look at the critical points, the points where the map is locally not a homeomorphism, then these critical points have a finite orbit under iteration. Right, or to phrase it in other terms, if you look at this finite set of critical points and apply the n-th iterates to these critical points and take the union of all the sets that you get, well, then you get a finite set. And this set is called the post-critical set of f. All right, so Thurston maps are branched covering maps on the topological two-sphere with a finite post-critical set. So of course, I have to give you some examples of Thurston maps. And the basic example for the rest of my talk is actually given in this picture. In some sense, all the later things that I will define are somehow already encoded in this picture, which gives a specific example of a Thurston map. So let me explain what is going on here. The two-sphere that I'm looking at in this situation is actually this picture here on the right, which I like to call a pillow. So you just take two unit squares, let's say, and you glue them along the boundary; then you get this pillow that I call S0. So how do I get a Thurston map out of a situation like this? Well, so I have my pillow, and now on the left I consider some cell decomposition of my two-sphere that in this situation is obtained by just gluing together some squares. And this is indicated here on the left. I mean, you have these topological squares, and you glue them together to get some cell decomposition of the topological two-sphere that I call S1. And well, to get a map out of that, essentially what I'm doing is I take these small squares, map them to the faces of my pillow, and then do some kind of Schwarz reflection procedure. And of course, I mean, I have to be careful how I rotate my squares. You know, to get a uniquely defined map, I introduce these markings here. So in my picture here, you should think of these black points always going to this vertex here of my pillow. And the points that I marked white, they go to this white point here. Right? So if I take these markings into account, then there is a unique kind of scaling map that takes this small square here and maps it to, let's say, the top face of the pillow.
And then, as I said, I mean, then you do a kind of Schwarz reflection procedure. So in other words, this neighboring square here, this will be folded under and will be mapped to the bottom phase of my pillow on the right. Okay? So if you think about this for a second, then you see, I mean, there's a kind of well-defined map that you get out of this Schwarz reflection procedure. I mean, the fact that the map is well-defined essentially comes from the fact that at each of these vertices of my cell decomposition, an even number of these smallest squares come together. Right? So this means if you go around one such a vertex, I mean, then somehow everything is consistent and well-defined. So to get a thirst map out of that, we need some kind of identification of the sphere that we see here on the left with the sphere on the right. And I indicated this in the bottom of picture. So in other words, I mean, you somehow want to smash this picture so that you see essentially a pillow again. And this is indicated here on the left. Right? Of course, there are many ways somehow of find a way to identify my S1 here with my pillow. But the procedure that I'm following gives me a map that is well-defined up to a natural equivalence that is relevant for thirst maps. And this is thirst and equivalence. I will not even define it because in the maps that I'm interested in, thirst and equivalence will just be topological conjugacy. So for the moment, just think of this as up to some natural equivalence relation. There's a well-defined map to get out of these geometric pictures. That makes complain very quickly why my map here actually has a finite cost critical set. And let's think where a critical points can occur. So if I'm looking at such an interior point of one of my smaller squares here, then it's clear that the map is a local homomorphism. Right? I mean, the map is just a scaling map so nothing really happens. So interior points of my squares, they are not critical points. The same actually for interior points of these edges. Right? I mean, again, I mean, at these edges essentially have some kind of a folding map. And again, these are kind of local homomorphisms so you won't have any critical points there. So this means the only critical points that can possibly arise are these vertices of my cell decomposition. And again, I mean, if you think about this for one second, then you will see that you do not necessarily get a critical point. And for example, at these corners here, where only two of the squares come together, you won't have a critical point because again, you have a local homomorphism. Right? Essentially, near this point here, what the map does is again, I mean, it maps actually to this corner, it will just be a stretching map. So this means the critical points in my picture are here, here, here, here, here. Then there's another critical point on the other side of my sphere S1. Right? So this means I've actually a set of how many? Six critical points. Right? And then let's see what happens under iteration. Well, I mean, the way my map is defined, the critical points all end up in these corners. Right? And actually all the corners under one more iteration step map to this point here, the lower left corner, and this happens to be a fixed point of the map. Right? So in other words, the orbit of every critical point stays there forever. And this means my map here has four critical points, and these are exactly the corners of my pillar. Okay? So does that make sense? Are there any questions? Okay. 
So I have a first map with four critical points. So in a moment, I have to define what I call a tiles of solid compositions of all levels. And before I give a formal definition, let's just think of how I get this picture here, or rather this picture back from my map. Right? I mean, my selling composition defines the map, but conversely, I get the selling composition actually once I know what my map is. And the way to think about this is, okay, so you have your post-critical points here, and now you choose a Jordan curve that passes through these post-critical points. And in this case here, I can just use this Jordan curve. Yeah, later on, I will call it J. So let me also call it J here. And then I just pull back this post-critical, that is Jordan curve. And this essentially gives me the picture on the left here. Right? So the one skeleton of my selling composition is just the preimage of my curve J here. So my map is called H. So this is H inverse of J. Right? So the reverse of J gives me the one skeleton of my selling composition. Excuse me, I do not see that the black dots are critical points. The white dots, yes, but the black dots, I do not see that they are critical points. There are four inverse... As I said, I mean, these points here are not critical points. The points in the corner, but for example, this black dot will be critical points. Which one? I mean, the one here, there's a vertex right here, right? Which one? Sorry, this is not black. No, sorry. No, that point isn't black. No, sorry. So this means actually you're right. I think the black points are perfectly ordinary points. This is a map of degree four, and there are four inverse images of the fixed point, including itself. It's a perfectly regular point. Yes, so these points are not critical points. It's a degree five actually. Yeah, the map has degree five, right? I mean, there are 12 small squares. There's a funny flap somewhere on the left, right? On the left I see 12 small squares, sorry, 10 small squares, right? I mean, six on top and four on bottom. I mean, you don't see the ones on the bottom very well. Right? So there are five, there are 10 divided by two is five, so the degree is five. Maybe you could tell us again what's happening on the upper left. There's something sticking up. Oh, I see. All right, so maybe I should explain this, right? So this thing is sticking up, but it really consists of two squares glued together along three sides. So you should really think of again something that kind of looks like a pillar, but the pillar is cut open, right? And similarly on the base pillar, I cut it open like this so that I can glue in this so-called flap. So topologically the flap looks like this. Yeah, it's, you know, if you want, you can kind of think of the flap as kind of like a glove. Right, so you cut it open and you can stick your hand into the flap. Does that make sense? Yeah, so the flap really consists of two squares glued together along these three sides, right? Otherwise I would not have a topological sphere. Okay. All right, so, okay, so I have my post-critical point and my first map here, it has six critical points. None of the black dots is actually a critical point. And the post-critical set has four points, namely the corners of my pillar here. So I have four post-critical points. So I have a branched covering map with a finite post-critical set, so this is a first map. 
Now I can also easily explain how I get some kind of fractal object out of this picture, namely I essentially just iterate this procedure. And the way to think about this is, you know, in my picture, what I do is I essentially look at my pillow, it has a front and a back side, and I just replace front and back by the corresponding pictures that I see here. Right? So this means my top face will be somehow replaced by this kind of complicated picture, but the bottom face will just be subdivided into two by two squares. Right? So let me move to the next picture to show you what this kind of looks like. Let me see now, my iPad seems to be stuck here. Right? So this is somehow the picture that I want you to think about. So these somehow give this kind of iteration procedure, and you should think about this in some kind of a schematic way. This is just a way how I somehow glue the squares together to get these kind of higher iterates of my first picture. Right? So you should really think of this here, these flaps as glued together out of two squares, one on the front, one on the back, along three of the sides. Yeah? Where these bottom edges here are not the same, I mean the same for front and back, they somehow glue to different edges of my base. Right? And you know, if this kind of picture can easily iterate it, so the white squares here somehow correspond to the top of my pillow and the black squares to the bottom, and now I do what I said before in words, right? So the white squares corresponding to the top are always replaced by this picture, and the black squares correspond to the bottom are just subdivided into two by two. Right? And now it's easy to see how to iterate this, right? I mean, how do we get the next iterate? Well, we replace, for example, this white square again by this kind of generator, right? The black squares are just subdivided into two by two, and then you iterate, right? And you can do this forever, and then you get a sequence of some kind of polyhedral surfaces, and the visual sphere of my first map is essentially just what I get if I pass to a limit, right? More precisely, I mean, I could pass to some kind of from a house of limit. So the visual sphere of my map, I called it H, is what you get if you take a natural limit of the sequence of polyhedral surfaces, right? SN tends to infinity. All right? So this is roughly the visual sphere of this first map H. All right? So does that make sense on an intuitive level? Any questions about this? Okay. So yeah, let's keep this in mind for my more formal definition that I will give now. So let me tell you how you actually do this in greater generality for an arbitrary first map. So for an arbitrary first map, well, I mean, you look at the post-critical set, and very similar to what we did a moment ago, I mean, you fix a Jordan curve in my underlying sphere, Jordan curve, it's just a topological circle, passing through my post-critical set. Okay? And now I just pull back this Jordan curve by iterates of my given map. And well, then I get some kind of a one skeleton of a cell decomposition, and the complementary components of this set, they will be essentially these kind of sets that correspond to my small squares. Right? So they will be, one can show that they are topological two cells. And if I look at these topological two cells for each level in, then I get a natural cell decomposition of my underlying sphere. Right? So let me go back to my previous pictures here. Well, maybe I, this is a bit cluttered now. 
Let me actually move forward one. So you know, the cell decompositions are essentially seen in these pictures here. So this is the cell decomposition of the first level, the cell decomposition on the second level, the third level, and so on. Right? So you get the cell decompositions just by pulling back this Jordan curve. One thing in the general case that doesn't happen: namely, in general, if I pass from one level to the next level, then typically the cell decompositions have nothing to do with each other. In other words, if I pass one level up, I cannot expect that I get a refinement of the cell decomposition of the previous level. But if the curve is invariant, as was the case in our example H, then actually the cell decomposition on the next level somehow refines the cell decomposition on the previous level. In other words, if I look at an n-tile, a tile of level n, then it is somehow subdivided into tiles of level n plus one. But in general, these curves don't have to be invariant. You can pick any old Jordan curve. It's actually a theorem that under certain conditions, namely if the Thurston map is what I call expanding, then one can actually show that such an invariant curve always exists. Well, not necessarily for the map itself, but at least for an iterate of the map. Okay? Yeah, so this notion of expanding is given here. So I look at my tiles on these levels n. I fix an arbitrary metric that gives me the given topology of my sphere. And I look at the diameter of these tiles. And the requirement for expansion is that this diameter tends to zero uniformly as the level tends to infinity. Right? So intuitively, this just means that if I look at these tiles of higher and higher level, then they get smaller and smaller. As you see, for example, in my example here — so my map H is an expanding Thurston map. Okay? All right, so now I'm finally ready to define in full generality the visual sphere of an expanding Thurston map. Namely, if I have an expanding Thurston map, then, well, I can define these tiles. And with these tiles comes a natural kind of metrics that are called visual metrics. And roughly speaking, a visual metric is a metric that assigns to tiles a size, namely a size that somehow decays exponentially in a uniform way at a given rate, where this rate here is described by the so-called expansion factor of the visual metric, which I call lambda. Right? So roughly speaking, if I take any lambda sufficiently close to one, then there is, more or less up to some multiplicative ambiguity — I'm lying here a little bit — a unique metric that assigns to my tiles of level n a size lambda to the minus n. Right? So going back here to my basic example one more time. In these pictures you somehow see a kind of natural Euclidean path metric. And it gives tiles of level n roughly size two to the minus n. Right? I mean, as you see, in each iteration step I shrink by a factor two. So this means here I have a very natural visual metric with expansion factor two. But in general, you have to take this expansion factor close enough to one to get a metric. Well, I mean, of course, this ambiguity of lambda is a bit annoying, right? But in some sense, it's not very serious, because if you take visual metrics with different expansion factors, well, I mean, you cannot expect them to be the same up to a factor.
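Before moving on, here is a minimal LaTeX sketch of the two conditions just described — expansion, and the defining property of a visual metric. The constants and the exact quantifiers are assumptions filled in the way they appear in the Bonk–Meyer book, not quoted from the talk.

```latex
% Expansion: for some (equivalently, any) metric d_0 inducing the topology of S^2,
\[
  \max\{\operatorname{diam}_{d_0}(X) : X \text{ an } n\text{-tile}\} \longrightarrow 0
  \qquad (n \to \infty).
\]
% Visual metric d with expansion factor \Lambda > 1: there is C \ge 1 such that
\[
  C^{-1}\,\Lambda^{-n} \;\le\; \operatorname{diam}_d(X) \;\le\; C\,\Lambda^{-n}
  \qquad \text{for every } n\text{-tile } X
\]
% (the full definition also controls d(x,y) in terms of the largest level n for which
%  x and y still lie in two adjacent n-tiles; that part is assumed here).
```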
But if you snowflake one of the metrics, if you raise one of the metrics to a suitable exponent, then you actually get metrics that are the same up to a multiplicative factor. Right? So this means for an expanding Thurston map, these visual metrics are well defined up to a natural equivalence that I call snowflake equivalence. Right? So two metrics are snowflake equivalent if I have a relation of this type. Right? So this somehow means that if I define the visual sphere of my expanding Thurston map by just picking one of these visual metrics, then I don't really get a fully well defined metric object. I get a metric object that is just well defined up to this kind of snowflake equivalence. Right? And if I'm interested in properties of my visual sphere, well, then I always have to keep in mind that the properties that are meaningful should at least, you know, respect the snowflake equivalence. So to give you an example, it doesn't really make sense to talk about rectifiable curves on my visual sphere, because they are immediately destroyed if I snowflake my metric. Right? But let me tell you some other properties that do make sense. So here's a basic theorem that records some basic properties of these visual spheres. So if I have an expanding Thurston map — there's a small technical assumption, namely, I want to require that there are no periodic critical points — you know, then if I look at the visual sphere of my Thurston map that I get by picking any visual metric, then I get a property that is called ALLC, annularly linearly locally connected. So I don't even want to define what this means. The experts will know what I mean. For the non-experts here, it essentially means that my sphere doesn't have cusps sticking out like this. Right? So this is a condition that is very similar to how one defines a quasicircle, but just one dimension up. Okay? So all I'm saying here is these visual spheres have somehow good topological properties. They also have good measure-theoretic properties, namely, these visual spheres are always Ahlfors regular, where the exponent of Ahlfors regularity depends on this expansion factor lambda. Right? So as I said, lambda has to be very close to one. So roughly speaking, what this means is we should think of our visual spheres as a kind of fractal object with a very large Hausdorff dimension. I will remind you in a moment what I mean by Ahlfors regularity, but just one word here that somehow connects the dynamical properties of my map with the geometry of this fractal sphere: namely, the Hausdorff Q-measure in this case is a very natural measure that comes up in dynamics — namely, it's essentially the measure of maximal entropy. Right? So remember, the measure of maximal entropy of a map is just a measure that maximizes the measure-theoretic entropy. And you know, it's a well-known theorem that this is equal to the topological entropy, which you can easily compute here for these expanding Thurston maps: namely, it's just the logarithm of the topological degree of the map. Right? So what I'm saying here is that somehow the measure theory of my fractal sphere encodes, well, this natural measure of maximal entropy. Okay. Let me see. So what do I want to do next? Yeah, I wanted to quickly remind you of Ahlfors regularity, right? So a space is Ahlfors Q-regular if the Hausdorff measure of a ball of radius r roughly behaves like r to the Q, up to some fixed multiplicative factor. Right? So I think this probably has come up in previous talks.
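For reference, the Ahlfors regularity condition recalled at the end of this passage, written out as a formula; the normalization by Hausdorff measure is the standard one and is assumed here.

```latex
% Ahlfors Q-regularity of a metric space (X, d): there is C \ge 1 such that
\[
  C^{-1} r^{Q} \;\le\; \mathcal{H}^{Q}\big(B(x,r)\big) \;\le\; C\, r^{Q}
  \qquad \text{for all } x \in X,\ 0 < r \le \operatorname{diam}(X).
\]
```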
I don't have to spend much time on this. All right. So I think I've now given you some, you know, precise definition of this visual sphere of an expanding Thurston map. I also gave you an example to think about this in intuitive terms. So now I want to define the conformal dimension of this visual sphere. And for this, I quickly have to recall what a quasisymmetric map is. So if I have two metric spaces X and Y, and if I have a homeomorphism between these two spaces, then I call the homeomorphism quasisymmetric if, essentially, it distorts relative distances — given by ratios of distances — in a quantitative way, and the quantitative control is given by this kind of distortion function eta here, right? I will give you a more intuitive picture in a moment. So you know, if you've never seen this, you can think of quasisymmetric homeomorphisms as homeomorphisms that somehow give control for relative distances. This leads to a natural equivalence of metric spaces: namely, I call two metric spaces quasisymmetrically equivalent, well, if there is a quasisymmetry from one space to the other. So here's a more geometric way to think of quasisymmetric maps. Namely, roughly speaking, a quasisymmetric map is a map that takes metric balls in the one space and maps these metric balls, well, not to balls in the other space, but to kind of quasi-balls — I mean, roundish objects — with the property that if you look at the naturally defined inradius of this image and the naturally defined outradius, then this ratio of inradius to outradius is uniformly controlled, independently of what ball you take in the preimage. So this is a bit different from the bi-Lipschitz condition that you probably all know, right? For a bi-Lipschitz map, you would require not only that these radii R1 and R2 are comparable, but that they're also comparable to the original radius R here, right? This characterizes bi-Lipschitz maps. And this is a stronger condition than quasisymmetry, so every bi-Lipschitz map is a quasisymmetry. So here's a quick run through some properties of quasisymmetric maps, right? This is somehow the thing to remember about quasisymmetries: they are homeomorphisms that map metric balls to somehow roundish sets of uniformly controlled eccentricity. And all of you know what a quasiconformal map is, right? I mean, these conditions are closely related, so you should think of quasisymmetry somehow as a metric space, global version of quasiconformality. Here are some kind of general implications. So as we have just seen a moment ago, bi-Lipschitz maps are quasisymmetric maps. And these kind of snowflake equivalences, right — bi-Lipschitz maps up to this snowflake operation — they are also quasisymmetries. So every bi-Lipschitz map is a snowflake map, and every snowflake map is a quasisymmetry. And one can actually define quasiconformality for arbitrary metric spaces by some kind of limit condition. So we get this kind of natural chain of implications. So if you are globally in some nice space like Rn, then actually quasiconformality, this kind of infinitesimal condition, is equivalent with quasisymmetry. For regions in Rn, this is not quite true, but at least it's true locally, right? So all I'm saying here is that quasisymmetric maps are not so terribly different from quasiconformal maps, and they make sense in a general metric space setting. Any questions? Okay. So now I can finally define the Ahlfors regular conformal dimension, which I will just call the conformal dimension of an arbitrary metric space.
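The eta-condition for quasisymmetry mentioned above, spelled out; this is the standard definition and is stated here only for completeness.

```latex
% f : X -> Y is \eta-quasisymmetric if f is a homeomorphism and there exists a
% homeomorphism \eta : [0,\infty) -> [0,\infty) such that for all x, a, b \in X with x \ne b,
\[
  \frac{d_Y\big(f(x),f(a)\big)}{d_Y\big(f(x),f(b)\big)}
  \;\le\;
  \eta\!\left(\frac{d_X(x,a)}{d_X(x,b)}\right).
\]
```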
Really roughly speaking, what I do is this: I look at my space and now I want to somehow squeeze it as much as I can by a quasisymmetric homeomorphism, where I would like the image space under my quasisymmetry to be Ahlfors regular, and I somehow want to make this Ahlfors regularity, this exponent Q, as small as possible, right? So I take the infimum of all Q of Ahlfors Q-regular metric spaces Y that are quasisymmetrically equivalent to my given space. Right? And this actually makes perfect sense for the visual sphere of an expanding Thurston map. Right? The metric there is not well defined; it's only well defined up to snowflake equivalence, but snowflake equivalences are quasisymmetries. Right? So if I relax this to quasisymmetric equivalence, then this makes perfect sense. So it makes sense to talk about the conformal dimension of the visual sphere of an expanding Thurston map. Well, I mean, I always have to drag around this small technical condition here, that I don't want to have periodic critical points. Actually, this relates to something that I should maybe have said earlier. I mean, it can very well be that the Thurston map is a rational map. And for rational maps, this condition of expansion is equivalent to requiring actually that I don't have periodic critical points. Right? So somehow this condition here comes up very naturally. For general Thurston maps, it can very well happen that I have periodic critical points, but you know, somehow these maps are a bit weird and special. So it makes sense to exclude them. Right? So this is the problem that I want to discuss for the rest of my talk. You know, what is the conformal dimension of the visual sphere of an expanding Thurston map? And, you know, why am I interested in this question? Because the general theme here is, you know, I somehow want to reconcile dynamical properties of my map with properties of this fractal object. Right? And I hope to see a lot of the dynamics encoded in the geometry of my visual sphere. So let me mention one basic fact here, due to Haïssinsky and Pilgrim and independently due to Daniel Meyer and myself: namely, if I have an expanding Thurston map, then it is conjugate to a rational map if and only if the visual sphere of my Thurston map is quasisymmetrically equivalent to the standard sphere. And what this actually means is that for rational Thurston maps — I mean, for maps that don't have a Thurston obstruction — this whole question about the conformal dimension is actually not so interesting, because, you know, my visual sphere is quasisymmetrically equivalent to the standard sphere. And, you know, the standard sphere is Ahlfors 2-regular. So this means, you know, at least I can squeeze my exponent all the way down to two, but I cannot squeeze further, right? Because, you know, it's a standard fact that the Hausdorff dimension cannot be less than the topological dimension, which is two in this case. So this means for non-obstructed maps, for rational maps, my conformal dimension is actually equal to two. So for those of you that are only interested in rational maps, you might say, all right, that somehow clarifies the picture completely, and all the other maps, the obstructed maps, are not so interesting. But, you know, if you come from a more geometric direction, well, I mean, you see these fractals out there that somehow come from these obstructed maps, and you would still try to see, you know, what can you say about these visual spheres of obstructed maps?
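The definition of the Ahlfors regular conformal dimension just given, written as a formula.

```latex
\[
  \operatorname{confdim}_{AR}(X)
  \;=\;
  \inf\Big\{\, Q \;:\; \exists\, Y \text{ Ahlfors } Q\text{-regular with } Y \simeq_{qs} X \,\Big\},
\]
% where Y \simeq_{qs} X means that Y is quasisymmetrically equivalent to X.
```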
So this question about the conformal dimension still makes sense. I mean, these are naturally defined fractals. So you know, we would like to be able to say something about these. Right. And here's a slide that somehow summarizes some of these things that I just said, right? So, in other words, you know, phrase in different language, this question about conformal dimension is only interested for first maps that have a first and obstruction. Yes, so I won't even define what a first and obstruction is because I think you have seen this in our previous talks. Let me maybe very, very briefly go back to my map age that I mentioned earlier. This map actually is an obstructed map and the first obstruction comes from this Jordan curve here. So if I pull it back, I get this curve, I get another curve here, I get this kind of peripheral curve, you know, and if you compute, what you have to compute to decide whether you have a first obstruction or not, I mean, then you get a first matrix that is just one by one and you know, the degree here is two, the degree here is two and you get actually the wrong Frobenius eigenvalue one. So this is an obstructed map. So yeah, these are the maps that are the only interesting maps for this question of computing this conformal dimension. There's actually a general conjecture that was posed by Lukas Gaia, Kevin Pilgrim and myself about 15 years ago or longer ago, where we actually give a very precise prediction what the conformal dimension should be in the general case for these obstructed maps. So you can compute some kind of critical quantity, some kind of critical exponent from that dynamical data. It's a bit technical, so I want to skip it for my talk. So roughly speaking, what you do is you look at first and obstructions on the map, you compute some kind of a matrix out of that, that depends on some kind of a variable exponent Q, and then you want to somehow, you know, go to the lowest exponent that somehow makes obstructions disappear. So this is very roughly the idea of this exponent. And our conjecture somehow predicts that the conformal dimension of these visual spheres in this case is equal to this kind of critical exponent. So one inequality was actually proved by Kevin Pilgrim and Peter Heissinski. So they showed that this conformal dimension is always greater equal than this predicted exponent. But up to now, there hasn't been a single non-trivial case where one actually has established a quality. So for our map, for this map H, the prediction actually is that the dimension of the visual sphere in this case is equal to 2. I mean, you can compute this critical exponent in this case, and it predicts that the dimension in this case is 2. So intuitively, I mean, what this somehow means is that, you know, well, you have this kind of fractal sphere in this example. And these fractal spheres, these flaps, let me go back to the picture. I mean, for this example H, I mean, you get this fractal sphere that you somehow get out of the iteration of this picture. And in this case, we predict somehow that the conformal dimension is equal to 2. And intuitively, what this means is that you should be able to somehow shrink all these flaps that stick out from my base to something very, very small by a quasi-symmetry so that you almost don't see it anymore. Right? That's roughly what it says. This conformal dimension was defined as an infimum. Right? 
And another general theorem by Haïssinsky and Pilgrim actually says that for obstructed maps, you know, unless you're a Lattès map, you cannot really expect a minimum. Right? So for our map, the conjecture really predicts that the infimum is 2, but the infimum is not a minimum. So you can get Ahlfors regularity up to quasisymmetry with exponents 2 plus epsilon, but you can never quite get 2. So any questions before I go along further? Okay. So let me talk about some recent work with Misha and Daniel. Namely, our result is... Maria? Yes. Just the other day we heard a talk by Dylan Thurston where he had some formula for the conformal dimension in a similar situation as a critical exponent for some energy, and then there was a follow-up talk by Park with his computations. So is this just part of the same series as what you're talking about? No, our situation is different, because essentially if you have a rational map of the type that I'm discussing here, as I said, they don't have periodic critical points, and this means that the Julia set is actually the whole sphere. And this is quite different, I guess, from the setup that Dylan and Insung... May I add something here? This is basically the opposite case: here you have no periodic critical points, and I assume that every cycle has a critical point for the theorem I mentioned. Yes, in some sense Dylan's results and what I'm talking about here — these are orthogonal things. I mean, they're closely related, they somehow sit in the same universe, but they somehow give exactly opposite sides of the same bigger picture. So you don't think that there is a unifying theorem? Well, that's a very interesting thing, right? I mean, Dima Dudko and, I think, Laurent Bartholdi, right — they somehow created a theory of these kinds of Thurston maps, expanding Thurston maps, that also allows, I think, periodic critical points, and it would be very interesting to somehow create a theory that unifies these things. Thank you. Actually, you know, since you mentioned Dylan's talk and these kinds of energies: I think, you know, with these energies, Dylan probably gave a very similar kind of definition of some critical exponent, and this exponent here that I mentioned is defined in a very, very similar way — not with energies, but with some expressions that are very, very closely related. I have a paper with coauthors where we blow up periodic critical orbits and parameters, and you obtain a situation without periodic critical points. That's actually right. I mean, there is a kind of way to, you know, take a theory of periodic critical points and somehow blow them up, but then, you know, you still don't get an expanding Thurston map in my situation. More questions? Okay, then please go ahead. Okay, so, you know, our argument applies to a slightly larger class of maps, but, you know, let me not go into this. Let me just stick to this one map here that I discussed earlier. So, you know, our theorem is that in this case we really can prove that the conformal dimension is equal to 2. And you know, let me just tell you a little bit about how one even approaches this type of problem, right? So well, all right. In this case, you know, 2 is somehow a trivial lower bound, because you can never go below the topological dimension, which is 2, of course, for 2-spheres here.
So what it really boils down to is to construct metrics that are quasi-symmetry equivalent to well the visual metric of our map age and we want them to be, to give you a 2 plus epsilon regular metric space, right? So this is really what we have to do. For each epsilon, we somehow have to construct good metrics, metrics that are quasi-symmetry equivalent to our visual metric that are half of 2 plus epsilon regular. And it actually turns out that constructions of metrics in general is a very difficult problem. You know, it's very difficult to somehow construct these metrics. But there is some theory, some general theory that actually works for arbitrary metric spaces that was actually developed by Carrasco. So he has a general theory that somehow solves this problem of constructing of metrics and translate this into a problem of discrete modulus. You know, somehow, I mean, I will tell you in a moment what I mean by discrete modulus, but somehow there's a machinery that turns modulus problems, I mean, that turns this problem of constructing metrics into a problem of, you know, estimating discrete modulus. In this situation for expanding Thurston maps, I mean, there are some simplifications that you can run here. So you know, we have some kind of self similarity and that somehow makes the sole Carrasco machinery a bit easier. So let me just tell you, you know, without going into much detail here, you know, what this all turns out to be. So you know, as you know, I mean, modulus, classical conforming modulus is defined by minimizing some kind of an energy. Well in the classical case, the energy is a two energy. Here we want to minimize some energy with some arbitrary exponent, greater or equal to subject to some kind of admissibility condition right in the classical case, it's an integral condition. But here, you know, we just use the discrete geometry that we have given by our tiles. And roughly speaking, what you want to do is something like this. Take this kind of Jordan curve, J that I talked earlier about, that contains the cost critical points and just let's stick to our map age, right? I mean, it had these four cost critical points. And we had this kind of Jordan curve, which in our example was just the boundary of our pillow. Okay. And now I have these cell decompositions that I get from my end tiles. And what I essentially want to do is I want to look at chains of these tiles. So things that look like this, that join non adjacent sides of my Jordan curve, right? I mean, in this case, my Jordan curve is naturally subdivided into four edges, right? And you know, I call two edges a non adjacent well if they are kind of opposite. Right? So I mean, this would be a chain that joins opposite sides. I mean, of course, I can also have something like this here, right? And now what I do is I put a weight on all my tiles. And my admissibility condition is that whatever weight I put, it should satisfy the property that for these chains joining opposite sides, the sum that I get from my chain of all these weights is greater equal one. Right? I mean, this completely corresponds to the usually into real condition that you see in the definition of classical conformal modulus. And now what you do is you just minimize the LQ energy of the given weight subject to this admissibility condition. Okay? This gives you this discrete modulus. Well, it depends on this exponent Q, of course. Well, of course, it depends on the given map and it depends on this level in, right? 
Because it enters by looking at what tiles I actually use for these chains. Okay? So for given Q and for given n, for a given map f, I get this kind of combinatorial quantity that I call m(f, n, Q). Now one can prove the following version of Carrasco's theorem: namely, if for given Q this m(n, Q) is small enough for some n, then there exists a metric that is Ahlfors Q-regular and quasisymmetrically equivalent to the visual metric. This is really the version of Carrasco's theorem in this case. I cannot say anything about the proof here. I mean, roughly speaking, it boils down to considerations that Daniel and I used in our book when we constructed visual metrics. So roughly speaking, the metrics that you get will be metrics such that the diameter of tiles is comparable, up to a constant, to some optimal weight function — very roughly. Okay. What is the n in this quantity? Sorry? What is the n? n is the level of tiles. Look, I mean, in my definition... Oh, it's the tiles at level n, okay. Yeah, it's tiles of level n, right? So for each level, I get a different quantity. And these quantities for different n are actually related. This is on my next slide. So I have some kind of submultiplicativity here. Yeah, so this is a fact that follows essentially from the self-similarity of my visual sphere. So with this kind of submultiplicativity, and then if I use — what is it called — Fekete's lemma, then one can show that the submultiplicativity gives you some kind of exponential behavior of these moduli with n, right? If n runs, for fixed Q and fixed map, you know, these moduli have some kind of exponential behavior, where I have some kind of critical exponent here that I get for each Q. And roughly speaking, what I'm saying here — what I said in my previous theorem — is that if this exponent is negative, well, then I get an Ahlfors Q-regular metric in my quasisymmetric equivalence class. So these critical exponents, for given Q, satisfy a natural monotonicity property that essentially comes from the fact that, you know, if I take a larger exponent, then my sums will be smaller, right? I mean, let's go back here to the definition of my quantity, right? I mean, these weights — well, I mean, I can always assume that they are capped at one, right? If I have a number less than one and raise it to a higher power, I get a smaller quantity, right? So what this means is I get this kind of natural monotonicity property. And this means that these critical exponents here are actually decreasing. So typically, you know, I have a picture that kind of looks like this: if I draw Q here on this axis and my kind of exponent c(Q) on this axis, then I get a graph like this. And what I'm interested in is essentially the zero of this graph. And, you know, life would be much, much easier — and I guess this is also something that you've probably seen in Insung's or Dylan's talk — if for these exponents we had a strict inequality, but in general one can only prove a non-strict inequality. And this causes a lot of headache in this whole theory. Okay. And here's what we can show in this case: namely, for our specific map H, it's not hard to show that for exponent two, these moduli behave roughly like n, as n runs, right? So this means that this exponent for Q equal to two has to be equal to zero, right? I mean, up to plus minus epsilon, this behaves exponentially, right — well, with exponent zero, zero times n, and then there's an epsilon ambiguity here.
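To make the combinatorial modulus concrete, here is a small, hedged Python sketch: given tiles (just indices) and "chains" (lists of tile indices joining two non-adjacent sides of the Jordan curve), it minimizes the Q-energy of a nonnegative weight vector subject to the admissibility constraints described above. The data structures and the generic convex solver are my own illustration, not anything from the talk or from Carrasco's paper.

```python
# Hypothetical illustration of the discrete (combinatorial) Q-modulus:
#   minimize   sum_i w_i**Q
#   subject to sum_{i in chain} w_i >= 1  for every admissible chain,  w_i >= 0.
import numpy as np
from scipy.optimize import minimize

def discrete_modulus(n_tiles, chains, Q=2.0):
    def energy(w):
        return np.sum(np.abs(w) ** Q)

    # One inequality constraint per chain: total weight along the chain is at least 1.
    constraints = [
        {"type": "ineq", "fun": (lambda w, c=c: np.sum(w[list(c)]) - 1.0)}
        for c in chains
    ]
    bounds = [(0.0, None)] * n_tiles
    w0 = np.full(n_tiles, 1.0)  # weight 1 on every tile is always admissible
    res = minimize(energy, w0, bounds=bounds, constraints=constraints)
    return res.fun, res.x

# Toy example: 4 tiles, two disjoint chains of length 2 joining opposite sides.
mod, weights = discrete_modulus(4, chains=[[0, 1], [2, 3]], Q=2.0)
print(mod, weights)  # for Q = 2 each chain contributes 1/2, so mod should be about 1.0
```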
So for my first map H, for my example, this critical exponent is zero. So what this means is one would expect that actually if I go a little up, I get a negative exponent, but unfortunately, we don't know the strict inequality in general. So all we get from general theory here is non-strict inequality, right? And the big problem now is somehow is to work around this problem. Some won't let me write the non-strict inequality, right? I mean, so from general theory, you get this non-strict inequality. And the big headache now is somehow is to promote this to something better. And I think my time is pretty much up here. So let me just mention this very last lemma here that essentially gives me what I want, namely, as I said, for exponent two, one can somehow find an admissible weight function that gives me roughly this asymptotic behavior N. But now comes the crucial fact, namely, I can find weight functions that almost have an exponential decay. I mean, for technical reasons, we can't really prove full exponential decay. I mean, there's some kind of correction log factor, but it doesn't really matter for what I'm interested in because, you know, once I have this fact, I can easily estimate these critical quantities for two plus epsilon. If I just take the critical weight function for exponent two, raise everything to the power two plus epsilon and then do some kind of trivial estimates based on the lemma, right? I estimate this expression here with N. The other expression has an almost exponential decay and it beats this N. So this means as N goes to infinity, this expression tends to zero, right? And this means my C two plus epsilon is really strictly less than zero. My general lemma then gives me an alphas to regular metric with exponent two plus epsilon and this squeezes my conformal dimension between two and two plus epsilon. Epsilon was arbitrary, so this gives me that my conformal dimension is actually complete. So in the last minute here, let me just make some concluding remarks here and point to some open problems. So this whole machinery we could only make to work because in this case the critical exponent, I mean, the alphas regular conformal dimension was predicted to be equal to two. And this allows us actually to use classical conformal mapping theory methods, right, that are somehow adapted to exponent two. If the alphas regular conformal dimension is predicted to be bigger than two, then our methods just don't apply and, you know, we have no clue how to even start, you know, handling these type of cases. So big open problem in this general area, and as I said, you know, this was probably also mentioned in one way or other in Dylan's talk is this trick monotonicity. It would be really nice to have a kind of general theorem in this direction, but I predict that this is probably very, very hard to show. A general question that is interesting and independent of this whole discussion of conformal dimension is, you know, what are the summer properties that I encoded in these visual spheres. So one interesting thing is, you know, if I look at my visual spheres from an obstructed map, how do I actually see a first and obstructions there. So this is something that a former student of mine explored Angela Wu. And this is also something you know that Misha and I Misha, I have been talking about. So it would be some are very interesting to re prove Thurston's theorem, you know, but the existence of Thurston obstructions for non rational maps from this point of view of fractal geometry. 
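A small numeric sketch of the Fekete-type step used in the last two passages: from a (roughly) submultiplicative sequence of moduli one reads off the exponential rate c(Q) as an infimum. The sample sequence is invented purely for illustration; it is only submultiplicative up to a bounded factor.

```python
# Illustration of extracting the rate c(Q) from a submultiplicative sequence m_{n+k} <= m_n * m_k:
# by Fekete's lemma, c = lim_{n->inf} (1/n) log m_n = inf_n (1/n) log m_n.
import numpy as np

def rate_from_sequence(m):
    """m[k] is the modulus at level k+1; returns inf_n (1/n) log m_n."""
    logs = np.log(np.asarray(m, dtype=float))
    n = np.arange(1, len(m) + 1)
    return np.min(logs / n)

# Made-up sample m_n = n * 0.9**n (submultiplicative up to a bounded factor);
# its rate is log(0.9) < 0, which the estimate recovers.
sample = [k * 0.9**k for k in range(1, 200)]
print(rate_from_sequence(sample), np.log(0.9))
```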
So yeah, that's all I have to say today. So thank you very much. Okay, well, thanks a lot for your very nice talk. Let's have some time for questions, perhaps starting with the online participants. Okay, this gives us time here locally any yeah, Laurent, I have to run over. I went for 60 minutes. I wasn't sure. I mean, maybe I should have gone for 15 minutes with 10 minutes for discussion. Yes, hello Mario. It seems that really the problem is constructing this clever function w. So can you tell us a little bit more about what it is like it gives very small weight to the flap for you Mabli. It's actually not defined an explicit way roughly speaking what you do is this I mean you take your kind of fractal pictures on the end level let me just schematically indicated like this. Right. I mean you you had these these pictures. And now what you do is, you know, this is a polyhedral surface. Let's just look at one one side of it. This is a polyhedral surface polyhedral surface is carrying natural conformal structure. So this means, you know, I can map this. I can use conformal maps to map it to whatever I like it to be. And you know, I've these four post critical points that make sense to look at conformal maps that map this to actually a rectangle and actually turns out that the height of this rectangle if I normalize the bottom size to be one to this natural conformal map let me call it phi in actually maps it to a rectangle of roughly height in. Okay, and the small squares they are mapped to something and the diameter of the small squares of the images of the small squares under this map. This is essentially my weight. Yeah, it miss ability condition essentially comes from having base like a length one here. And you know the area of my rectangle is in and this is, you know, the two weight of my two mass of my weight. Does that make sense. I get my way it's somehow not by writing it down explicitly, but I get it from an auxiliary conformal map. Okay, one more question. So this is a question about the conjecture that you. I know you didn't tell us what Q zero is. So, yeah, but how should I be thinking about that like you said it was a critical exponent is it like a pressure type thing or since you asked, you know, let me let me at least give you an idea right I mean, if you if you have a first and obstruction given by a system of curves. Right I mean then you can define some kind of mapping degrees, right, usually called M i j alpha, right and then D, and then you know you define a first matrix that has entries that somehow come from these mapping degrees. Right, so you get a matrix with these entries. So, right, and, you know, so you can compute a perron Frobenius eigenvalue from this matrix and you have a first and obstruction, if it is a greater equal one, right. Now all you do is, you put an exponent here q minus one. So you get some numbers that now depend on on an additional exponent and my first and obstruction, and you somehow want to, you know, if if if Q is very, very large. Then, you know, these eigenvalues tend to be less than one and some of you want to go as low as for the first time you see something bigger than one and this is this is this kind of critical exponent so it's the infimum of all q's rate equal to so that you get some kind of first and obstruction well with this exponent. Yeah, yeah, right. I mean, the mapping degrees are fixed. So if you, you know, raise these degrees to higher power, you get something smaller. I have a question there. 
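To illustrate the critical exponent described in this last answer, here is a hedged Python sketch: from the mapping degrees of a multicurve it builds the Q-dependent matrix with entries summing (1/d)^(Q-1) over preimage components, and locates by bisection the exponent where the leading (Perron–Frobenius) eigenvalue crosses 1. Only the shape of the computation follows the talk; the bisection bounds and data layout are my assumptions, and the example degrees are modeled loosely on the obstruction for the map H described earlier (two preimage components of degree 2 each, giving eigenvalue 1 at Q = 2).

```python
# Hypothetical sketch of the critical exponent from a Q-dependent Thurston-type matrix:
#   A(Q)_{ij} = sum_alpha (1 / d_{ij,alpha})**(Q - 1)
import numpy as np

def leading_eigenvalue(degrees, Q):
    n = len(degrees)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # degrees[i][j] lists the degrees of the preimage components involved.
            A[i, j] = sum((1.0 / d) ** (Q - 1) for d in degrees[i][j])
    return max(abs(np.linalg.eigvals(A)))  # spectral radius

def critical_exponent(degrees, lo=2.0, hi=20.0, tol=1e-8):
    # Bisection works because the entries, hence the spectral radius, are non-increasing in Q.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if leading_eigenvalue(degrees, mid) >= 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

degrees = [[[2, 2]]]          # one curve, two preimage components of degree 2
print(critical_exponent(degrees))  # expected: about 2.0, matching the prediction for H
```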
So this function is strictly decreasing. What's the monotonicity that you wish you had but don't? Well, look, I mean, let me go back here to my slides. Right. I define these kinds of quantities from a discrete modulus problem. And by this submultiplicativity property that I stated here, from Fekete's theorem I get some kind of exponential behavior in n. We have some kind of exponent here. And I would like to have strict monotonicity for these exponents with Q. You know, general theory only gives me this non-strict monotonicity, but I would really like to have strict monotonicity. But unfortunately, general theory doesn't give me this. You'd also like to have that equal to something that you know, right? Very good. And I didn't get it — this should also be equal to something you understand, is that so, if the conjecture is true? So essentially this c(Q) here — you know, the value where c is equal to zero, this should be the conformal dimension. So in my picture here, where I cross the Q axis, this should be the conformal dimension. But you don't know what c should be in general? I mean, no, this is probably, you know — it's hopeless. Well, I don't know. I mean, I haven't thought about it. But this is probably not something that you can say in general. I hate to interrupt this interesting discussion — and hey, you online people can go on for a little moment — but I still think we should thank Mario again for this nice talk.
Every expanding Thurston map gives rise to a fractal geometry on its underlying 2-sphere. Many dynamical properties of the map are encoded in this fractal, called the 'visual sphere' of the map. An interesting question is how to determine the (Ahlfors regular) conformal dimension of the visual sphere if the map is obstructed. In my talk I will give an introduction to this subject and discuss some recent progress.
10.5446/13815 (DOI)
Good morning everybody. We are going to be looking at working with Plone and getting started with your Plone site. So my name is David Bain and I have been using Plone for many years. Today we're going to be looking at the new version of Plone which has a new user interface. And I'm going to bring up my slides so that you can see how this system works. Great. So the topic is getting started with your Plone 6 site. And we're focused on people who would be editing and managing Plone 6 sites. I'm going to start with an intro which this is part of that intro now. And then we will follow on with a little bit about Plone so that we set the context. Then I will do a demo and what we consider first steps. After that we're going to look at content management and specifically the new way of doing it. You also want to become comfortable with user and group management. So look a little bit at that. And then we're going to take on a challenge of using the out of the box features of the new user interface to see if we can replicate. Obviously it's not going to be a perfect copy but we're going to implement some of the features that are on the Plone 6.0.org website. And then we'll do a review and we'll talk about what are the next best steps. I should say welcome to everybody who's here. And we will begin. What is out of scope for this presentation? We will not be looking at installing add-ons. We're not going to be talking about theme and custom content types. So just to be clear so that everybody understands where we're going, we will talk about content management, user and group management and some of the things around that. We're also focusing on the out of the box features. And so this is why we're not worried about add-ons. A little bit of housekeeping. Please stop me to ask questions when you have them. You can raise your hand and I will acknowledge you. Tell me if I'm speaking too fast or too slow or not loud enough. The material is about three hours or three and a half hours depending on how quickly we go through it. And we'll take two breaks. There may be some errors. This particular presentation is edited from an older presentation with an older version of clone. So maybe we'll see one slide that may still have that older version. Hopefully I was thorough enough and got rid of all of those slides. And you can contact me at my email address if you want to follow up with anything or if you have any questions. And the slides are available. So if you're looking at my screen, at the bottom of my screen, there is a link to the slides. A couple of things that we're assuming here. We assume that you're familiar with the following terms. And we'll talk about those terms because we're going to be using them throughout this workshop. So hopefully a little bit about clone at the URL is what a website is and what a web page is. We don't expect you to know a lot about clone at this point. So just a quick reference. This is a browser. A browser has an address bar and URLs are typed into the address bar. And those addresses carry you to websites. And websites are composed of web pages. For the vocabulary for clone, we will talk a bit about content types. Also look out for the word blocks, which are now the components that are used to create pages. I will sometimes use the word item during this presentation. And I almost always mean a page. And a page can be of any type. So it could be a news item or it could be an event or it could be a standard page. 
But just for a generic term, I'll sometimes use the term item. Okay. So I'll pause here and I see, is this Hosef has joined? Let's give you a chance to say hi. Is it Hosef? Hello. Hello David. I'm pronouncing the name correctly. I know I've seen the name and the spelling, but I don't know if I have it right. Yeah, the name is... Okay. Great. Do you have any expectations for this workshop in terms of what you hope to get out of it? No, I have other expectations and I want to go out of it because I want to see a tutorial to react. Volto. Oh, okay. This is probably very straightforward for you. Oh, it's the false. Okay. Ciao. Okay. Thank you. Sorry, no problem. Okay. Okay. All right. So we are going to continue with what is Plone. Plone is an enterprise content management system. And to put that in perhaps clearer terms, it allows non-technical people to create and maintain information on rich websites or intranets. It's very focused on making it easy to get content up. And also being able to control who can do what with the content, who's allowed to publish it, who's allowed to view it, and how it's shared and so on. So it captures all those aspects of content management. So at the heart of Plone, it's about how do you get content up onto the web? That's almost always what you're trying to do. And that content could be text, it could be images, it could be a news item, which sometimes combines text and images. And content has different purposes, as you can imagine. Content could be even a form that someone can fill out. Here are some of the types of content that are fairly common to add to a Plone site. And Plone helps you to organize and distinguish between those types of content. And we often refer to the whole thing as content, just for simplicity, hence content management. Plone has an approach to content management of organizing things into folders. So it feels a lot like how you might organize things in folders on your desktop machine or your laptop. Your content can be rendered individually, or you can combine bits of content. And with the new user interface, combining content is a lot easier than it was with the old system. But we still have this idea of a single page and what we call Listin pages, where multiple things are shown on one page and it often links you from one thing to another. The Listin pages tend to take the title and the description, sometimes called the summary, and present the title and description on the Listin page. Just a note, we have used Plone a lot, especially on bigger, more complex projects, because of its flexibility and feature set. If you have a team of users, it's a good time to think about using Plone. It's not as critical if you're a one person. But if you have varied roles and editors and different permissions, right away, it makes a lot of sense to think about Plone for your platform. And we have used Plone for different projects in the past because of that flexibility and the ability to support multiple users and permissions and roles and so on. If you need more information about Plone, you want to go to the Plone site. There's also Plone documentation and there's a Plone training site where you can find training like this, as well as more advanced training for theming and for development and for other things. So Plone 6 represents a new release. And there are a couple of things you will want to know about this. Plone 6 is in alpha. So we don't have the final release yet, but we have a very good idea of what the final release will look like. 
And it introduces a new user interface as the default interface. It's based around a system called Volto, which was designed to provide a modern interface on top of Plone's backend. The Plone 6 front end also provides something called Plone 6 Classic. This is the older user interface. And if you are an organization that has invested in Plone and want to continue using the older interface, you can still upgrade the Plone 6 and continue to use Plone Classic. There are some trade-offs between Classic and the new user interface. And I'll hopefully at least one of those things I'll mention later on in this presentation. Right. At this point, we're going to do a little bit of a demo and what I call first steps. And then after that, we'll take our first break. So let's go to a Plone 6 site and let's look at the new section. And this is a default Plone 6 site. And we want to ask a few questions. How is the URL organized for Plone 6 sites? And then we'll visit a few more sections and see if we can pick up a pattern. Okay. So I do have a few Plone sites up and running. Let's see if we can spin up another one. So so let me share my screen so you can see that. There we go. So here is a Plone 6 site. As you can see, it has a new user interface if you've worked with Classic Plone. Let's take a look at the news section. What I really want to focus on is actually the URL here. Plone tends to use human-friendly URLs. And so as you navigate through the site, you'll find that the URLs map pretty neatly with what you're looking at. So if you're at events, you can expect to go to slash events. And this is peculiar with Plone. Other systems, you need to create what you call slugs and things like that to map a human-friendly URL to some other backend address. This is not the case with Plone. It doesn't require any new layers or anything to do that. This is a feature of Plone. And as you see, as we navigate through the site, that is reflected. Okay, I'm going to go back to my slides. All right, so that's a pattern that I just wanted to point out very early so that you're aware of it as you look. The other thing I want us to pay attention to is the fact that Plone, often the pages that you go to are the main text with a title, a description, and main text. This is another common pattern. In the backend, the fields that are most common are title description and main text or body text. And most content types conform to that type of arrangement. That's something to pay attention to when looking at even the structure of individual items within Plone. As I said previously, URLs in Plone reflect the way content is organized within your site. And so you can read this and make some judgments about what's happening here. So, for example, this must be a news section. Perhaps this is a 2020 news and you might expect to have a 2019 and a 2021. In this example, this is again news with 2017 news and specific news about submitting talks for a conference. Again, these URLs follow the titles. And you can make your own judgment about what's happening here. Let's talk about logging into Plone. If you need to log into your Plone site, one quick shortcut is just go to slash login. That URL will bring up a login box and you can put in your username and password that you've been provided with. If you need to log out, you should be able to do one of two things. You can actually type slash log out in front of your site URL and that will log you out. 
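A quick aside that is not part of the workshop itself: the new Plone 6 front end talks to the backend over a REST API (plone.restapi ships with Plone 6), and the same login can be scripted against the @login endpoint. The sketch below is a minimal Python example; the site URL, the admin/admin credentials, and the use of the `requests` library are all assumptions you would adjust for your own instance.

```python
import requests

SITE = "http://localhost:8080/Plone"   # assumed local Plone 6 site; adjust to yours
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

# Exchange a username and password for a JWT token via the @login endpoint.
resp = requests.post(
    f"{SITE}/@login",
    headers=HEADERS,
    json={"login": "admin", "password": "admin"},   # assumed demo credentials
)
resp.raise_for_status()
token = resp.json()["token"]

# Send the token as a Bearer header on subsequent requests.
auth = {**HEADERS, "Authorization": f"Bearer {token}"}
site_info = requests.get(SITE, headers=auth).json()
print(site_info["title"])   # the title of the Plone site
```

The later sketches in this transcript reuse this token; wherever they say "JWT from @login", this is the value they mean.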
Alternatively, you can click on your avatar, and that brings up a log out button, which you can see in this screenshot. So, let's look at the process of logging in and logging out. I'm going to go to the Plone 6 demo site, which is available on the internet at 6.demo.plone.org. Let me share my screen. I'm going to log in. Now, I clicked Login, but I could just as well have typed /login and it would get me to the same place. Then I put in my password and I am logged in. The process of logging out is similar: we go to my avatar, my user profile, and click Log out. And we're logged out. Let me log in one more time just to illustrate that there are other ways to do the same thing. One way is to just type /logout in the URL, and that will log you out as well. Okay, I'm going to go back to my slides. There's a little issue navigating there. So, here's an interesting question: what happens if you're already logged in and you visit /login again? And it would also be interesting to find out what happens when you put the wrong credentials in when you attempt to log in. So let's try that, just to get a little more familiar with the login mechanism. I'm going to head back over to the shared screen, and we'll see what happens if we try to log in again when we're already logged in. So, we're already within the site, navigating around. Now we're going to add /login. And indeed, it does present you with the login screen again. It doesn't mean you're not logged in; this page is the login screen, so it will show it to you again, and if you attempt to log in, it should just work. The other question was: what happens when you put in the wrong credentials? So let's log out, and we're going to put in the wrong credentials, a user that doesn't exist, and see what happens. As you see at the bottom of the screen: login failed, both login name and password are case sensitive, check that caps lock is not enabled. If you've used classic Plone, this is the same type of message that you would get. Okay, we are going to move on. Preferences. In Plone, it's possible to manage your profile and your preferences, and those are available under your user avatar. Once you click your user, you will be presented with a box similar to the one on the screen there. The only setting at the moment is language, but you can change your default language. I expect that in the future, as we move out of alpha, you will see other settings; for example, in the Plone Classic UI, one of the things you'd find under there is the ability to change your password. Profile. If you need to edit your profile and change your portrait, full name, email, et cetera, you can find that under the Profile option. So these are some of the things that you can manage, and just as an exercise, we're going to try that out. I am going to share this Plone page that I'm at. I already have a page ready for this exercise, so I'm going to use that, and I'm already logged in. I can set my profile; in this case, I can put in my name and my email address, and I can set a portrait. I guess I could steal the picture that we already have on the Plone site, the PloneConf one. For a demo, I suppose I don't have to use my own picture, but it is there. So, save image. Okay, let's do this again.
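While we are on profiles, a side note for anyone who wants to script this step: the full name and email fields edited above can also be updated through the REST API's @users endpoint. A minimal sketch, assuming the token from the earlier login example and a hypothetical user id of `davidbain`:

```python
import requests

SITE = "http://localhost:8080/Plone"                  # assumed site URL
TOKEN = "...JWT from the earlier @login sketch..."
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {TOKEN}",
}

# Update profile fields for a (hypothetical) user id.
resp = requests.patch(
    f"{SITE}/@users/davidbain",
    headers=headers,
    json={"fullname": "David Bain", "email": "trainer@example.com"},
)
print(resp.status_code)   # a 2xx status (typically 204) indicates success
```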
Okay, wonderful, nice. So the portrait shows up here now. And depending on how you're working with profiles, you may want to show user profiles with their photos, maybe on a page in an intranet or something like that. So, let us continue. At this point, we're going to take a short break and come back in about 10 minutes. You can set a timer; we may make this a little shorter, but we're just going to take a little break now and then come back. Content management with the new Plone user interface for Plone 6. It's always good to have a little definition, and I like to think about content management as not just the content: it also involves the users and even the timing. So this definition aims to cover all aspects of content management. In its simplest form, content management is: can a user make content private and public? But that's a very simple use case. There are certainly more involved use cases, and once you start thinking about them, you have to start thinking about the create, edit, publish, delete life cycle. Within that life cycle, you have to think about the people involved, so that's going to be your users. The content is your information, and sometimes the information can be meta, meaning information about your content. All of those things are important. The where is also important: content can be in several different places within your site. It's not simply that you dump it all into one giant folder; some amount of reason and logic can go into how you arrange your content. And the content life cycle accounts for publication. When is it going live? Is it scheduled to go live, or is it going live as soon as you finish publishing? When is it removed, archived or deleted? All of those are important considerations. And of course, who can do what? It's not just that you have users; users have different permissions. All of those are important considerations. So, in the Plone world, a content type is what distinguishes the type of information that you're looking at. There are other ways of distinguishing types of information. Another way that is sometimes used in Plone and in other content management systems is to use tags or subjects: you have different items, and they are categorized based on the tags. That's fine, but in addition to that, one of the things that is really good with Plone is content types, because every content type can be distinguished from every other content type. So it's a very important concept. Content types are added through the add menu, and as you can see here, we have a couple of different types of content. These are the common ones, and then we have some special ones among these, which are special because they can contain other content. A collection serves to aggregate content based on criteria, and a folder aggregates content that it holds within itself. They behave slightly differently, but effectively they both act in a way that is sometimes referred to as a container-ish manner. All right. So, we're going to do a little exercise where we talk a little bit about these content types and see if we can spot them. We have the very common ones: news items, pages, events, files and folders, and of course collections. We're going to browse around the plone.org website and, based on just looking at the URLs, make our best guess at what content type is behind each section.
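A side note before the exercise: the list of types the add menu offers for a given container can also be fetched programmatically through the @types endpoint. A sketch under the same assumptions as the earlier examples (local site, token from @login, and an existing /news folder):

```python
import requests

SITE = "http://localhost:8080/Plone"
TOKEN = "...JWT from the earlier @login sketch..."
headers = {"Accept": "application/json", "Authorization": f"Bearer {TOKEN}"}

# @types on a container lists the content types that can be added there --
# roughly the programmatic equivalent of opening the add menu.
resp = requests.get(f"{SITE}/news/@types", headers=headers)
resp.raise_for_status()
for entry in resp.json():
    if entry.get("addable"):
        print(entry["title"])   # e.g. News Item, Page, Folder, ...
```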
And it's okay if we're wrong. The point of exercise is to maybe look at the purpose of sections of the site and think and say how best could this have been done. And you can share the URLs you found. So, so let's head to clone.org. Okay. Here we are at clone.org. And let us browse around for a little while and see if we can kind of make sense of some of the things we're seeing. So, I'm going to go to this link here. It says, latest news. And as you see, within this link, there are several news items. Based on that, I think it's reasonable to think about this as probably a folder. But I could be wrong. It could also be a collection. Or it could be both. It could be a folder that has a collection. As you can see here, this is definitely a news item. It's a single item with news about clone. And it sits down in a folder. So, for sure, the 2021 news is more than likely a folder. That kind of makes sense. And we should be able to predict that there will be a folder for 2020, depending on when the structure started. We may even have 2019. And perhaps 2018. But we'll stop there. So, folders are a good way to organize things. And my guess is that what we're seeing here is a collection. And why do I say that? Because what's happening here is these collections or these items will probably change from months to months and from year to year. And there's probably a query that pulls the most recent news items here, which is a collection way of doing things. If these items need to be manually placed here, that would be quite inconvenient. And so, I suspect we have a collection here. Basically an aggregation based on a set of criteria. Okay. So, we're going to head back to our slides. And continue. So, one of the things we said, just to remind you, Plone has a folder-based approach. Content tends to be arranged in folders, just like you might see on your desktop or your laptop. Plone focuses very much on in-context management. And we'll talk about that a little more. So, you tend to navigate to where you need to edit. There are some systems where content is managed centrally in a dashboard. And things are not organized in folders. Interestingly, you can probably do both with Plone. You could set up a dashboard with items that need to be edited. And I've seen that done. However, it's almost more common to navigate to the news section in order to work with your news. And the same would be true for your events and so on. So, three key things that we looked at on the content management. The idea of content types, folder-based approach, and in-context management. Editing context. Editing items within the context where they exist. Right. So, now that we've looked at some of these principles, let's actually add some news items, some pages, some images, and perhaps a file. So, let's try that out and just see what that process looks like. So, here we are on my Plone site. And I am going to start out by adding a news item. Now, you can, for most things, most items, you can probably just add them right there. Right. So, I could click news item and it would be added right here in this context. However, I would much prefer to have my news organized a little more. So, at minimum, I want to have them in the news folder. So, that's what I'm going to do. I'm going to add my news to the news folder. So, news item. Let's borrow some news from the Plone site. So, I'm going to grab this picture. So, here's my headline. So, let's put our headline here. This would be the summary or the description. 
So, we're going to put that here. And here is the text. This is going to be interesting, because when I paste this I use a shortcut called paste and match style. Effectively what we've done is added all the text, and for now let's leave it at that. Let's add our lead image, and we won't put an image caption for now, but we'll save this. And we've created our first news item. Obviously it doesn't have all the styles and such, and we can fix that. So let's at least put in some headings. Where it says "some highlights", I can make that a heading. These are supposed to be bullet points all the way down to "use Dexterity for the Plone site root object", so we can make that a bullet list. And by the way, I did this on purpose: I decided not to highlight the whole thing, because it helps you to grasp that some styles are what we call block level, where the entire block is affected by the style. I think every style to the right of this line is a block level style. Other changes are only going to be made to what we highlight; that's sometimes referred to as a character level style. There may be another word for it, but it explains what it affects. All right, so I'm going to save that, and it's starting to look like something sensible. Let's go back to our slides to remind ourselves of what we want to look at next: adding just a normal page, and then images and files. So, let's go back and see what we can do. For this, I will probably add a folder; this folder will be called something like pages or articles. Your articles. And I'm going to save that. Just like the Plone Classic UI, with Plone 6 you'll notice that as you add content, at least if you add it at the top level, it shows up in the menu here. All right, so let us add a page to this folder. Let's see if we have any useful article, maybe something from Get Started. How about this, "Test drive Plone"? No, that's going to take us somewhere we don't want to go; all of this is taking us away. Let's see, maybe conferences, and then conferences and events. So how about something about the Plone Foundation? How about the foundation? This looks like it would be a page. Great, so let's do this. We have this wonderful image, so let's make use of it; we're going to add our image. Let's add some text here, and as you see, our page is taking shape. So we'll save that, and now we have within our website an article about the Plone Foundation. The picture is aligned to the left; let's see, we should be able to change the alignment. This setting will be found in the block menu. We're editing a block, and sometimes there's some context available right there in the menu. I'm not sure that's quite what we want to achieve, but it's an interesting look, so we'll work with it. Okay, so let's go back to the slides. We have actually added images already, but those were images associated directly with content types. There is also a way to add standalone images and standalone files, so let's look at doing that. For my images, I like to have a special folder just for media. So what I'm going to do is add a folder and call it media, and then in that media folder I can start thinking about images and things like that. We've downloaded a couple of images already.
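Before we look at those images in the next step, a brief aside on scripting what we have just been doing by hand: creating content through the REST API is a POST to the container, with "@type" selecting the content type. The sketch below keeps to title and description only, since in the new front end the body of a page lives in blocks rather than a single text field; the folder name, item title, and credentials are assumptions carried over from the earlier sketches.

```python
import requests

SITE = "http://localhost:8080/Plone"
TOKEN = "...JWT from the earlier @login sketch..."
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {TOKEN}",
}

# POST to a folder to create an item inside it; "@type" picks the content type.
resp = requests.post(
    f"{SITE}/news",                       # assumes a /news folder exists
    headers=headers,
    json={
        "@type": "News Item",
        "title": "Plone 6 workshop recap",                    # hypothetical headline
        "description": "A short summary shown on listing pages.",
    },
)
resp.raise_for_status()
print(resp.json()["@id"])   # URL of the newly created item, derived from the title
```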
Let's see what we got. All right. So that's one image. Notice it's located in the media folder. And it's in this breadcrumb here. It takes the form of the name. So in this case, I named this image. So it doesn't just have the image name. It has the name that I have assigned to it. Okay. And let's add a file. I think this website may have some kind of... So here's a budget. Let's download that budget. And we're going to add it to the same media folder. Just to show that we can. All right. Let's drag that over. Let's call it 2016 budget, which is already called anyway. And we're going to click save. So this file is now available from the website. Okay. At this point, we have looked at news items, pages, added images and files, standalone images, standalone files. And one question that comes up quite a bit is, can I cut and paste HTML from one page to another page? The answer is yes, for the most part, the HTML comes across as rich text. And when you paste it into your site, it gets converted back to that HTML. And another common use case that people ask about is, can I associate a link with a document that I've uploaded? Because if you want to make something available for download or whatever, for example, that document that we just uploaded a while ago, that PDF, we want to make sure that there is some kind of link. So we look at an example of how you might do something like that. So, I'm going to do a new share. And go back. All right, so that's our budget. It's in PDF. Here's another situation. Suppose you wanted to put up another budget, a totally different budget. Let's say the 2017 budget. So let's download the 2017 budget. Let's say we want to make reference to that budget from our homepage. And in the classic clone, it would be possible for you to create a link to content, even as you're working with it. So you might say, there is the 2017 budget. I'm going to see if we can link. All right, so in this case, there is no way to, at least with this alpha release to upload. However, we do have the 2016 budget. So let's at least make a link to that. Okay, so it looks like at least for now, for the alpha release, we will have to upload first and then connect. So that is, that's something that is a trade off at the moment, at least between the classic UI and the current version here. Okay. I've, I've said a couple times that clone is very folder centric things are placed in folders and organized. And it's, it's actually considered best practice to arrange the content in your clone site into folders. So it's not unusual to have all your photos in a photo gallery, everything related to your services in a services folder, and so on. And the, the, the hierarchy of the site is derived from that structure. So, I'm just reiterating that, and actually have a slide that captures with a little bit too much text, but it captures that idea. Okay. So something that you will recall from classic clone, if you were a classic clone user is the ability to change the state of content for publishing. And that is available as well with the new user interface. The approach is that you click the three dots at the bottom. And it brings up this menu. And then you can specify what state do you want to move your content into. Currently it's private. You may want to make it public or to give the option of sending it for review. And this can be done through that three dot interface. So, I'm just going to give you a couple of quick tips. 
If you're pasting content, sometimes the content comes with formatting that you don't want. A good solution is to add Shift to the paste shortcut: instead of just Ctrl+V you press Ctrl+Shift+V (Cmd+Shift+V on a Mac), and that removes any rich text when you're pasting. Of course, if you want the rich text formatting in, then sure, copy the formatting and paste it into the site, and it will look like the rich text where you got it from. Then we saw the process of adding links, and in fact linking to a file. In Plone, when you highlight text, you're provided with a context menu, and from that little menu you're able to select, for example, a link to a file. And beyond files, you can link to anything you can link to on the internet. So that is just reiterating what we've observed in the presentation about how to do your links. We also used the image block in order to add a photo, and I'll just remind you what that looks like; I'm just trying to get there quickly. This image was added as part of trying to follow the Plone Foundation site, or page, but it gave you a good idea of how images are managed inside the new Plone user interface: you can add an image, and then you can add it as a block. This is a block that's floated to the left, hence text is flowing to the right of it, but it's still a block. That's a little bit about adding images. One thing when you're adding an image: it's recommended that you add alt text, so alternative text. Interestingly, the new Plone user interface will add alternative text if you don't put it in, so we'll just get alternative text from the name of the document. I'll show you that quickly, just to show you how the alternative text thing works. Here we are. Let's say we're going to add some image to this. So what we're going to do is say add image; let's see if we can find one. Here's an image that I made for this project, it's just a pretty pattern. And I want us to look over here and see what's going on: notice that the alternative text is actually the same as the document title. It defaults to the document title, and I might want to give it a different name. So that's how we deal with alternative text. Okay, so I think it's a good time to go on another break; we're going to take a break a little bit early, help me catch my breath a little. When we come back, we're going to continue in the content management section, and we're going to be looking at bulk uploading: how do you do a bulk upload of files and images, how do you navigate around and cut and paste and move things around, and how do you change the view of something, similar to what you might have seen in the Plone Classic user interface? So take a break and then we'll come back and continue content editing. Just perhaps we should review. By the way, Tomislav, were you able to see the slides, maybe to catch up? Let me see — I'm just not sure why the camera is not working for me, but let's stop the share. If you look at my share, you'll see that at the bottom it has the address for the slides. Maybe what I should do is — well, I'm not seeing an option to pin them, but if you click on my video and pin it, you'll see the slide address. I could also just type it in the chat. Okay, great. So you can always go there if you want to get access to the slides. Okay, so I'm going to bring the slides back up.
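Picking up the publication states mentioned just before those quick tips: the state changes made through the three-dots menu can also be triggered through the REST API's @workflow endpoint. A hedged sketch, assuming the default simple publication workflow (where the publish transition id is `publish`) and the hypothetical news item from the earlier sketch:

```python
import requests

SITE = "http://localhost:8080/Plone"
TOKEN = "...JWT from the earlier @login sketch..."
headers = {"Accept": "application/json", "Authorization": f"Bearer {TOKEN}"}

item = f"{SITE}/news/plone-6-workshop-recap"   # hypothetical item from the earlier sketch

# Inspect the current state and which transitions are available from it.
wf = requests.get(f"{item}/@workflow", headers=headers).json()
print(wf.get("state"))                                    # current review state
print([t["title"] for t in wf.get("transitions", [])])    # e.g. ['Publish', ...]

# Fire the "publish" transition.
resp = requests.post(f"{item}/@workflow/publish", headers=headers)
print(resp.json().get("review_state"))    # expected to be "published"
```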
And see if we can just quickly review where we have reached. Of course, here are the slides. So we're about to go into user groups and management. But it would be helpful to just take a quick look at where we're coming from. So we started out just doing a quick overview and saying what clone is. And went through a little demo of using it with the new user interface. And then we looked at content management. The next section is on users and user and group management. However, there is a little bit of content management left. So we're going to finish up the content management section. And then look at users and groups. And the next thing after that is going to be a site building challenge. I'm going to give you access to your own. Plone site, and we're going to see what we can get done. And maybe I would say half an hour to 40 minutes. Okay, so let us get back into content management. I think the last thing that we were looking at was using folders or was it publishing content. I think it was publishing content. Right. So we had spoken about the process of publishing content and so on. Right. Yes, we also spoke about this. And adding links. And using the image block. Right, so we're about to look at how do we do a bulk upload. So let us say that you already have for your clone site. And we do have a bunch of images. And we do have a couple images at the moment. I'm just going to grab a few more. Okay. I'll just grab one more image. Okay, we have been tasked, let's say we've been tasked with the job of maybe uploading a set of images into a folder. Here's how we'd go about doing that. And then we're going to head to one of our instances. I actually have one here that I can work with. I want to create a folder. So add folder and I will call it images. Great. And we're going to use what is called a contents view. So if images is a folder, we should be able to view the contents of that folder. And that's what we're doing. And there is a button called upload. So what this, what you see here is a listing. Of course, we don't have any images at the moment. And we're going to click upload. And then we're just going to drag the images that we want to upload to this box. And this is just all images that I've downloaded so far while working on the site. And I'm just going to proceed to do the upload. As you can see, all of the images have been uploaded. And I can come out of contents view. And I see the images. Now that's probably not a nice way to view the images. So there are a couple other ways we could go about doing it. We could change it from listing view to album view. And we do have a couple bugs still, as you can see album view, the proper behavior of album view is that you'd get to see a preview of the images. And that is not happening at the moment. And I don't want to put it, I don't want to blame, clone six. However, it's, it's very likely the fact that this is an alpha version, why we're not seeing this. I do have access to the backend API. And it does. So I do have a classic clone interface. So you can see that in classic clone, the images that I just uploaded are showing. I see there's a question or comment. Okay. So Thomas love is wondering if maybe it's, it's still processing. I'm not sure if that's the issue. I'd, in fact, I doubt it, not not for such a small number of images. So we definitely have some variation between the new user interface and the old interface and I'm sure with these blocks, these bugs will be cleaned up as we move from alpha into a stable release. Right. 
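A side note on bulk uploads for anyone who would rather script them: each image can be POSTed as an Image object with base64-encoded data, which is what the following sketch does for every JPEG in a local directory. The folder URL, the directory name, and the credentials are assumptions; adjust them to your own site.

```python
import base64
import mimetypes
import pathlib
import requests

SITE = "http://localhost:8080/Plone"
TOKEN = "...JWT from the earlier @login sketch..."
FOLDER = f"{SITE}/images"                       # assumes the images folder created above
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {TOKEN}",
}

# Upload every JPEG in a (hypothetical) local downloads directory.
for path in pathlib.Path("downloads").glob("*.jpg"):
    resp = requests.post(
        FOLDER,
        headers=headers,
        json={
            "@type": "Image",
            "title": path.stem,
            "image": {
                "data": base64.b64encode(path.read_bytes()).decode("ascii"),
                "encoding": "base64",
                "filename": path.name,
                "content-type": mimetypes.guess_type(path.name)[0] or "image/jpeg",
            },
        },
    )
    print(path.name, resp.status_code)   # 201 Created for each successful upload
```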
But if we go to the content view, we know for sure that the images are definitely there. And we can view them. Similarly to bulk upload of images, we can do bulk upload of files as well. And I leave that out. But just to say it is possible. Just to review, it would be a matter of going to a folder, going to the contents view, and then using the bulk upload button within that view, and it will accept images, it will accept files as well. Okay. Now there are some situations where you have content in one place and you want to get it to somewhere else. And this is another thing that can be done as a bulk action. So perhaps you may want to move some of the images to somewhere else. So let's look at that. Let's say you have a second folder. That folder's name is, I don't know, more images. We could go to the images folder and go to the contents view and select the images that we are interested in. And there's a cut. And then over more images, we can go to the contents view again. And paste. So now we have images in both folders simply by using a cut and paste. So the first folder and this is the second folder. We did something earlier when we created that folder and added those images. We actually used the display menu to configure how contents are displayed. Let's go back and see how that's done. So for the images folder, it's a different view than for the more images folder. This is the default listing. And this should show us at least an album view of images. It actually does work with the classic user interface, which we're seeing here. So we know that it definitely works. However, it's probably a bug that needs to be sorted out so that it will work here. And how did we do that? We went to the more menu. And under there, we were able to change the view. And then we can change the album view. Let's look at some review. So this is some review. Now, against on classic clone, what would happen is you'd see thumbnail images. And when I was testing this, I did come across this issue where the thumbnail images were not working and have to troubleshoot and figure out whether it was something I set up wrong or whether it's related to incoming features for clone 6. This note here, setting an item as a default page, this is, this was a common approach and pattern with classic clone. It's not as much encouraged with the current, well, the clone 6. So I will leave it at that, but you can certainly research setting an item as a default page for yourself. Okay, so now we're going to be looking at publication. Setting restrictions, by the way, is not possible in the clone 6 user interface. The restrictions approach would allow you to have a folder, for example, a folder of images, and you'd be able to say only allow images in this folder. That capability would typically be found under the Ad menu, and then there'd be an option called restrictions. I have not seen a comparable option, but it may be there, and I guess we'll have to reach out to others in the community to find out. Okay. So, in this exercise, we want to create a listing block that will allow us to see a preview of items that exist in other locations throughout the site. So I'd love for you to follow along. So I'm going to send you a web address where you can log in. So just indicate if you're seeing that web address in the chat. So that address is set up with version of clone 6, and you can go ahead and log in. I'll send you your credentials. So you can try it out and follow along. Oh, there. Apparently there was a problem. 
Let me check to make sure that the site is behaving. Let's see. Oh, okay. Well, I have more than one instance, so if one is down, maybe the other isn't. So you could try this one. What we're going to be doing is — you can just follow along with what I'm doing, using the credentials that I've shared with you. Thankfully I made several instances just in case one of them failed; so we did have a failure, but we can keep moving forward. All right, so here's what we're going to do. We are going to create an image listing. Well, it's not really an image listing — these are news items. But how do you do this listing? I'll show you what it looks like. So I'm editing the homepage of this site. We already have one here, so that's probably not the best example, but it's still worth looking at. What we have here is called a listing item, or a listing block more accurately. If you add a listing block, obviously it has to only find news items or whatever you want. Let's say in my case I wanted a criterion of type, maybe images, so it would only find images throughout the site. You have different variations of how it shows this stuff, so you have something called an image gallery, and you end up with something like this. It's interesting that this works, but the summary view does not. So, do you mind sharing your screen, Tomislav, and we can walk through how we would go about doing this? Obviously, we need to have news items, which means we'd have to have some images. For simplicity, you can grab some news item text from the plone.org website; we're kind of recreating that anyway. And I'll give you, I don't know, maybe six minutes to try it out. Oh, sorry, I am blocking your screen share, so I'm going to stop my screen share, and I also need to make sure that you have permission to share. Right, so you do have permission to share. I know there's an issue with your mic, so you may have to type or something. Okay, great, you're logged in already. So the way it works is, when you press Enter inside one of those boxes, it creates a new box. Press it a couple of times when you're in a bullet list — you have to kind of come out of the bullet list, right. There's a little plus sign next to where your cursor was a while ago. Right, there you go, and there we could add an image. You should be able to just drag and drop images and they'll upload. So this is not the listing that we want yet, but it is a block — we've added an image block here, and there are some options there: you can change the text, you can change the size of the image and things like that. What we want to do is actually add some news items. To do that — it doesn't matter whether you add the listing first or second; the important thing is that a listing should be there and news items should be there. I suggest you start by adding news items. You have to go to the top where the little save button is and save it. Right, you're going along. And then what you're going to do is head to the news section and add some news, using the add button on the left. You can get your pictures by going to plone.org and you can get texts from there; there's lots of Plone news there. This is a shortcut to it. You can get a lot of news from there. So good.
Good place to grab some texts for testing. What I like to do is right click on one of the images in the on the site and just download it. Just save image as Basically in real life, you if you're managing a clone site, you may have a content team that goes out and takes the photos or you've been provided with photos or you went out and got pictures yourself or created them. But for the sake of this exercise, we're just lifting images from the clone site. You'll notice that when you paste, it paste the format in and it also for some of them, it does this kind of highlight. It's like a call out. If you click inside of one of those bits and highlight some of the text, you will see that the call out is highlighted. If you click on the call out, it will go back to the normal default text. And you can organize the text however you want. Okay, so save it at the top left and we have our first news item. Great. Then you can go at it again create a second news item. It should be quite comfortable by now. So almost like you've done this before. Right, so this news item doesn't have a picture. And that's, that's all right, maybe it's a good comparison. We could go to the homepage and add a listing block. It's similar to adding an image but you'll see listing block there. So you'd actually click the edit at the top left. So, right, so we're now editing this page, and we can choose where we want to add that listing block. So remember to initiate a block, you have to kind of have your cursor at the end of another block and press enter. Right, and then you're going to click the plus. You'll see listing. And then you can change the characteristics of the listing over at the block side menu. So in the block side menu. You can say how the style of variation you want the criteria you want. So in this case, we probably want a criteria. So remember it's fine. And for the criteria, we want maybe type, use type and say everything that is news. So, right, there we go. And then we can save at the top left. So you've successfully added a listing. And we've also discovered something interesting. It adds a placeholder. Which may not be desirable unless it's like your brand logo or something like that. And that's out of the scope of this exercise, but those type of behaviors can be changed, like the default image, a placeholder image, and things like that. Or even whether it uses a placeholder image, you'd need to go in and change some JSX inside of Volto, which is the front end that we're looking at. Okay, do you have any questions? Or shall we move on? Okay, great. So we'll move on. All right, so let's jump back to my screen share. So we got something similar to this by going through the exercise of adding a listing to a page. Now we're going to move on to user and group management. So in this one, Plone allows users to have different roles. And in this screenshot, for example, the user has the role editor, a user called Jan Smith. And in this case, we just decided to make them an editor. So what's the process of typically for working in this way? Well, typically we start by creating the groups that we want. And one common thing that we often do is we set the permissions for that group on certain folders. So in other words, we may create a folder which is otherwise inaccessible to everybody and then make it accessible to our group. And then, usually, we add users to the group. And by doing that, it makes it possible for user of that group to access things in a folder. 
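As an aside, that create-a-group, set-permissions, add-users workflow can also be scripted through the REST API's @groups and @users endpoints. The sketch below reflects the payload shapes as I understand them from the plone.restapi documentation, so treat it as an assumption-laden sketch rather than a recipe; the group id, user id, and password are made up for illustration.

```python
import requests

SITE = "http://localhost:8080/Plone"
TOKEN = "...JWT from @login, for an account with Manager rights..."
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {TOKEN}",
}

# 1. Create the group (hypothetical id).
requests.post(
    f"{SITE}/@groups",
    headers=headers,
    json={"groupname": "board_members", "title": "Board Members"},
)

# 2. Create a user.
requests.post(
    f"{SITE}/@users",
    headers=headers,
    json={
        "username": "board.member",
        "email": "board.member@example.com",
        "password": "only-for-local-testing",
    },
)

# 3. Add the user to the group (assumed payload shape: true adds, false removes).
requests.patch(
    f"{SITE}/@groups/board_members",
    headers=headers,
    json={"users": {"board.member": True}},
)
```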
And so to do that, this screenshot is actually from an older version of Plone, the classic user interface, but it's quite similar to what we have in the new interface. So we start by, after the group has been created, we would assign permissions. That group may be able to view, add, edit, or review within a particular folder. And once we've set that, those permissions, it gives that group certain powers on that folder. Now, if nobody is in the group, then it's that group is useless. So it's typical then to put users, add users to a group. And I've given you a general idea of the process. To set up users, you would go to the group or go to the user. Either one works, actually. And if you're at the user, you can specify the groups that the user needs to be a part of. And if you're in the group, you search for the user and add them to that group. So in essence, the group mechanism is one way of giving permissions to users to do things. There is another way of doing that. And this is around Plone's workflow system. And the workflow system can work in collaboration with the groups. The key concept here, which we won't go into great detail, is that content can be given different states. So content could be, for example, private. And when content is private, then only an administrator or an editor or whatever the permission is, can interact with that content. But content can move from private into other states, such as public in the simplest situation. But in more complex situations, there could be multiple states. Plone, for example, is used in lab information management platforms. And if you can think of a sample at a lab, whether that's for food or for medical purposes, a sample passes through several stages. And again, it may be something as specific as has this reagent been added to the sample. And that would be a stage within the workflow. And if the reagent has been added, then a special group of users now have permission to view that sample. For example. So that is the concept. Perhaps we could try out creating groups and users and putting permissions on a folder. I can demonstrate it and you could probably follow along. So let's do that. So I want to imagine a folder which holds documents for maybe people who are members of the board. So, I want to create a section, a special folder in the root of the site. I'm going to call it board. Now I want to do something special here. I'm going to set board to be never visible in the navigation. That little option will prevent it from showing up in this menu at the top. And it doesn't matter whether I'm logged in or not. The board does not show up in the menu at the top. So we now have a top level folder called board. And we're going to create a group called board. And that group will then be allowed to access board. So, we will call this board members at that group. So we will call this board members. Board members. So we'll leave it at that. So we now have a group called board members. I'm not going to give them any special permissions. What I'm going to do is I'm going to give a special permission only on the board folder. So let's save that. And let's head to the board folder, which is well hidden. Here it is. Now, remember we said that the procedure involves setting sharing on the folder. That means the group is going to have some kind of special sharing privilege and that happens underneath here. So if you're viewing the contents, under the more menu, there will be an option to manage sharing. So we're going to search for our board members. 
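One more aside before the demo continues: the "can view" grant being set up here corresponds to the Reader role, and the same local-role change can be made through the @sharing endpoint. A hedged sketch, assuming a /board folder and a board_members group like the ones in this demo:

```python
import requests

SITE = "http://localhost:8080/Plone"
TOKEN = "...JWT from the earlier @login sketch..."
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {TOKEN}",
}
folder = f"{SITE}/board"   # the (hidden) board folder from the demo

# Grant the group the "Reader" role ("Can view" on the sharing screen), on this folder only.
resp = requests.post(
    f"{folder}/@sharing",
    headers=headers,
    json={
        "inherit": True,
        "entries": [
            {"id": "board_members", "type": "group", "roles": {"Reader": True}},
        ],
    },
)
print(resp.status_code)   # a 2xx status (typically 204) indicates the roles were saved
```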
Let's see if we can find that group. And we're going to say that anybody who is a board member can view this folder. There are others who can probably view this folder too; however, those would be administrators and so on, because the folder itself is private. Can we confirm that? We should be able to. So this is the board folder; let's see its state. It is private. So what we want to do now is add some documents. Maybe that budget that we were working with previously — we're clearly on another site, but we could bulk upload whatever we need to board. We might not have that exact PDF here, but maybe there's something similar we can use. Now we can test this. Remember, this document is in the board folder. So if I log out and then attempt to go back to look at that document, now it says unauthorized: I don't have enough authorization to see that document. I can see it as a full administrator, so if I go back there this time as an administrator, I can see it. So the last test we want to do: we've created this folder, we've put certain permissions on it, and we've seen that an anonymous user can't access it. We could do a test to see whether a logged in user that's not part of the board can't access it, and then finally whether a user who is part of the board can see it. The easiest way to do that is probably to create two users, one called board member and one called not board member, or something like that. So we'll create a user called board member, with an email address at example.com and a password — this is for testing, so we won't go too crazy with the password. This is a board member, so let's add them to the group, board members. There, it is added; board member has been added. Let's add another user called not board member. That user will not be in the group. Not board member, with something like sample12345 — a horrible password, you wouldn't use that in real life. Oops, I'm not adding roles, I'm adding groups, and in this case they're not a board member, so we don't need to add anything there. Wonderful. So we now have a board member and a not board member, and we have a folder for board members. Let's start by logging ourselves out of this site, and let's see what happens if we log in as not board member. There, it remembered the name. So this is not board member. They can't really see much, but we know there is a folder called board, so let's go to the folder called board. Oh — because not board member is not a board member, not board member is not allowed to see that folder. What about board member? So let's log out of not board member and log in as board member, and see what happens. I'm now logged in as board member. I was told that as a board member I can go to board. And — also not authorized. So there's something wrong with our setup. Let's take a quick look and see what we may have done wrong. Clearly we locked out everybody, including the board member, so we were a little too strict. Let's log back in as an administrator and see what we may have done wrong to prevent our board members from getting in. We're going to take a peek at the sharing: board members can view. So let's go back to board members. I'm not sure what else we should have put there. This is an exercise I should have practiced to make sure I was comfortable with it. But what should happen here is: even though it's private...
Because board members. I think I did. I didn't put the board member in the board member group. I think I did though. So let's double check on that. I'm almost 100% sure I did that. So board member. I mean, this is mostly just for my sanity to just check. So here we have board member. We only have an option to delete. So that's not so good. What about users and groups? What about groups? What about board members? So we have board members. So another way of doing this, which is not really what I wanted is we could give the board member. Anybody who's a board member. Administrative permission. And then every board member would be able to do administrative stuff. But it certainly would be a quick fix if you're not worried about. Ben see in other things. Unfortunately, the, the interface is a little bit limited. So we're still with the new user interface. And I think what we'll do is we'll, we'll skip this for now. And move on. But this is the principle. So we have about an hour left. And my suggestion for the hour is as follows. I think we should take half an hour to try out a challenge. And while you're going through the challenge. So you can keep your eye on categorization date settings. I use them. Settings to hide. Board from the. Front page. But there are a lot of other things you can do with it. Categorization, for example, you can use it to tag your content. And you can use it to tag your content. And this is our goal. We want to do our version of clone.org. There are some limitations to pulling that off with just the out of the box version of clone six. And so we will accept that. We're not trying to clone the site, but we're trying to get some of the elements from clone.org. For example. So if we go there. And we see upcoming events. Oops. Well, not too many upcoming events, but we do have some. Well, we do have the 2021. Conference. So yeah. So we should keep that in mind. Let's see what we get at all events. Right. All future events. But perhaps all past events, right? So we could get a lot of content from here. Maybe grab two events. And. What else? Create the volunteer section, the volunteer section of the site. I wouldn't even ask you to. Create the whole of that. But it's, it's the section called community. So. Again, we're not going to be able to necessarily get it in this structure without doing some modifications to the theme, but we can get a listing. So certainly a summary listing. And again, we only need to add maybe two or three of these sections. And then finally. Create an image gallery, which you've seen an image gallery done with the project. So it's 1107. Let's see if we can get it done. By 1130. So I'll do a check in at 1130. I'm going to grab a glass of water. Are you up to the challenge, Tomislav? Okay, great. So I will set a timer. Let's make it a. 20 minute timer. And we'll do a check in. After the 20 minutes. Okay, so we have a timer now. And I will mute in. Okay. I can't change the view. It's only on event view and no categorization option. Right. Right. Right. Right. Right. So as it is today, one of the challenges, and I think I can replicate what you're talking about. I, let's see, let me add an event here. Just to. Just to. Illustrate the problem. So. So, I'm going to do some events. Which ideally I should probably just make them public. So. Two events. Okay. That one is public already. And yes, there's only an event view. The event view is specific to the event. And so. So. I'm going to do more. Also, you said something about no categorization option. 
So let's just look at that. There is a categorization option. For example, you could say this event is a sprint, so I'm going to tag it as such, and now it's categorized as a sprint. Maybe both of these events are sprints, and maybe the other event is a conference as well as a sprint, so maybe we should give it another categorization: conference. There we go. All right, so yes, I do agree, we can't change it here. However — and that's our 20 minutes — what if, for this site, or this section, we add a listing? The criteria here, by type, will be event. So we have our events, and we could give it a summary view, or maybe not, and then we would have some type of an events page. Of course, ideally we would want a way to take that page and make it the default view of the events folder, but there's no setting for doing that. So I don't know if that helps to give some direction. And we could even filter it to say sprint events only. Okay, great. So again, one of the differences, at least currently, with the new interface is that there is no provision for setting a page as the default view for a folder. I think it should be possible to edit a folder as a page at some point, and then it would actually present you with a section for the text of the page and so on. That is not the default behavior at this point, but if it were, everything that we might be doing on the summary page we'd have the ability to do on the folder itself, to give a preview and things like that. Okay, so — are you able to show anything else, or was that kind of the hiccup that you had, Tomislav? So we've created an events section. How do we go about creating a volunteer section? For me, I would probably leave that as a top level page. The section is actually called community. So I would go to the top of my site and add a new page, and I would call it community. A community page. And then, for the sake of time, I'll just put two entries. So there's a membership committee; I'm going to use a paste that doesn't carry across any formatting. So, a membership committee. There are a couple of ways we could do this. We could have individual documents and then create a listing, which is probably the better way to do this. What I'm doing right now is kind of manually building out the layout, which is not so great because it doesn't scale if we keep adding more community options. So this is very much a short-term approach. So we have added two things. We could then have a corresponding community folder, which is hidden — let's exclude it from the navigation. Ah, here is the trick that we need: pages are actually able to hold content. So we created a community page, and inside of that community page we can actually put content; we could then put our community content there. For the sake of time, I'm just going to copy some news items and put them over there. Okay, copy, and now I want to paste. So I have this kind of fake community content, and let us change the state of all of that content to published. All right, so this is our community page, which doubles as a folder; that too we're going to make published. And this is our community content on our community page. We could do a listing, and that listing, instead of by tag or by type, could be done by location. So, location.
And in this case, the location is community. We could use an absolute path; let's just use community. So anything in community should begin to show up — it's now filtering on location, community, and we see the community content. So now if we click save, we have a listing of the things that we want. Not perfect, but it gives us a good idea of the direction that we could take it in. And then finally, the last exploration is creating an image gallery. For that, it's a matter of having all the images. Let's add a page, we'll call it our images, and we'll do a listing. Our criterion will be by type, and we will select anything that is an image, with the image gallery variation. There we go. Now, because these pages can hold other things, we could take this "more images" folder — but actually we don't need to do anything, because the folders that we created are already being picked up; it would just be a matter of moving the folders themselves if we wanted to. Let's see what happens. Okay. All right, I think we've probably exhausted what I can go through for this session, but just to go back over what we did: we can create image galleries. One way of creating sections now, instead of the folder, is that we can create a page and use that page as a holder, which is different from how we did it before, where we used folders and had display pages. Now the page itself is both the display and the folder. That's interesting, and that's another aspect of getting in and out of things and the folder based approach. At this point we should be able to work with blocks, know what a content type is, and understand the benefit of organizing things around content types. And in-context editing: a lot of the work that we do is done where the objects exist, and this is a different paradigm from having a dashboard where you do everything. And finally, pages are actually containers. That's something I may have been aware of but didn't pay attention to, and it's actually very pivotal to the new approach. It means that you can create a page and then fill it with content, and it acts as its own landing page. That's actually quite useful to be aware of. So where do we go from here? Just to wrap up: in the future, there are a few more things to get comfortable with — working with add-ons and with theming, so basic customization with theming, and creating your first add-ons to add more functionality. But even before creating your own add-ons, there is a whole set of existing add-ons. In fact, there is a website, an "awesome" list, which is a good place to go if you need to see what is available today for the Plone 6 front end. There's a whole bunch of add-ons for doing different things, so that's going to be worth looking at, and that's going to be a great place to go. And I think that's about it. We are done with the training for today. Thank you for your time, and go and learn some more Plone 6 and start making some useful things with it.
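One last aside for readers who want to poke at the finished site programmatically: the criteria a listing block uses (type, location, tag) map fairly directly onto the REST API's @search endpoint, so a listing like the community one can be reproduced outside the browser. The site id "Plone" in the path and the other parameters are assumptions to adjust for your own site:

```python
import requests

SITE = "http://localhost:8080/Plone"
headers = {"Accept": "application/json"}   # anonymous search only sees published content

# Roughly what the listing block on the community page is doing:
# everything of type "News Item" located under /community.
resp = requests.get(
    f"{SITE}/@search",
    headers=headers,
    params={
        "portal_type": "News Item",
        "path": "/Plone/community",   # assumes the Plone site id is "Plone"
        "sort_on": "effective",
    },
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["title"], "->", item["@id"])
```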
Trainer: David Bain. In case you are unsure about any of these terms, we will do a quick review BEFORE going on to the rest of this workshop. Topics covered: a little background, principles and concepts, logging in and out, preferences and password management, folder management, and the basic publication workflow. We will not look at theming or customisation.