
23rd International Conference, FOSSACS 2020 Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020 Dublin, Ireland, April 25–30, 2020, Proceedings


ETAPS 2020 was the 23rd edition of the European Joint Conferences on Theory and Practice of Software. On behalf of the attendees of ETAPS 2020, I would like to thank all speakers for their inspiring and interesting talks.

Learning of Flocking Controllers

1 Introduction

In this setting, flocking is achieved when the agents jointly minimize a system-wide cost function. We presented centralized and distributed solutions to achieve this form of "declarative flocking" (DF), both formulated in terms of model-predictive control (MPC) [2].

Fig. 1: Neural Flocking Architecture

2 Background

Model-Predictive Control

A centralized model assumes that complete information about the flock is available to a single "global" controller, which uses the states of all agents to compute their next optimal accelerations. The first term J(k) is the model-specific centralized cost, evaluated over T control steps starting at time step k (this embodies the predictive aspect of MPC).
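The receding-horizon loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's controller: the pairwise cost, the desired spacing `d_star`, and the random-shooting optimizer are all stand-ins for the MPC solver the authors use.

```python
import numpy as np

def flock_cost(pos):
    """Hypothetical pairwise cost: squared deviation of each
    inter-agent distance from a desired spacing d_star = 2.0."""
    d_star = 2.0
    n = len(pos)
    cost = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            cost += (np.linalg.norm(pos[i] - pos[j]) - d_star) ** 2
    return cost / (n * (n - 1) / 2)

def centralized_mpc_step(pos, vel, horizon=5, samples=64, dt=0.1, rng=None):
    """One receding-horizon step: sample candidate constant accelerations,
    roll each out for `horizon` steps, keep the cheapest first action."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = pos.shape[0]
    candidates = [np.zeros((n, 2))]          # always include "do nothing"
    candidates += [rng.normal(0, 1, (n, 2)) for _ in range(samples)]
    best_a, best_cost = None, np.inf
    for a in candidates:
        p, v = pos.copy(), vel.copy()
        total = 0.0
        for _ in range(horizon):             # predictive rollout: the T steps
            v = v + dt * a
            p = p + dt * v
            total += flock_cost(p)
        if total < best_cost:
            best_a, best_cost = a, total
    return best_a, best_cost
```

At each control step only the first action of the cheapest rollout is applied, and the optimization is repeated from the new state.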

Declarative Flocking

In a distributed setting where an agent's knowledge of the behavior of its neighbors is limited, an agent cannot compute the exact future behavior of its neighbors. The cost function is normalized by the number of agent pairs, |A|·(|A|−1)/2; as such, the cost does not depend on the size of the flock.
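The effect of that normalization can be seen concretely: averaging a per-pair term over all |A|·(|A|−1)/2 unordered pairs keeps the cost on the same scale regardless of flock size. A small sketch, where `term` is a placeholder for any per-pair cost:

```python
from itertools import combinations

def normalized_pair_cost(pos, term):
    """Sum a per-pair cost over all unordered agent pairs and divide by
    the number of pairs |A|*(|A|-1)/2, so the value is size-independent."""
    pairs = list(combinations(range(len(pos)), 2))
    return sum(term(pos[i], pos[j]) for i, j in pairs) / len(pairs)
```

With a constant per-pair term the normalized cost is the same for 4 agents as for 40, which is exactly the size-independence the text claims.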

3 Additional Control Objectives

The control law for a predator consists of a single term that causes it to move toward the center of the flock with maximum acceleration. The MPC cost functions used in our study of neural flocking are weighted sums of the cost-function terms introduced above.

4 Neural Flocking

Training Distributed Flocking Controllers

For obstacle avoidance with target seeking, we use CMPC with the cost function given in Eq. The position, velocity, and acceleration of the predator are denoted by ppred, vpred, and apred, respectively.

5 Experimental Evaluation

  • Preliminaries
  • Results for Basic Flocking
  • Results for Obstacle and Predator Avoidance
  • DNC Generalization Results
  • Statistical Model Checking Results

We use this algorithm to obtain a joint (ε, δ)-approximation of the average convergence rate and average normalized convergence time for the DNC. While the results for the convergence rate are, as expected, numerically similar to those in Table 2, the results in Table 4 are much stronger, because they come with the guarantee of being (ε, δ)-approximations of the actual mean values.
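An (ε, δ)-approximation of a bounded mean can be realized with a Chernoff–Hoeffding bound. The helpers below are a generic sketch of this idea, not the specific algorithm used in the paper:

```python
import math

def hoeffding_samples(eps, delta):
    """Smallest N such that 2*exp(-2*N*eps^2) <= delta: with N i.i.d.
    samples in [0, 1], the empirical mean is within eps of the true
    mean with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def monte_carlo_mean(run, eps, delta, rng):
    """(eps, delta)-approximation of E[run] for a [0,1]-valued simulation."""
    n = hoeffding_samples(eps, delta)
    return sum(run(rng) for _ in range(n)) / n
```

For ε = 0.01 and δ = 0.05 this already requires 18,445 simulation runs, which is why statistical model checking results are usually reported with explicit (ε, δ) parameters.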

Fig. 3: Performance comparison for basic flocking with collision avoidance, averaged over 100 test runs.

6 Related Work

In [9], reinforcement learning and flock control are combined with the objective of predator avoidance, where the learning module determines safe spaces in which the flock can navigate to avoid predators. However, their approach to predator avoidance is not distributed, as it requires a majority consensus of the flock to determine its action to avoid predators.

7 Conclusions

In [7], an uncertainty-aware reinforcement learning algorithm is developed to estimate the probability of a mobile robot colliding with an obstacle in an unknown environment. In [4], an approach based on Bayesian inference is proposed that allows an agent in a heterogeneous multi-agent environment to estimate the navigation model and the goal of each of its neighbors.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

On Well-Founded and Recursive Coalgebras

The corresponding concepts are those of well-founded and recursive coalgebras for the endofunctor, which first appear in the work of Osius [22] and Taylor [23,24]. Taylor then proved the general recursion theorem that all well-founded coalgebras are recursive, for endofunctors (on sets and on more general categories) that preserve inverse images.

2 Preliminaries

Proposition 2.3 [27]. For every set functor F there exists an essentially unique set functor F̄ which coincides with F on non-empty sets and functions and preserves finite intersections (hence monomorphisms). For each object A, we denote by Sub(A) the poset of all subobjects of A (represented by monomorphisms s: S ↣ A), where s ≤ s′ iff there exists i with s = s′ · i.

3 Recursive Coalgebras

[11] showed that recursivity semantically models divide-and-conquer programs, as illustrated by the example of Quicksort. Note that the last equation reflects the idea that Quicksort is a divide-and-conquer algorithm.
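Quicksort's divide-and-conquer structure can be made explicit by separating the "divide" step (the coalgebra) from the recursion that consumes it. A toy Python rendering of this separation, not the categorical construction itself:

```python
def divide(xs):
    """Coalgebra step: a non-empty list splits into (smaller, pivot, larger);
    the empty list is a base case."""
    if not xs:
        return None
    pivot, rest = xs[0], xs[1:]
    return ([x for x in rest if x < pivot], pivot,
            [x for x in rest if x >= pivot])

def quicksort(xs):
    """Structured recursion over `divide`: recurse on the two sublists
    the coalgebra produces, then recombine around the pivot."""
    step = divide(xs)
    if step is None:
        return []
    smaller, pivot, larger = step
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

The recursion is well-defined precisely because each `divide` step produces strictly smaller lists, which is the intuition behind "well-founded implies recursive".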

4 The Next Time Operator and Well-Founded Coalgebras

It is well-founded if its state-transition graph is well-founded (i.e., has no infinite path). Thus, a coalgebra (A, α) is well-founded if there are no infinite paths in its canonical graph.
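For a finite coalgebra, the canonical graph has an infinite path exactly when it has a cycle, so well-foundedness reduces to cycle detection. A small sketch, representing the graph as an adjacency dict (an assumption of this illustration):

```python
def is_well_founded(graph):
    """True iff the finite graph has no infinite path, i.e. no cycle.
    Standard three-colour DFS: an edge into a grey node closes a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in graph}

    def dfs(v):
        colour[v] = GREY
        for w in graph[v]:
            if colour[w] == GREY:        # back edge: cycle found
                return False
            if colour[w] == WHITE and not dfs(w):
                return False
        colour[v] = BLACK
        return True

    return all(dfs(v) for v in graph if colour[v] == WHITE)
```

A cycle yields an infinite path by looping forever, and conversely an infinite path in a finite graph must revisit a node, so the two conditions coincide for finite state spaces.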

5 The General Recursion Theorem and its Converse

For every set functor preserving inverse images, the following properties of a coalgebra are equivalent.

Taylor [23, Cor.9.9] showed that for functors preserving inverse images, a terminal well-founded coalgebra is an initial algebra.

6 Closure Properties of Well-founded Coalgebras

The fact that subcoalgebras of a well-founded coalgebra are well-founded does not necessarily require the assumption that Sub(A) is a frame. If A has universally smooth monomorphisms and F preserves finite intersections, then every subcoalgebra of a well-founded coalgebra is well-founded.

Timed Negotiations

This implies that computing the minimal or maximal execution time of deterministic (but unsound) acyclic negotiations cannot be done in PTIME (unless NP = PTIME). It is based on computing the maximal execution time of critical paths in the negotiation.

2 Negotiations: Definitions and Brexit example

Soundness and Reachability

A negotiation is sound if every run from the initial configuration can be extended to a final run. The hardness part comes from a reduction from 3-CNF-SAT, which can be found in the proof of Theorem 3.

3 Timed Negotiations

Its minimal execution time, denoted mintime(N), is the smallest δ(ρ) over all finite timed runs ρ of N. For the maximal execution time, one can assign a unique maximal execution time to each node.

4 High level view of the main results

The reason for the asymmetry between minimal and maximal execution times of a negotiation is that the execution time of a path is max_{p∈P} μ_k(p), for μ_k the last timed valuation, which breaks the symmetry between min and max.

5 Deterministic Negotiations

With this we claim that Nφ has a final run iff φ is satisfiable, which completes the first step of the proof. For the decision variant, we reformulate the problem as checking whether a given bit of the minimal execution time is 1.

Fig. 3. A part of Nφ where clause c_j is (i2 ∨ ¬i ∨ ¬i3) and clause c_k is (i4 ∨ ¬i ∨ i5).

6 Sound Negotiations

We associate with ρπ the timed run ρ+π, which associates to each node the latest possible execution date. The maximal execution time of an acyclic sound negotiation N = (N, n0, nf, X) can be computed in time O(|N| + |X|).
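The critical-path computation behind this bound is ordinary longest-path search in a directed acyclic graph. A generic sketch with per-node durations (the names and the graph encoding are illustrative, not the paper's notation):

```python
def max_execution_time(succ, duration):
    """Critical-path length in an acyclic graph: the longest total
    duration over all paths, via memoised DFS (linear in nodes + edges)."""
    memo = {}

    def longest_from(v):
        if v not in memo:
            memo[v] = duration[v] + max(
                (longest_from(w) for w in succ[v]), default=0)
        return memo[v]

    return max(longest_from(v) for v in succ)
```

Because each node's value is memoised, every node and edge is visited once, matching the linear-time claim for acyclic inputs.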

7 k -Layered Negotiations

Algorithmic properties

Proof (sketch). The first step is to compute Si layer by layer, following its inductive definition. We say that N is k∗-thread-bounded, meaning that there cannot be more than k∗ nodes in the same context X of any layer.

Minimal Execution Time

Given a k-layered negotiation N and T written in unary, deciding whether the minimal execution time of N is at most T can be done in PTIME. Basically, we have τ ∈ Ti if there exists a path reaching X = {n ∈ Ni | τ(n) ≠ ⊥}, and this path reaches node n ∈ X after τ(n) time units.

Fig. 5. The negotiation encoding Knapsack

8 Conclusion

However, change action models are very general and do not share the nice properties of a Cartesian differential category. In this paper, we introduce Cartesian difference categories as a bridge between Cartesian differential categories and change action models.

2 Cartesian Differential Categories

Cartesian Left Additive Categories

Cartesian Differential Categories

We also refer to [4, section 4] for the term calculus, which may help to better understand the axioms of the Cartesian differential category. A canonical example of a Cartesian differential category is the category of real smooth functions, which we will discuss in Section 5.1.

3 Change Action Models

Change Actions

Since a Cartesian difference category is a generalization of a Cartesian differential category, we postpone the discussion of the intuitions behind these axioms to Section 4.2, where we explain the axioms of a Cartesian difference category in detail.

Change Action Models

Informally, a change action model consists of a rule which, for each object A of X, associates a change action over it and, for each map, a choice of derivative. We will generalize this result and show in Section 4.4 that a Cartesian difference category always gives rise to a change action model.

4 Cartesian Difference Categories

  • Infinitesimal Extensions in Left Additive Categories
  • Cartesian Difference Categories
  • Another look at Cartesian Differential Categories
  • Cartesian Difference Categories as Change Action Models
  • Linear Maps and ε -Linear Maps

Using the same definition, we can also define linear maps in the Cartesian difference category. Linear maps in the Cartesian difference category satisfy many of the same properties found in [4, Lemma 2.2.2].

5 Examples of Cartesian Difference Categories

  • Smooth Functions
  • Calculus of Finite Differences
  • Module Morphisms
  • Stream calculus

The category Abω is a Cartesian difference category where the infinitesimal extension is given by the truncation operator, ε(f)([a_i]) = z(f([a_i])). A causal map is linear (in the sense of a Cartesian difference category) if and only if it is a group homomorphism.
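The calculus-of-finite-differences example listed above has a very concrete derivative: the forward difference Δf(x) = f(x+1) − f(x), whose "linear" maps are exactly the additive ones. A quick sketch of that operator (an illustration of the example, not the categorical definition):

```python
def forward_difference(f):
    """(Δf)(x) = f(x + 1) - f(x): the discrete analogue of the derivative
    in the finite-difference setting."""
    return lambda x: f(x + 1) - f(x)
```

Note that Δ(x²) = 2x + 1 rather than 2x: the extra "+1" is the kind of correction term that difference categories, unlike differential categories, are designed to accommodate. Group homomorphisms have a constant forward difference, mirroring the linearity characterization quoted above.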

6 Tangent Bundles in Cartesian Difference Categories

The Tangent Bundle Monad

The naturality of η and μ and the monad identities follow from the remaining difference combinator axioms. When X is a Cartesian differential category, with the difference structure arising by setting ε = 0, this tangent bundle monad coincides with the standard tangent monad of its tangent category structure [7,15].
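In the smooth example the tangent bundle is T(A) = A × A, and the action of T on a map sends a (point, tangent) pair to (value, pushforward of the tangent). Dual-number arithmetic gives a loose Python analogy of this behaviour (an illustration only, not the categorical definition):

```python
class Dual:
    """Pairs (point, tangent): a concrete shadow of T(A) = A x A.
    Arithmetic pushes tangents forward by the usual product rule."""
    def __init__(self, x, dx):
        self.x, self.dx = x, dx

    def __add__(self, other):
        return Dual(self.x + other.x, self.dx + other.dx)

    def __mul__(self, other):
        return Dual(self.x * other.x,
                    self.x * other.dx + self.dx * other.x)

def tangent_map(f, x):
    """Apply T(f) at the tangent vector (x, 1): returns (f(x), f'(x))."""
    out = f(Dual(x, 1.0))
    return out.x, out.dx
```

Applying `tangent_map` to a polynomial recovers its derivative, which is exactly how the tangent functor acts on smooth maps in the motivating example.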

The Kleisli Category of T

To define the difference combinator for the Kleisli category, first note that difference combinators by definition do not change the codomain. This allows one to construct many interesting and exotic examples of Cartesian difference categories, such as the Kleisli category of a Cartesian difference category (or, by iterating this process, the Kleisli category of the Kleisli category).

7 Conclusions and Future Work

Most relevant to this paper is [2], which, independently of [10], proposed the same syntax and axiomatisation for the usual signal flow calculus and shares with our contribution the same methodology: the use of string diagrams as a mathematical playground for the compositional study of different sorts of systems.

2 Background: the Affine Signal Flow Calculus

  • Syntax
  • String Diagrams
  • Denotational Semantics and Axiomatisation
  • Affine vs Linear Circuits

The resulting theory is that of Affine Interacting Hopf Algebras (aIH). The generators in (1) form a Hopf algebra, those in (2) form another Hopf algebra, and the interaction of the two gives rise to two Frobenius algebras. A crucial property is that each polynomial fraction can be expressed as a certain kind of circuit.

3 Operational Semantics for Affine Circuits


Sets of (n, m) trajectories are the arrows n → m of a prop Traj, with composition and monoidal product given as in Definition 6. The reader should note that the use of trajectories is not a semantic device to get rid of problematic computations.

4 Contextual Equivalence and Full Abstraction

From Polynomial Fractions to Trajectories

The missing link between polynomial fractions and trajectories is (formal) Laurent series: we now recall this notion. For the next step, notice that trajectories are really permutations of Laurent series: every pair of vectors (u, v) ∈ k((x))^n × k((x))^m, as on the left below, yields a trajectory κ(u, v), defined for all i ∈ Z as on the right below.

Proof of Full Abstraction

Since both are symmetric monoidal functors from a free prop, it is enough to check the theorem on the generators of ACirc. Note that this is not trivial: since we are considering all contexts, it does not make sense to simply consider "identity" contexts.

5 Functional Behaviour and Signal Flow Graphs

Conversely, all relations that are graphs of rational matrices can be expressed as signal flow graphs. In other words, signal flow graphs can be seen as processing the value stream from left to right.

6 Realisability

It does not belong to SF, but it can be read as a signal flow graph with an input bent and moved to the bottom right. Similarly for an input port that is not an output port.

7 Conclusion and Future Work

In this paper, we consider a variant of the synthesis problem that allows us to model programs with a variable number of processes. We show that the general case of the synthesis problem is undecidable for FO2 logic.

Fig. 1. Representation of P -execution as a mathematical structure
Fig. 1. Representation of P -execution as a mathematical structure

3 Parameterized Synthesis Problem

For the three formulas ϕ2, ϕ3, and ϕ4, note that, since the action is an environment action, if there is at least one process that is exclusively controlled by Environment, then there is no winning strategy. We show, however, that there is no cutoff. If so, then we know that Synth(F[R], Ns, Ne, Nse) is solvable, since it can be reduced to a finite number of simple synthesis problems over a finite alphabet.

Satisfiability and Normal Form for FO[ ∼ ]

We show that their encoding can be expressed in our logic, even when restricted to two variables, and can also be adapted. The normal form can be obtained using known normal-form constructions [23,41] for general FO logic [2], or using Ehrenfeucht-Fraïssé games [39], or by a direct inductive transformation in the spirit of [23].

From Synthesis to Parameterized Vector Games

Finally, the formula naturally translates into an acceptance condition F ⊆ C over configurations, where C is the set of local acceptance conditions, which are of the form (s ns, e ne, se nse), where s, e, se ∈ {=, ≥} and ns, ne, nse ∈ N. Let Ts denote the set of system transitions, Te the set of environment transitions, and T = Ts ∪ Te the set of all transitions.

5 Results for FO[ ∼ ] via Parameterized Vector Games

We show that D = C[→(N+1, ne, nse)] wins for System by exhibiting a corresponding winning strategy from D that carefully controls the placement of the extra token. Since D wins for System, by induction hypothesis, C wins for System, say by winning strategy f.

Fig. 3. Acceptance conditions for a game with no cutoff wrt. (0, 0, N)

6 Conclusion

The computational complexity of the unbounded domino problem (with implications for logical decision problems). In: Esparza, Murawski, editors, Foundations of Software Science and Computation Structures - 20th International Conference, FOSSACS 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 2017, Proceedings, Volume 10203 of Lecture Notes in Computer Science.

Controlling a random population

Our key technical contribution is the computability of the sequential flow problem, which we obtain by reducing it to a boundedness question in cost monadic second-order logic, using the max-flow min-cut theorem. We construct a reduction from the former to the latter in Section 4 and show the decidability of the sequential flow problem in Section 5.

2 The stochastic control problem

The question is whether for all n ∈ N there exists a strategy that ensures reaching the target almost surely from sn. Generalizing this example shows that if the answer to the stochastic control problem is "no", then the smallest number of tokens n for which there is no almost-sure strategy to reach tn from sn can be exponential in |Q|.

Fig. 1. The controller can almost surely reach t n from s n , for any n ∈ N .

3 The sequential flow problem

A subset is downward closed if and only if it is a finite union of pairwise incomparable ideals. An ideal is included in a downward-closed set if and only if it is included in one of the ideals of its decomposition.
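For vectors of counters ordered componentwise, an ideal is the downward closure of a single vector with entries in N ∪ {ω}, and both membership and ideal inclusion reduce to componentwise comparison. A small sketch using `float('inf')` for ω (the representation is an assumption of this illustration):

```python
OMEGA = float('inf')  # the "unbounded" entry of an ideal

def ideal_contains(ideal, vec):
    """Membership in the downward closure of `ideal`: componentwise <=."""
    return all(v <= b for v, b in zip(vec, ideal))

def ideal_included(a, b):
    """Ideal inclusion a↓ ⊆ b↓ is also componentwise."""
    return all(x <= y for x, y in zip(a, b))

def downward_closed_contains(ideals, vec):
    """A downward-closed set is a finite union of ideals; a vector belongs
    to it iff some ideal of the decomposition contains it."""
    return any(ideal_contains(i, vec) for i in ideals)
```

This mirrors the second statement above: checking inclusion of an ideal in a downward-closed set only requires comparing it against each ideal of the decomposition.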

4 Reduction of the stochastic control problem to the sequential flow problem

Eve wins if all tokens are in the target state, or if Adam interrupts infinitely often. Equivalently, this is the set of sequences of flows that win when played in the game GM.

5 Computability of the sequential flow problem

It is decidable whether F can be reached from all configurations of I using only flows from Flows. Since flows are special cases of capacities, we can compare flows to capacities in the same way.

Fig. 4. An instance of the sequential flow problem. We let Flows = a ↓ ∪ b ↓ ∪ c ↓ where a = ω ( q 2 , q 2 ) + ( q 2 , q 3 ) + ω ( q 4 , q 4 ), b = ω ( q 1 , q 2 ) + ( q 3 , q 4 ) + ω ( q 4 , q 4 ), and c = ω ( q 1 , q 1 ) + ( q 2 , q 1 ) + ω ( q 2 , q 2 )
Fig. 4. An instance of the sequential flow problem. We let Flows = a ↓ ∪ b ↓ ∪ c ↓ where a = ω ( q 2 , q 2 ) + ( q 2 , q 3 ) + ω ( q 4 , q 4 ), b = ω ( q 1 , q 2 ) + ( q 3 , q 4 ) + ω ( q 4 , q 4 ), and c = ω ( q 1 , q 1 ) + ( q 2 , q 1 ) + ω ( q 2 , q 2 )

6 Conclusions

We now define a cost MSO formula Ψ̃(s) which is equivalent (as a cost function) to the minimal cost of a cut of the previous graph, and hence to Ψ(s). The second question concerns more accurate modelling of biological systems: can we refine the stochastic control problem by taking into account the synchronization time of the controller and restricting it to reasonable bounds?


Valk, R., Jantzen, M.: The residue of vector sets with applications to decidability problems in Petri nets.

Decomposing Probabilistic Lambda-Calculi

Related Work

In all of the aforementioned works, probabilistic λ-calculi were implicitly endowed with a call-by-name or call-by-value strategy, for the reasons described above. Our permutative reduction is a refinement of that for the call-by-name probabilistic λ-calculus of [20], and an implementation of the equational theory of (ordered) binary decision trees via rewriting [27].

2 The Probabilistic Event λ -Calculus Λ PE

The reduction relation → = →β ∪ →p in ΛPE consists of a β-reduction →β and a permutative or p-reduction →p, both defined as contextual closures of the rules given in Figure 1. By inspecting the rewrite rules in Figure 1, we can characterize the normal forms of →p as follows.

Fig. 1. Reduction rules for β-reduction and p-reduction.

3 Properties of Permutative Reduction

A key feature of p-reduction is that a choice ⊕a permutes out of argument position, but its generator a does not, as below. Note that all six reduction rules from ⊕λ to ⊕ in Figure 1 are of the following form.

4 Confluence

Parallel Reduction and Permutative Reduction

We address this by adapting p-reduction to labelled terms, as a strategy in →p that permutes past the labelled redex in one step. By confluence, R →p M, and by induction on the sum of the lengths of →p-paths from R (smaller than from N), we obtain R →p M and thus N →p M.

Complete Reduction

For the upper and left regions, by Lemma 20, any reduction path maps to one ending in Mp.

5 Strong Normalization for Simply-Typed Terms

Note that the type structure is exactly that of the usual, vanilla, simply-typed λ-calculus (although the terms are of course different), so we can reuse most of a standard strong normalization proof, for example the version given in Ralph Loader's notes [21], page 17. If M is a choice L ⊕a P, we apply Lemma 23 and the induction hypothesis and conclude.

Fig. 4. Labeling Terms
Fig. 4. Labeling Terms

6 Projective Reduction

In this way we preserve the intuition of both permutative and projective reduction: we obtain a qualified version of equivalence (4) (see (5) below), and we are able to reduce any generator anywhere in a term: under (other) generators and choices, under abstractions, and in function position. Projective reduction is an invariant for permutative reduction as follows (with the case of c2 symmetric to c1, and where D[ ] is the context).

7 Call-by-value Interpretation

To simulate a reduction step, if ⊕ occupies the nth position in θ, then the nth position in the context stream must be the corresponding element. Since β-reduction survives the translation and labelling process intact, we can simulate probabilistic call-by-value reduction by projective and β-reduction.

8 Conclusions and Future Work

In: Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2016. This characterization corrects Theorem 1 of [4] and highlights the flaw in the proof of the membership problem.

Fig. 1: (a) and (b): two MSCs that violate causal delivery. (c) and (d): an MSC and its conflict graph

3 k -synchronizable Systems

It is not the case that an MSC is k-synchronous if and only if every Hamiltonian cycle in its conflict graph has size at most k and no RS edge occurs in any cyclic path. This graph is not Hamiltonian; its largest cycle is in fact only of size 4.
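For the small conflict graphs used in these examples, bounds on cycle size can be checked by brute force. An exponential sketch for tiny digraphs, with the edge set given as pairs (the encoding is an assumption of this illustration):

```python
from itertools import permutations

def largest_cycle(edges, nodes):
    """Length of the largest simple cycle (of length >= 2) in a small
    digraph, by trying every ordered selection of nodes. Exponential,
    but fine for example-sized conflict graphs."""
    best = 0
    for r in range(2, len(nodes) + 1):
        for perm in permutations(nodes, r):
            # perm is a cycle if each consecutive pair (wrapping) is an edge
            if all((perm[i], perm[(i + 1) % r]) in edges for i in range(r)):
                best = max(best, r)
    return best
```

On a 4-node directed square this reports a largest cycle of size 4, the kind of check used when arguing that a conflict graph is not Hamiltonian.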

Fig. 2: (a) the MSC of Example 1.1. (b) the MSC of Example 1.2. (c) the MSC of Example 2 and (d) its conflict graph.

4 Decidability of Reachability for k -synchronizable Systems

This means that at each step we only have a partial view of the global conflict graph. Once we have this enriched local view of the conflict graph, we take its extended version.

Fig. 3: Deduction rules for extended dependency edges of the conflict graph

5 Decidability of k -synchronizability for Mailbox Systems

Note that there are only finitely many abstract configurations of the form (l, B), with l a control state and B : P → (2^P × 2^P). It follows that it is decidable whether a given abstract configuration of the form (l, B) is reachable from the initial configuration by a k-synchronizable execution.

6 k -synchronizability for Peer-to-Peer Systems

Finally, the recognition of bad executions works in the same way as for mailbox systems. As for mailbox systems, we can thus conclude that, for a given k, synchronizability is decidable.

7 Concluding Remarks and Related works

Models consist of objects (in the sense of OO programming) with attributes (a.k.a. tagged records). For example, the source model consists of three objects identified by their oids (object identifiers) #A, #J, #M (think of employees of some company), with attribute values as shown in the table: attribute Expr. refers to Experience measured in number of years, and Depart. is the column of department names. The table schema, that is, the triple of attributes (Name, Expr., Depart.) together with their value domains String, Integer, String, defines a model space A.

Fig. 1: Example of update propagation

