Java track
(Created page with "The Java track is concerned with the monitoring of Java programs. A benchmark will consist of a Java program and a specification (see the rules document). Please follow the e...") |
(→Team 4 (Mufin)) |
||
(12 intermediate revisions by 3 users not shown) | |||
Line 1: | Line 1: | ||
The Java track is concerned with the monitoring of Java programs. A benchmark will consist of a Java program and a specification (see the rules document). | The Java track is concerned with the monitoring of Java programs. A benchmark will consist of a Java program and a specification (see the rules document). | ||
− | Please follow the example of Team 0 in how to organise your submission i.e. provide a team name and description and create a page per benchmark (named Track TeamX | + | Please follow the example of Team 0 in how to organise your submission i.e. provide a team name and description and create a page per benchmark (named Track TeamX BenchmarkY). |
Please use the below template to create your benchmark Wiki pages. | Please use the below template to create your benchmark Wiki pages. | ||
Line 17: | Line 17: | ||
The ExampleName tool works by magic dust and hope, utilising a new constant time algorithm for sorting. This paragraph is an example of the fact that there should be a few sentences written about the team and its tool. The tool is developed by John Lennon and Tintin. Further information about the tool can be found at [http://www.ficticiousAddress.com]. | The ExampleName tool works by magic dust and hope, utilising a new constant time algorithm for sorting. This paragraph is an example of the fact that there should be a few sentences written about the team and its tool. The tool is developed by John Lennon and Tintin. Further information about the tool can be found at [http://www.ficticiousAddress.com]. | ||
− | * [[ | + | * [[ Java Team0 Benchmark1 | Benchmark 1 ]] |
− | == Team 1 == | + | == Team 1 (BeepBeep3) == |
− | == Team 2 == | + | == Team 2 (Larva) == |
+ | LARVA is a runtime verification platform for runtime verification of critical systems. The language was specifically designed to allow a user to specify the security properties of a JAVA software system. Once these are specified in a text file, the compiler generates the necessary code in JAVA and AspectJ which verifies that the properties in the script are being adhered to during the execution of the system. The advantage of taking this approach is that the monitoring is done without manually changing any code in the system being monitored. Further information about the tool can be found at [http://www.cs.um.edu.mt/svrg/Tools/LARVA/]. | ||
+ | |||
+ | * [[ Java Team2 Benchmark1 | Benchmark 1 ]] | ||
+ | |||
+ | * [[ Java Team2 Benchmark2 | Benchmark 2 ]] | ||
+ | |||
+ | * [[ Java Team2 Benchmark3 | Benchmark 3 ]] | ||
+ | |||
+ | == Team 3 (MarQ) == | ||
+ | |||
+ | MarQ (Monitoring at runtime with QEA) monitors specifications written as Quantified Event Automata (QEA) [http://www.cs.man.ac.uk/~david/sm/qea.pdf]. These are described in the original QEA paper. The MarQ tool is overviewed in a tool paper [http://www.cs.man.ac.uk/~david/sm/marq.pdf]. QEA is based on the notion of trace-slicing, extended the existential quantification and free variables. The MarQ tool is written in Java and uses AspectJ for online monitoring of Java programs. Further details and examples can be found here [http://www.cs.man.ac.uk/~david/sm/reger-thesis.pdf] and here [http://www.havelund.com/Publications/syde-havelund-reger-2015.pdf]. The tool is available on github [https://github.com/selig/qea] but is unstable, please contact the author if you want a stable version. | ||
+ | |||
+ | * [[ Java Team3 Benchmark1 | Benchmark 1 ]] | ||
+ | |||
+ | * [[ Java Team3 Benchmark2 | Benchmark 2 ]] | ||
+ | |||
+ | * [[ Java Team3 Benchmark3 | Benchmark 3 ]] | ||
+ | |||
+ | == Team 4 (Mufin) == | ||
+ | |||
+ | Mufin is a tool for runtime verification of object-oriented software systems. It uses a novel algorithm for monitoring the individual behaviour and interaction of an unbounded number of runtime objects. This allows for evaluating complex correctness properties that take runtime data in terms of object identities into account. In particular, the underlying formal model can express hierarchical interdependencies of individual objects and the proposed algorithm uses union-find data structures to manage individual instances. The approach is described in a paper [http://www.isp.uni-luebeck.de/research/publications/runtime-monitoring-union-find-structures] and more information about the tool can be found at [http://www.isp.uni-luebeck.de/mufin]. | ||
+ | |||
+ | * [[ Java Team4 Benchmark1 | Benchmark 1 ]] | ||
+ | * [[ Java Team4 Benchmark2 | Benchmark 2 ]] | ||
+ | * [[ Java Team4 Benchmark3 | Benchmark 3 ]] |
Latest revision as of 17:56, 2 June 2016
The Java track is concerned with the monitoring of Java programs. A benchmark will consist of a Java program and a specification (see the rules document).
Please follow the example of Team 0 when organising your submission, i.e. provide a team name and description and create a page per benchmark (named Track TeamX BenchmarkY; for this track, Java TeamX BenchmarkY).
Please use the template below to create your benchmark Wiki pages. Do not modify the template.
Benchmark Template - Java track
Team 0 (ExampleName)
The ExampleName tool works by magic dust and hope, utilising a new constant-time algorithm for sorting. This paragraph is an example of the few sentences that should be written about the team and its tool. The tool is developed by John Lennon and Tintin. Further information about the tool can be found at [http://www.ficticiousAddress.com].

* [[Java Team0 Benchmark1 | Benchmark 1]]
Team 1 (BeepBeep3)
Team 2 (Larva)
LARVA is a runtime verification platform for critical systems. Its specification language was designed to let a user state the security properties of a Java software system. Once the properties are specified in a text file, the compiler generates the Java and AspectJ code that verifies, during execution, that the system adheres to them. The advantage of this approach is that monitoring requires no manual changes to the code of the system being monitored; a sketch of this style of monitor is given after the benchmark list below. Further information about the tool can be found at [http://www.cs.um.edu.mt/svrg/Tools/LARVA/].

* [[Java Team2 Benchmark1 | Benchmark 1]]
* [[Java Team2 Benchmark2 | Benchmark 2]]
* [[Java Team2 Benchmark3 | Benchmark 3]]
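As a rough illustration only (this is not LARVA's actual generated output; the property, class names and pointcuts are all hypothetical), a hand-written AspectJ annotation-style monitor of the kind such a compiler can produce might look like this:

```java
// Illustrative sketch, NOT LARVA's generated code: an AspectJ (annotation
// style) monitor for a hypothetical property "a file must be opened before
// it is read". SomeMonitoredSystem and its methods are made up.
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class FileUsageMonitor {

    private enum State { CLOSED, OPEN }
    private State state = State.CLOSED;   // automaton state of the property

    // Fires before any call to the (hypothetical) monitored method openFile.
    @Before("call(* SomeMonitoredSystem.openFile(..))")
    public void onOpen() {
        state = State.OPEN;
    }

    // Fires before any call to readFile; flags a violation if the
    // automaton is not in the OPEN state.
    @Before("call(* SomeMonitoredSystem.readFile(..))")
    public void onRead() {
        if (state != State.OPEN) {
            System.err.println("Property violated: readFile before openFile");
        }
    }
}
```

Because the aspect is woven into the system at compile or load time, the monitored code itself is never edited by hand, which is the point LARVA's approach makes.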
Team 3 (MarQ)
MarQ (Monitoring at runtime with QEA) monitors specifications written as Quantified Event Automata (QEA), which are described in the original QEA paper [http://www.cs.man.ac.uk/~david/sm/qea.pdf]. The MarQ tool itself is overviewed in a tool paper [http://www.cs.man.ac.uk/~david/sm/marq.pdf]. QEA is based on the notion of trace slicing, extended with existential quantification and free variables; a sketch of the slicing idea is given after the benchmark list below. The MarQ tool is written in Java and uses AspectJ for online monitoring of Java programs. Further details and examples can be found in [http://www.cs.man.ac.uk/~david/sm/reger-thesis.pdf] and [http://www.havelund.com/Publications/syde-havelund-reger-2015.pdf]. The tool is available on GitHub [https://github.com/selig/qea] but is unstable; please contact the author if you want a stable version.

* [[Java Team3 Benchmark1 | Benchmark 1]]
* [[Java Team3 Benchmark2 | Benchmark 2]]
* [[Java Team3 Benchmark3 | Benchmark 3]]
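A minimal sketch of the trace-slicing idea (this is not the MarQ API; the class and method names are hypothetical): the event trace is projected onto one slice per quantified object, and each slice is checked by a small automaton. Here the quantified objects are iterators and the property is the classic "hasNext() before next()":

```java
// Sketch of trace slicing for "forall i : Iterator, hasNext(i) must
// precede next(i)". One boolean of automaton state per slice.
import java.util.IdentityHashMap;
import java.util.Iterator;
import java.util.Map;

public class HasNextSliceMonitor {

    // true = hasNext() observed since the last next(), so next() is allowed.
    private final Map<Iterator<?>, Boolean> slices = new IdentityHashMap<>();

    public void onHasNext(Iterator<?> it) {
        slices.put(it, true);
    }

    public void onNext(Iterator<?> it) {
        boolean permitted = slices.getOrDefault(it, false);
        slices.put(it, false);   // next() consumes the permission
        if (!permitted) {
            System.err.println("Violation: next() without preceding hasNext()");
        }
    }
}
```

In MarQ the automaton and the quantifications come from the QEA specification rather than being hard-coded as above.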
Team 4 (Mufin)
Mufin is a tool for runtime verification of object-oriented software systems. It uses a novel algorithm for monitoring the individual behaviour and interaction of an unbounded number of runtime objects. This makes it possible to evaluate complex correctness properties that take runtime data, in the form of object identities, into account. In particular, the underlying formal model can express hierarchical interdependencies between individual objects, and the algorithm uses union-find data structures to manage individual instances; a sketch of such a structure is given after the benchmark list below. The approach is described in a paper [http://www.isp.uni-luebeck.de/research/publications/runtime-monitoring-union-find-structures] and more information about the tool can be found at [http://www.isp.uni-luebeck.de/mufin].

* [[Java Team4 Benchmark1 | Benchmark 1]]
* [[Java Team4 Benchmark2 | Benchmark 2]]
* [[Java Team4 Benchmark3 | Benchmark 3]]
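As an illustrative sketch only (not Mufin's implementation; all names are made up), a union-find structure keyed by object identity lets a monitor merge groups of runtime objects that become interdependent and look up a group's representative in near-constant amortised time:

```java
// Union-find over object identities, e.g. for grouping monitored objects
// whose verdicts must be tracked together.
import java.util.IdentityHashMap;
import java.util.Map;

public class ObjectUnionFind {

    // Maps each seen object to its parent; roots map to themselves.
    private final Map<Object, Object> parent = new IdentityHashMap<>();

    // Returns the representative of x's group, compressing paths as it goes.
    public Object find(Object x) {
        Object p = parent.get(x);
        if (p == null) {
            parent.put(x, x);        // first sighting: x is its own root
            return x;
        }
        if (p == x) {
            return x;
        }
        Object root = find(p);
        parent.put(x, root);         // path compression
        return root;
    }

    // Merges the groups of a and b, e.g. when a becomes a child of b in the
    // monitored object hierarchy.
    public void union(Object a, Object b) {
        Object ra = find(a);
        Object rb = find(b);
        if (ra != rb) {
            parent.put(ra, rb);
        }
    }
}
```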