Models and Tools to Manage Security in Multiagent Systems

Transcript

UNIVERSITÀ DEGLI STUDI DI PARMA
DIPARTIMENTO DI INGEGNERIA DELL’INFORMAZIONE
PARMA (I) - VIALE DELLE SCIENZE
TEL. 0521-905800 - FAX 0521-905798
Michele Tomaiuolo
Models and Tools to Manage Security in Multiagent Systems
Final dissertation presented for the PhD programme in
Tecnologie dell’Informazione (Information Technologies) - XVIII Cycle
January 2005
Table of Contents
1 Introduction
2 Agents for service composition
2.1 Service Composition Frameworks
2.2 Standardization
2.2.1 Web services
2.2.2 Grid services
2.2.3 FIPA Agents
2.3 Agent-based grid services
2.4 References
3 Agentcities and openNet
3.1 Network
3.2 Agentcities Network Architecture
3.2.1 Agent Platform Directory
3.2.2 Agent Directory
3.2.3 Service Directory
3.2.4 Drawbacks
3.3 openNet Network Architecture
3.3.1 Platform Service
3.3.2 Platform and Agent Directory Service
3.3.3 Service Discovery
3.4 Service Composition
3.4.1 Event Organizer
3.4.2 Trade House
3.4.3 Auction House
3.4.4 Ontology Service
3.4.5 SME Access
3.4.6 Venue Finder
3.4.7 Banking Service
3.4.8 Security Service for e-Banking
3.5 Conclusions
3.5.1 References
4 Public-key infrastructures
4.1 X.509 Public Key Certificates
4.1.1 Version
4.1.2 Serial Number
4.1.3 Signature
4.1.4 Issuer, Subject
4.1.5 Validity
4.1.6 Subject Public Key Info
4.1.7 Issuer Unique ID, Subject Unique ID
4.2 Certificate extensions
4.2.1 Subject Alternative Name
4.2.2 Issuer Alternative Name
4.2.3 Subject Directory Attributes
4.2.4 Authority Key Identifier, Subject Key Identifier
4.2.5 Key Usage, Extended Key Usage, Private Key Usage Period
4.3 CA Certificates
4.3.1 Basic Constraints
4.3.2 Name Constraints
4.3.3 Certificate Policies
4.3.4 From a CA tree to a CA “jungle”
4.4 Private extensions
4.5 Certificate Revocation
4.6 X.509 Attribute Certificates
4.7 Globus Proxy Certificates
4.8 PGP
4.9 SPKI
4.9.1 Authorization Certificate
4.9.2 Name Certificates
4.9.3 Certificate Revocation
4.9.4 Logical foundation
4.10 Conclusions
4.11 References
5 Trust Management in Multi-agent Systems
5.1 Trust, Security and Delegation
5.2 Security Threats in a Distributed MAS
5.3 Access Control in a Distributed MAS
5.4 Delegation Certificates
5.5 Key Based Access Control
5.6 Local Names
5.7 Distributed RBAC
5.8 Trust Management Principles
5.9 Conclusions
5.10 References
6 Security in JADE
6.1 Principals, Resources and Permissions
6.2 User Authentication
6.3 Certificate Encoding
6.4 Access Control
6.5 Semi-Automated Trust Building
6.6 Conclusions
6.7 References
7 Security in openNet
7.1 Rule-based agents
7.1.1 Drools4JADE
7.1.2 Application-level security
7.2 Interoperability
7.2.1 XrML and ODRL
7.2.2 SAML Overview
7.2.3 SAML Specifications
7.2.4 SAML from the “Trust Management” Perspective
7.2.5 Authentication Context
7.2.6 XACML Overview
7.2.7 XACML from the “Trust Management” Perspective
7.2.8 Threshold Subjects
7.3 Conclusions
7.4 References
8 Conclusions
9 Acknowledgments
1 Introduction
While a number of architectures and systems are dealing with the problem of
service composition, some issues remain open.
To obtain the user's trust, applications must provide logically sound,
predictable results. In fact, much effort is being applied to semantically
enriched environments, where services are annotated and distinguished not
only through simple plain text pattern matching routines, but also through
some analysis of their intended goals and their internal processes.
This thesis is meant to build on such semantically enriched environments,
augmenting them to allow the secure delegation of access rights among
cooperating entities.
Indeed, achieving the dynamic and intelligent composition of services
clearly requires some delegation of goals and duties among partners. But
these delegations can never come into effect unless they are associated
with a corresponding delegation of the privileges needed to access
resources and complete delegated tasks, or achieve desired goals.
Also, the user should always be considered as the ultimate source of trust,
and he should be provided with means to carefully administer the flow of
delegated permissions. In particular, no a-priori trusted parties should be
assumed to exist in the system, as this would imply a forced choice for the
user, and without real choice there is no real trust.
Moreover, the presence of some third party as a globally trusted entity
implies that all systems participating in the global environment have to
equally trust it. While this probably appeared a feasible way to enable the
widespread use of digital signatures, in reality the global deployment of
“public key infrastructures” has fallen well short of expectations.
Nowadays, new technologies, such as new protocols and certificate
representations, are gaining momentum. They follow a different approach
toward security in global environments, an approach which, paradoxically,
is founded on the concept of “locality”.
Federation of already deployed security systems is considered the key to
build global security infrastructures. Users are no longer forced into some
out-of-the-box solution to their particular security issues. They are no
longer asked to rebuild the whole system, nor are they obliged to make it
dependent upon some global authority, to make it interoperable with others.
Instead, they are provided with means to manage the trust relations they
build with other entities operating in the same, global environment. In the
same manner as people collaborate in the real world, systems are being made
interoperable in the virtual world. Cooperations and agreements among
companies and institutions are making virtual organizations both a reality
and a necessity. But virtual organizations will never succeed if existing
technologies do not match their needs.
This thesis deals with trust management in open and decentralized
environments. Analysis and solutions are geared towards peer-to-peer
networks, intended not only as a technology, but above all as a web of trust
relationships, where parties interoperate directly, without reliance on any
centralized directory or authority.
Securing access to the resources made available by the peers is a requirement
to make peer-to-peer networks a more widespread paradigm of cooperation.
The secure management of trust relationships and the ability to precisely
control the flow of delegated permissions to trusted entities are
fundamental requirements to allow the composition of the most disparate services provided
on the network.
2 Agents for service composition
Today, a number of technologies, like web services, grid services and
autonomous agent systems, are all emerging as unifying standards for the
integration of distributed applications and components, enabling cooperative
applications among different organizations and enterprises. Trying to
overcome the strong differences among these approaches, a number of
research works are being directed toward the identification of a more generic
architecture, to allow the composition of heterogeneous services provided on
the web. In particular, agent technology has long shown potential for
providing great advances in interoperation between heterogeneous systems
by enabling:
• High level (more semantic) system-system communication
• Dynamic service agreements between systems
• Proactivity
Agent technology also naturally lends itself to peer-to-peer interactions,
which are potentially much richer than client-server approaches. In this field,
the Agentcities/openNet initiative is trying to create a next generation
Internet, based upon a worldwide network of services. These services,
ranging from simple e-commerce to integrating business processes into a
virtual organization, are clustered using the metaphor of a real or virtual city.
They can be accessed across the Internet, and have an explicit representation
of the capabilities that they offer. The ultimate aim is to enable the dynamic,
intelligent and autonomous composition of services.
In fact, the main rationale for using middle agents is their ability to
adapt to rapidly evolving environments while still achieving their goals. In
many
many cases, this can only be accomplished by collaborating with other agents
and leveraging the services provided by cooperating agents. This is
particularly true when the desired goal is the creation of a new service to be
provided to a user, or even to the whole community, as this scenario often
calls for the composition of a number of simple services that are required to
create the desired compound service.
2.1 Service Composition Frameworks
In [25] a number of existing service composition frameworks are analyzed,
showing they often rely on a common set of basic components and features
provided by an underlying infrastructure level, including:
Service discovery – to find out all instances of a particular service. A useful
feature is the ability to discover the services on the basis of their particular
functionality, and not the way they are invoked.
Service Coordination and Management – to coordinate the services involved
in the composition. This component usually provides (to different extents)
fault tolerance, availability, efficiency, and optimization of used
resources, so as to reduce the cost of the composite service. Scalability
issues require this component not to be completely centralized.
Information Exchange Infrastructure – to enable communication among the
different entities involved in the composition. It should allow the integration
of services following different types of information exchange, such as
message passing and remote method invocation.
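As a concrete illustration of the first of these components, a toy in-memory discovery registry might index services by the functionality they provide rather than by the way they are invoked. The class and method names below are invented for illustration and do not come from any of the cited frameworks.

```python
class InMemoryDiscovery:
    """Toy service discovery: services are indexed under the
    functionality they provide, not the way they are invoked."""

    def __init__(self):
        self._by_functionality = {}

    def register(self, functionality, endpoint):
        # Several instances may provide the same functionality.
        self._by_functionality.setdefault(functionality, []).append(endpoint)

    def find(self, functionality):
        # Return all known instances of a particular service.
        return list(self._by_functionality.get(functionality, []))
```

A real discovery component would of course match richer functional descriptions, not exact strings.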
One of the most advanced platforms for service composition is eFlow,
developed by HP [26]. The system consists of a service discovery system, a
composition engine and elementary services. The composition engine
controls the state of all composition processes, which are modelled as graphs
consisting of service nodes, event nodes and decision nodes. Moreover,
transaction regions can be defined to identify sections of the graph to be
executed as atomic operations.
Another platform for service composition is SWORD, described in [27]. It is
founded on a rule-based expert system that is used to find a plan for realizing
the desired composite service. Each existing service is modelled as a rule,
and its inputs and outputs are specified as preconditions and postconditions.
The sequence of rules activated by the rule-engine represents a plan for the
composition.
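The rule-chaining idea behind SWORD can be sketched as a simple forward chainer: each service is a rule whose inputs are preconditions and whose outputs are postconditions, and the sequence of fired rules is a plan for the composition. The service names and fact sets below are invented examples, not SWORD's actual interface.

```python
def plan_composition(services, available, goal):
    """Forward-chain over service 'rules': a service fires when its
    input facts are all available, adding its output facts. The
    sequence of fired services is a plan for the composite service."""
    available = set(available)
    plan = []
    progress = True
    while goal not in available and progress:
        progress = False
        for name, (inputs, outputs) in services.items():
            # Fire a rule only once, and only if it adds new facts.
            if name not in plan and inputs <= available and not outputs <= available:
                plan.append(name)
                available |= outputs
                progress = True
    return plan if goal in available else None

# Two invented services: one geocodes an address, one forecasts by coordinates.
services = {
    "geocode": ({"address"}, {"coordinates"}),
    "weather": ({"coordinates"}, {"forecast"}),
}
plan = plan_composition(services, {"address"}, "forecast")
```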
A different approach is used instead in the Ninja service composition
platform developed at UC Berkeley [28]. Its main aim is enabling clients
with different characteristics and network connection abilities to invoke a set
of existing network services. One of the founding concepts is that of a service
path, matching the pipe/filter model of software architecture. It consists of a
set of operators, modelling the invoked services, and connectors, network
connections to transport data from one operator to the following one. A
logical path is created using a shortest path strategy.
The same ideas are at the basis of Paths [29] and other similar projects [30].
All these systems will probably benefit greatly from the standardization of
a formalism to represent service interfaces [18]. In [31] the concept of path is
used for addressing load balancing and stability issues. An analogy is
supposed to exist between quality of service issues in routing algorithms and
service composition, and in both cases the cost of using a particular link is
supposed to be inversely proportional to its available capacity. To help
selecting a sequence of services for a path, i.e. a composite service, the least-inverse-available-capacity metric is introduced, modelled after the least-distance metric in network literature.
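The analogy with quality-of-service routing can be made concrete with a small shortest-path sketch, where each directed link's cost is the inverse of its available capacity. The graph and function below are illustrative, not taken from [31].

```python
import heapq

def least_inverse_capacity_path(edges, source, target):
    """Pick a service path minimizing the sum of 1/available-capacity
    over its links: heavily loaded links (low spare capacity) cost more."""
    graph = {}
    for u, v, capacity in edges:
        graph.setdefault(u, []).append((v, 1.0 / capacity))
    # Standard Dijkstra over the inverse-capacity weights.
    queue = [(0.0, source, [source])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for succ, weight in graph.get(node, []):
            if succ not in seen:
                heapq.heappush(queue, (cost + weight, succ, path + [succ]))
    return None
```

With links A→B and B→C of capacity 10 and a direct link A→C of capacity 1, the two-hop path costs 0.2 against 1.0 for the direct one, so the less loaded route wins.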
2.2 Standardization
Web services, grid services and agent-based grid services define a path
towards unifying technologies for the integration of distributed
applications and components, to provide different kinds of services via the
Web. In the following, we present a short introduction to the state of the
art on these three topics.
2.2.1 Web services
Web services [2] are emerging as an interesting abstraction for building
distributed systems. Their strength lies above all in the ability to link
together heterogeneous components, developed using various programming
languages and paradigms and deployed in different computing environments.
Web services publish their interfaces and invocation methods on intranet or
internet repositories, where they can be dynamically discovered by clients.
An XML-based protocol can then be used to invoke the services and
obtain their results.
The most important standards which web services are built on are the SOAP
protocol [3], to transfer messages between distributed applications, or
services, and the XML language to encode exchanged data. Essentially,
SOAP is a one-way stateless message transfer protocol. More complex
interactions can be built by sending multiple one-way messages and
exploiting features of the underlying protocols. The SOAP protocol can be
layered on different message-oriented technologies, like HTTP or SMTP. Even
if the HTTP protocol is widely adopted as a de-facto founding technology,
this is not a strict requirement of the SOAP standard. No assumptions are
made on the semantics of the exchanged messages. SOAP only deals with the
routing of messages over reliable transport mechanisms, possibly even across
internet firewalls. Conversely, XML extensibility features are used to
convey arbitrary structured information among distributed components.
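A one-way SOAP message of this kind can be sketched with Python's standard xml.etree module. The getQuote payload is an invented example; the envelope namespace is SOAP 1.1's.

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(body_payload):
    """Wrap an application-specific XML payload in a SOAP 1.1 envelope.
    SOAP says nothing about the payload's semantics: XML extensibility
    is what carries the structured information."""
    ET.register_namespace("soap", SOAP_ENV)
    envelope = ET.Element("{%s}Envelope" % SOAP_ENV)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_ENV)
    body.append(body_payload)
    return ET.tostring(envelope, encoding="unicode")

# An invented application payload: a stock-quote request.
payload = ET.Element("getQuote")
ET.SubElement(payload, "symbol").text = "HPQ"
message = soap_envelope(payload)
```

The resulting string could be carried over HTTP, SMTP, or any other transport, as discussed above.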
Web services can be advertised and discovered on UDDI registries [4],
available either as intranet accessible services or as internet public
directories. Conceptually, the information provided in a UDDI registry is
threefold: a “white pages” service contains a description of the registered
businesses, contact information and a simple categorization; a “yellow pages”
service lists all the services provided by each business, some categories and
other short information that describe the services; a “green pages” service
provides the technical details of each service, for example its type and the
URL where it can be contacted. In fact, each service can be associated with
a service type, where interface definitions, message formats, message
protocols, and security protocols can be defined.
WSDL [5], an XML based language, can be used to describe the interfaces,
together with their invocation methods, exposed by web services. These
descriptions can be advertised and discovered using UDDI registries. WSDL
describes a Web service at two different levels. A first, more abstract, level
allows the description of the messages that a service can send and receive,
typically using an XML schema. Message exchange patterns can be
described as operations, and operations can be grouped in interfaces.
Another, more concrete, level allows the specification of the bindings between
interfaces and transport protocols. Here services are associated with concrete
endpoints, i.e., network addresses where they can be contacted.
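The two levels can be seen in a stripped-down, WSDL 2.0-style document: the interface and its operation form the abstract level, while the service endpoint with its address forms the concrete one. The document below is a hand-written toy, not schema-valid WSDL.

```python
import xml.etree.ElementTree as ET

WSDL_NS = "http://www.w3.org/ns/wsdl"

# A minimal, hand-written WSDL 2.0-style document: an abstract
# interface with one operation, and a concrete service endpoint.
DOCUMENT = """
<description xmlns="http://www.w3.org/ns/wsdl">
  <interface name="QuoteInterface">
    <operation name="getQuote"/>
  </interface>
  <binding name="QuoteSoapBinding" interface="QuoteInterface"/>
  <service name="QuoteService" interface="QuoteInterface">
    <endpoint name="QuoteEndpoint" binding="QuoteSoapBinding"
              address="http://example.org/quote"/>
  </service>
</description>
"""

def describe(document):
    """Separate the abstract level (operations) from the concrete
    level (endpoint addresses) of a WSDL-style description."""
    root = ET.fromstring(document)
    ops = [o.get("name") for o in root.iter("{%s}operation" % WSDL_NS)]
    endpoints = [e.get("address") for e in root.iter("{%s}endpoint" % WSDL_NS)]
    return ops, endpoints
```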
2.2.2 Grid services
The Open Grid Services Architecture (OGSA) [6] is an effort towards the
standardization of a grid architecture. The proposed standard is based on web
services concepts and technologies. The founding concept is the grid
service, that is, a web service exposing a well-defined set of interfaces
and behaviors.
The complete interface of a Grid service is described in a WSDL document
and advertised through public registries.
Mechanisms are provided for discovering the characteristics of available
services. A standard representation of service data is defined as a structure of
XML elements and a standard method is exposed by grid services for pulling
their individual service data.
Registry services can be used to register information about grid services
instances and handle map services can translate handles into concrete
references to access grid service instances.
A standard factory interface is defined, together with its semantics, to allow
the dynamic creation and management of new service instances.
Two interfaces are defined for freeing the resources associated with grid
service instances. Either services can be destroyed explicitly, or their lifetime
can be managed by defining a termination time (soft lifetime management).
The termination time can be refreshed by keep-alive requests.
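Soft lifetime management can be sketched in a few lines: the instance remembers a termination time, keep-alive requests push it forward, and an expired instance can be reclaimed. Times are plain numbers here for simplicity, and the class is an illustration, not OGSA's actual interface.

```python
class GridServiceInstance:
    """Soft-state lifetime management: the instance carries a
    termination time which keep-alive requests refresh; without
    them it expires and its resources can be reclaimed."""

    def __init__(self, now, lifetime):
        self.termination_time = now + lifetime

    def keep_alive(self, now, lifetime):
        # Refresh the termination time; later deadlines win.
        self.termination_time = max(self.termination_time, now + lifetime)

    def expired(self, now):
        return now >= self.termination_time
```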
Abstractions are provided to allow the asynchronous notification of
interesting state changes in grid services. Services can expose notification
source or notification sink interfaces, to subscribe and push service data in an
event-driven fashion.
2.2.3 FIPA Agents
A different, but somewhat parallel, effort to standardize a framework for the
integration of remote services and applications is instead founded on
autonomous software agents. The FIPA standards [8] describe at various
levels how two autonomous agents can locate each other by registering
themselves on public directories and communicate by exchanging messages.
FIPA agents are situated into a specific environment, which should provide
the resources and services they need. In particular, FIPA defines directory
services for registering and discovering agents and services, and transport
services for delivering messages.
The agent directory service provides a location where agents can register
their descriptions and where they can search for other agents they want to
interact with. Basically, the agent directory service stores entries, where each
agent name is bound to one or more agent locators, i.e. transport addresses
where agents can be contacted. Message-based interfaces are also defined for
the creation and lifecycle management of deployed agents.
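A toy version of such a directory just binds agent names to lists of transport addresses; the method names and example identifiers below are invented for illustration.

```python
class AgentDirectory:
    """Toy FIPA-style agent directory: each agent name is bound to
    one or more agent locators, i.e. transport addresses where the
    agent can be contacted."""

    def __init__(self):
        self._entries = {}

    def register(self, agent_name, *locators):
        self._entries.setdefault(agent_name, []).extend(locators)

    def search(self, agent_name):
        return list(self._entries.get(agent_name, []))

    def deregister(self, agent_name):
        self._entries.pop(agent_name, None)
```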
The service directory service, instead, provides a location where agents and
other entities can register and discover descriptions for available services.
The entries stored in the service directory service bind a service, identified by
means of a unique name, to a service type, some service locators to
concretely access the service, and optional attributes. Compared to web
services directories and UDDI, the FIPA service directory service lacks the
so-called green pages service, where technical details and interfaces to
contact the service could be registered and discovered dynamically.
In FIPA agent systems, agents communicate by exchanging messages which
represent speech acts. Each message is structured as a tuple of key-value
pairs and is written in an agent communication language, such as FIPA ACL.
Its content, instead, is expressed in a content language, such as KIF or SL.
Ontologies, listed within the ontology slot of the message, can ground the
content to a semantic domain. The sender and the receivers of a message are
expressed as agent names, i.e. unique identifiers for agents.
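A message of this kind can be sketched as a plain dictionary of slots; the slot values below (agent names, ontology, SL-like content) are invented examples following the structure just described.

```python
# A FIPA ACL message sketched as key-value slots; all values are
# invented examples, not taken from any real platform.
acl_message = {
    "performative": "request",             # the speech act performed
    "sender": "buyer@platform-a",          # unique agent name
    "receiver": "seller@platform-b",
    "language": "fipa-sl",                 # content language
    "ontology": "auction-ontology",        # grounds content to a domain
    "content": "((action (sell (item book-42))))",
}

def speech_act(msg):
    """The performative slot names the speech act a message performs."""
    return msg["performative"]
```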
When a message is sent to a remote platform, it is encoded into the payload
of a transport message, using a representation appropriate for the particular
transport mechanism, for example an HTTP connection. The transport
message can also have an envelope, containing transport addresses of
involved agents and all relevant information about how to deliver the
message.
2.3 Agent-based grid services
Trying to overcome the strong differences among these approaches, a number
of research works are being directed toward the identification of a more
generic architecture, in particular, to allow the composition of heterogeneous
services provided on the web.
The DARPA CoABS Grid [7] leverages existing Java network technologies,
like RMI and Jini, and FIPA compliant [8] agent platforms for agent
messages, to build a grid infrastructure. One of the main aims of the project is
to demonstrate the value of grid concepts as a glue to achieve interoperability
among different multi-agent systems, including RETSINA [9] and OAA [10].
Interconnectivity among objects and other components is also supported.
In the SoFAR project [11,12], agents realize a grid system where they are
able to provide services. In particular, grid agents advertise the services they
provide through UDDI repositories and WSDL descriptions.
In [13], the authors deal with the composition of web services to implement
business processes. WSDL is used for the operational and behavioral
descriptions of services. BPEL4WS [14], WS-Coordination [15] and
WS-Transaction [16] are used for composing and choreographing services.
Instead, in [17], DAML-S [18] is preferred to other solutions to represent not
only properties and capabilities of web services, but even workflows of
compound services. In particular, DAML-S is argued to best fit QoS
issues, as these cannot be separated from the semantics and context of services.
In [19] extended UDDI registries are used for the implementation of a
QoS-based service discovery system. Each service provider is associated
with two descriptions: a functional interface, for the service invocation,
and a management interface, for the QoS attributes and performance
characteristics.
2.4 References
[1]
JADE Home Page, 2004. Available from http://jade.tilab.com.
[2]
Web Services Description Working Group Home Page, 2003.
Available from http://www.w3c.org/ws/.
[3]
SOAP Specifications
http://www.w3c.org/TR/soap/.
[4]
Home
Page,
2003.
Available
from
UDDI Home Page, 2003. Available from http://www.uddi.org.
[5]
WSDL Specifications Home
http://www.w3c.org/TR/wsdl20/.
Page,
2003.
Available
from
[6]
Open Grid Services Architecture Home Page, 2002. Available from
http:// http://www.globus.org/ogsa/.
[7]
CoABS
Home
http://coabs.globalinfotek.com.
[8]
Page,
2001.
Available
from
FIPA Home Page, 1996. Available from http://www.fipa.org.
[9]
K. Sycara, J.A. Giampapa, B.K. Langley, M. Paolucci. The
RETSINA MAS, a Case Study. In: Software Engineering for Large-Scale
Multi-Agent Systems: Research Issues and Practical Applications, A. Garcia,
C. Lucena, F. Zambonelli, A. Omicini, J. Castro, eds. Vol. LNCS 2603, pp.
232—250, 2003, Springer-Verlag, Berlin, Germany.
[10]
D.L. Martin, A.J. Cheyer and D.B. Moran. The Open Agent
Agents for service composition
17
Architecture: A Framework for Building Distributed Software Systems.
Applied Artificial Intelligence 13:91-128. 1998.
[11]
L. Moreau. Agents for the Grid: A Comparison for Web Services
(Part 1: the transport layer). In Proc. IEEE International Symposium on
Cluster Computing and the Grid, Berlin, Germany, 2002.
[12]
A. Avila-Rosas, L. Moreau, V. Dialani, S. Miles, X. Liu. Agents for
the Grid: a Comparison with Web Services (part II: Service Discovery). In
Proceedings of Workshop on Challenges in Open Agent Systems
(AAMAS02), pp. 52-56, 2002, Bologna, Italy.
[13]
F. Curbera, R. Khalaf, N. Mukhi, S. Tai, S. Weerawarana. The next
step in Web services. Communication of ACM, 46(1):29-34, 2003.
[14]
Business Process Execution Language for Web Services 1.1, 2002.
Available from http://www.ibm.com/developerworks/library/ws-bpel/.
[15]
Web Services Coordination 1.0, 2002. Available from http://www106.ibm.com/developerworks/library/ws-coor.
[16]
Web Services Transaction 1.0; 2002. Available from http://www106.ibm.com/developerworks/library/ws-transpec.
[17]
C. Patel, K. Supekar, Y. Lee: A QoS Oriented Framework for
Adaptive Management of Web Service Based Workflows. Proc DEXA 2003,
pp. 826-835, 2003.
[18]
DALM-S
Home
http://www.daml.org/services/.
Page,
2003.
Available
from
[19]
R. Al-Ali, O. Rana, D. Walker, S. Jha, S. Sohail. G-QoSM: Grid
Service Discovery Using QoS Properties. Computing and Informatics
Journal, 21 (4), 2002.
[20]
BeanShell
Home
Page,
2004.
Available
18
Models and Tools to Manage Security in Multiagent Systems
from
http://www.beanshell.org.
[21]
Drools Home Page, 2004. Available from http://www.drools.org.
[22]
C.L. Forgy. Rete: A Fast Algorithm for the Many Pattern / Many
Object Pattern Match Problem. Artificial Intelligence 19(1), pp. 17-37, 1982.
[23]
E.J. Friedman-Hill. Jess, the Java Expert System Shell. Sandia National Laboratories, 2000. Available from http://herzberg.ca.sandia.gov/jess.
[24]
A. Poggi, G. Rimassa, M. Tomaiuolo. Multi-user and security support for multi-agent systems. In Proc. of WOA 2001, pp. 13-18, Modena, Italy, 2001.
[25]
D. Chakraborty, A. Joshi. Dynamic Service Composition: State of the Art and Research Directions. December 2001.
[26]
F. Casati, S. Ilnicki, L.J. Jin, V. Krishnamoorthy, M.C. Shan. Adaptive and Dynamic Service Composition in eFlow. HPL-2000-39, March 2000.
[27]
S.R. Ponnekanti, A. Fox. Sword: A Developer Toolkit for Building Composite Web Services. WWW 2002.
[28]
Ninja. UC Berkeley Computer Science Division, 1999. Available from http://ninja.cs.berkeley.edu/overview.
[29]
E. Kiciman, A. Fox. Separation of Concerns in Networked Service Composition. May 2001.
[30]
E. Kiciman, A. Fox. Using Dynamic Mediation to Integrate COTS Entities in a Ubiquitous Computing Environment. 2000.
[31]
B. Raman, R.H. Katz. Load Balancing and Stability Issues in Algorithms for Service Composition. IEEE Infocom 2003.
3 Agentcities and openNet
Agentcities is a network of FIPA compliant agent platforms that constitute a
distributed environment to demonstrate the potential of autonomous agents.
One of the aims of the project is the development of a network architecture to
allow the integration of platforms based on different technologies and
models. It provides basic white pages and yellow pages services to allow the dynamic discovery of the hosted agents and the services they offer. An important outcome is the exploitation of the ability of agent-based applications to adapt to rapidly evolving environments. This is particularly appropriate in dynamic societies where agents act as buyers and sellers, negotiating their goods and services and composing simple services offered by different providers into new compound services.
One of the main reasons to use autonomous software agents is their ability to interact, showing useful social behaviours and rapidly adapting to changing environmental conditions. But the most interesting applications require large and open societies of agents to be in place, where collaborating and competing peers are able to interact effectively. In a context where a number of possible partners or competitors can appear and disappear, agents can exploit their ability to adapt to evolving social conditions, building and maintaining their networks of trust relations within a global environment.
The greatest effort to create such a large and open society of autonomous software agents is Agentcities [1]. The project, which started on the 1st of July as a research project funded by the European Commission, developed a network of FIPA-compliant agent platforms spanning the whole globe. A number of complex agent-based applications are currently deployed on the network and, thanks to their compliance with the standards defined by FIPA (Foundation for Intelligent Physical Agents), they are able to communicate in an effective way and to reciprocally exploit the services they provide to the community.
In fact, to allow the integration of different applications and technologies in
open environments, high level communication technologies are needed. The
project largely relies on semantic languages, ontologies and protocols in
compliance with the FIPA standards [7].
One of the main achievements has been the demonstration of a working application that, relying on the marketplace infrastructure deployed on the Agentcities network, was able to dynamically discover and negotiate the services offered by different service providers. The collected services were then composed into a new compound service to be offered back on the marketplace, eventually allowing other agent-based applications to dynamically discover it and, if useful to achieve their own goals, add it to their plans of action.
3.1 Network
The Agentcities network [13] grows around a backbone of 14 agent platforms, mostly hosted in Europe. These platforms are deployed as a 24/7 testbed, hosting the services and the prototype applications developed during the lifetime of the project. The backbone is an important resource for other organizations, even external to the project, which can connect their own agent-based services, making the network really open and continuously evolving.
Figure 3.1. The Agentcities backbone
Currently, the Agentcities network counts 160 registered platforms, of which 80 have shown activity in the last few weeks. The platforms are based on more than a dozen heterogeneous technologies, including Zeus, FIPA-OS, Comtec, AAP, Agentworks and Opal. More than two thirds of them are based on JADE [9] and its derived technologies, such as LEAP and BlueJADE.
3.2 Agentcities Network Architecture
The structure of the Agentcities network architecture version 1.0 consists of a
single domain (“the network”) with a collection of FIPA 2000 Agent
Platforms deployed in it. All these agent platforms are visible to one another
and can exchange messages using HTTP or IIOP communication services.
There is a collection of FIPA 2000 agents, each of which is running on one of
the agent platforms in the network, and a collection of services, each of
which is offered by one or more agents in the network. The central node,
hosting the network services for the whole network, represents the critical
point of the entire architecture. As a matter of fact, the components and network services supporting the global network are configured in the form of
a star topology with core directory and management facilities hosted at a
single root node. This leads to problems with robustness (single point of
failure) and scalability (potential overloads at the central point).
Figure 3.2. Agentcities Network generic architecture
The network services, offered by the central node, are: Agent Platform
Directory, Agent Directory and Service Directory. Each of these services is
accessible over the Internet using both standard FIPA interfaces (access for
agents) and web interfaces (access for humans). All services are publicly
available and can be used by both project partners and third parties (non
Agentcities.RTD partners) alike. Figure 3.2 shows the generic architecture
for Agentcities network directory services (platforms, agents and services).
For each type of directory a polling agent uses a data source of registered
entities (platforms, agents or services) and regularly polls the instances
found. The resulting information is accessible through agent and web
interfaces.
In the following, each of these services is described in terms of its functions, interfaces and other possibly useful criteria.
3.2.1 Agent Platform Directory
This service provides mechanisms for finding which platforms exist in the
network and which are currently active. It is based on a database of registered
platforms. The content of the directory is a single entry per platform that
includes all the data registered by users for the platform. The directory
contains an active component that regularly contacts registered platforms to
check if they are running or not. Agents are able to access the list of
platforms currently registered and the list of platforms currently running by
sending queries to a dedicated Global Agent Platform Directory agent.
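The behaviour of this active component can be sketched as a simple polling loop (a minimal illustration; the `ping` callable and the entry fields are hypothetical, not the actual Agentcities implementation):

```python
import time

def poll_platforms(registry, ping, now=time.time):
    """Contact every registered platform and record its liveness.

    `registry` maps platform names to entry dicts; `ping` is a
    callable returning True when the platform answers (hypothetical).
    """
    for name, entry in registry.items():
        entry["running"] = bool(ping(name))
        entry["last-checked"] = now()
    return registry

# One entry per registered platform, as in the directory.
registry = {
    "jade.example.org": {"owner": "partner-a"},
    "zeus.example.org": {"owner": "partner-b"},
}
# Pretend only the first platform answers the ping.
polled = poll_platforms(registry, ping=lambda n: n.startswith("jade"))
running = [n for n, e in polled.items() if e["running"]]
```

A query to the Global Agent Platform Directory agent would then be answered from `registry` (all registered platforms) or from `running` (currently active ones).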
3.2.2 Agent Directory
This service provides mechanisms for finding which agents are currently registered and active in the network. More precisely, it lists the agents registered on known platforms, i.e. on the platforms currently registered in the database of the Agent Platform Directory service. The Agent Directory is therefore directly dependent upon the Agent Platform Directory and uses it as a data source. Even though the objective is to have information only about agents currently registered and active in the system, in practice the information is not always up to date. The freshness of this information depends on the time interval between two queries to the local AMS services on known platforms.
3.2.3 Service Directory
This is the most interesting service from the point of view of service composition.
The Service Directory provides mechanisms for finding services that are
currently being provided in the network; it works in the same way as the
Agent Directory.
The directory contains entries of the type defined in the FIPA Agent
Management specification [7] for entries in the FIPA-DF yellow pages
service. The service description definition is shown in Table 3.1.
Frame: service-description
Ontology: FIPA-Agent-Management

Parameter    Description                        Presence   Type             Reserved Values
name         The name of the service.           Optional   String
type         The type of the service.           Optional   String           fipa-df, fipa-ams
protocol     A list of interaction protocols    Optional   Set of String
             supported by the service.
ontology     A list of ontologies supported     Optional   Set of String    fipa-agent-management
             by the service.
language     A list of content languages        Optional   Set of String
             supported by the service.
ownership    The owner of the service.          Optional   String
properties   A list of properties that          Optional   Set of property
             discriminate the service.

Table 3.1. FIPA definition of a service description. Replicated from [FIPA00023].
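As an illustration, a directory entry of this shape can be represented and filtered as plain key/value data (a hypothetical helper, not the FIPA API; set-valued parameters match by membership):

```python
def matches(description, **criteria):
    """Return True when every given criterion is satisfied by the
    description. Set-valued parameters (protocol, ontology, language)
    match when the requested value is contained in the set."""
    for key, wanted in criteria.items():
        value = description.get(key)
        if isinstance(value, (set, list, tuple)):
            if wanted not in value:
                return False
        elif value != wanted:
            return False
    return True

# A service-description-like entry (illustrative values).
entry = {
    "name": "trade-house",
    "type": "marketplace",
    "protocol": {"fipa-contract-net", "fipa-request"},
    "ontology": {"trading-ontology"},
    "language": {"fipa-sl"},
    "ownership": "agentcities",
}
```

A yellow-pages search then reduces to keeping the entries for which `matches(entry, type=..., protocol=...)` holds.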
3.2.4 Drawbacks
The deployed network services, based on the network architecture 1.0,
proved to be useful tools and provided an important live update on the state
of vital features of the network. Despite this, a number of issues needed to be
addressed in order to improve the network functionality. A first issue concerns the centralization and the closely related problems of robustness and scalability. A second issue deals with service independence: the Agent and Service Directory services depend upon the Platform Directory service, meaning that a failure in the latter would cause a failure in both of the former. A further issue regards the Agent and Service Directory service agents, which need only register on their local platform to appear in the global directories. This fact is not known to the AMS and DF services on the local platforms, since they have no way of knowing that their local information is being replicated elsewhere. Furthermore, agents and services cannot be explicitly registered in the global directories; they can only be registered locally. Finally, registering a platform in the system requires a human to access a web site. These and other issues drove the partners of the project to design and implement a new version of the network architecture.
3.3 openNet Network Architecture
While the FIPA 2000 standard will continue to be the backbone of the Agentcities network, the new version of the architecture aims to generalise the network model, making it possible to employ or link to other technologies such as web services, Grid services or peer-to-peer networks. This is achieved by adopting an abstract model based on the notion of actors and services and by mapping it to a number of different implementation technologies. Furthermore, the changes carried out make it possible to flexibly specify and describe detailed structural information about deployed networks of actors and services. This is achieved by adopting a model based on domain structures that can be used to describe and implement arbitrary organisational structures. Domains are instantiated as directories, represented by collections of directory services that implement domain policies.
The Agentcities/openNet network depends upon data exchange between network nodes in order to maintain a live view of which platforms are available, what characteristics they have and which are up and running at any one time. The first generation of this service was based on an extremely simple “ping protocol” which provided notification only of the liveness status of the queried platform.
Recently, a number of more advanced optional protocols/mechanisms
have been specified, to allow network nodes to discover more about
one another and for network information services to provide a richer
picture of the network state.
3.3.1 Platform Service
The solution is based on the architecture of Figure 3.3, which shows the following services:
• A Platform Service (PS), to be deployed on Agentcities network nodes to provide information on the local platform state and actively test other nodes in the network. A PS effectively represents the agent platform to the remainder of the Agentcities network.
• A Network Service (NS), to aggregate and display network information data.
Figure 3.3. Network Services in the Agentcities network. Platforms P1 and P2 both host PS services, which share information about the platform with the remainder of the network. The PS service maintains a registry of existing platforms and the NS aggregates network data to provide views of the network.
In terms of communication, the system requires the following protocols:
• A simple Ping Protocol, which includes a challenge-response to test whether a particular agent is reachable in the network.
• A Platform Profile Protocol, to obtain and share additional platform information between platforms, beyond the current FIPA Platform Profile.
• A Network Testing Protocol, to request and query test results for tests which have been carried out between platforms.
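A challenge-response exchange of the kind used by the Ping Protocol can be sketched as follows (the message layout and the digest construction are illustrative, not the specified wire format):

```python
import hashlib
import os

def make_challenge():
    # The initiator sends a random nonce to the target agent.
    return os.urandom(16).hex()

def answer(challenge, agent_name):
    # The target proves it processed this specific challenge
    # by binding its own name to the received nonce.
    return hashlib.sha256((challenge + agent_name).encode()).hexdigest()

def verify(challenge, agent_name, response):
    # The initiator recomputes the expected digest; an answer
    # produced for a different challenge will not match.
    return response == answer(challenge, agent_name)

challenge = make_challenge()
response = answer(challenge, "ps@jade.example.org")
```

The point of the challenge is that a fresh nonce is used on every probe, so a cached or replayed answer does not count as proof of liveness.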
3.3.2 Platform and Agent Directory Service
The system also relies on the availability of a Directory Service (DS), which is the default mechanism for announcing platforms and agents in the network and for searching for platforms and agents registered in the network (or in part of it). The directory is structured as a DNS-style hierarchy, with explicit delegation of naming authority for subdomains and federated directory components. The main idea is a tree structure, with the hierarchy roughly corresponding to the organizational structure, and with names using "." as the character marking the boundary between hierarchy levels. Each node and leaf of the tree corresponds to a resource set (which may be empty). The system makes no distinction between the uses of interior nodes and leaves. The domain of all Agentcities platforms is made up of the platforms listed in the Agentcities Platform Directory (that is, in the root and in all subdirectories which come into being via delegation).
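The dot-separated hierarchy can be modelled as a tree in which every node, interior or leaf, holds a (possibly empty) resource set (an illustrative sketch; names and resources are invented):

```python
class DirectoryNode:
    """A node in the DNS-style directory tree; interior nodes and
    leaves are treated identically, each holding a resource set."""
    def __init__(self):
        self.resources = set()
        self.children = {}

    def register(self, name, resource):
        # Names like "parma.agentcities": the rightmost label is the
        # highest naming authority, so walk labels right to left.
        node = self
        for label in reversed(name.split(".")):
            node = node.children.setdefault(label, DirectoryNode())
        node.resources.add(resource)

    def lookup(self, name):
        node = self
        for label in reversed(name.split(".")):
            node = node.children.get(label)
            if node is None:
                return set()
        return node.resources

root = DirectoryNode()
root.register("parma.agentcities", "platform-parma")
root.register("agentcities", "root-directory")
```

Delegation maps naturally onto this structure: handing a subtree (e.g. everything below "parma.agentcities") to another organization delegates naming authority for that subdomain.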
3.3.3 Service Discovery
Another component, which is currently under development and which will make the Agentcities network more robust, is a DF-like agent based on a JXTA peer-to-peer network [18].
JXTA technology is a set of open protocols allowing any connected device on the network to communicate and collaborate in a peer-to-peer fashion.
JXTA peers can form peer groups, which are virtual networks where any peer can seamlessly interact with other peers and resources, whether they are connected directly or through intermediate proxies.
In [17] a set of new components and protocols is described to allow the implementation of a DF-like service on a JXTA network. These include:
• Generic Discovery Service – a local directory facilitator, taking part in the peer-to-peer network and implementing the Agent Discovery Service specifications.
• Agent Peer Group – a child of the JXTA Net Peer Group that must be joined by each distributed discovery service.
• Generic Discovery Advertisements – to handle agent or service descriptions, for example FIPA df-agent-descriptions.
• Generic Discovery Protocol – to enable the interaction of discovery services on different agent platforms. It is a request/response protocol to discover advertisements, based on two simple messages, one for queries and one for responses.
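The two-message exchange of the Generic Discovery Protocol can be sketched as simple query and response payloads (illustrative structures, not the actual JXTA advertisements or message syntax):

```python
def make_query(query_id, constraints):
    # Query message: which advertisements are we looking for?
    return {"performative": "query", "id": query_id,
            "constraints": constraints}

def handle_query(query, advertisements):
    # Response message: all local advertisements that satisfy
    # every constraint of the query, echoing the query id.
    hits = [ad for ad in advertisements
            if all(ad.get(k) == v
                   for k, v in query["constraints"].items())]
    return {"performative": "response", "id": query["id"],
            "results": hits}

# Advertisements held by one discovery service (invented examples).
ads = [
    {"name": "venue-finder", "type": "booking"},
    {"name": "trade-house", "type": "marketplace"},
]
reply = handle_query(make_query("q1", {"type": "booking"}), ads)
```

Propagating `make_query` to neighbouring peers and merging their `results` lists is exactly the federated, peer-to-peer search style described above.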
3.4 Service Composition
The main rationale for using agents is their ability to adapt to rapidly evolving environments while still achieving their goals. In many cases, this can only be accomplished by collaborating with other agents and leveraging the services provided by cooperating agents. This is particularly true when the desired goal is the creation of a new service to be provided to the community, as this scenario often calls for the composition of a number of simple services into the desired compound service.
Figure 3.4. Event Organizer scenario.
In the following pages we will show how the different components hosted on
the Agentcities network can be used to orchestrate the composition of simple
services to build a new compound service.
Figure 3.4 depicts the reference scenario of a demonstration held at the end of
the project, when a number of individual applications deployed by the
different partners were shown to work in close integration to achieve this
goal.
3.4.1 Event Organizer
The Event Organizer is an agent-based prototype application showing the results that can be achieved using the services provided by the Agentcities project. It allows a conference chair to organize an event, booking all needed venues and arranging all needed services, and then sell the tickets for the new event.
Using the web interface of the Event Organizer, its user can list a set of
needed services, fixing desired constraints on each individual service and
among different services. The global goal is then split into sub-goals,
assigned to skilled solver agents [2].
The Event Organizer uses the marketplace infrastructure deployed on the Agentcities network to search for suitable venues. These are matched against cross-service constraints and, if a match is found, a proper solution is proposed to the user as a list of services that allow the arrangement of the event. As a matter of fact, the task of selecting services that satisfy all the constraints is distributed between the Trade House, which checks the constraints regarding individual services, and the Event Organizer, which instead checks the constraints that link the features of two different services.
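This split can be illustrated by partitioning constraints according to how many services they mention (a hypothetical representation of constraints, not the actual Event Organizer data model):

```python
def partition_constraints(constraints):
    """Single-service constraints can be checked by the Trade House
    during the search; constraints linking two different services
    stay with the Event Organizer, which sees the whole solution."""
    local, cross = [], []
    for c in constraints:
        (cross if len(c["services"]) > 1 else local).append(c)
    return local, cross

# Invented example: a venue constraint and a cross-service constraint.
constraints = [
    {"services": ["venue"], "test": "capacity >= 200"},
    {"services": ["venue", "catering"], "test": "same city"},
]
local, cross = partition_constraints(constraints)
```

The design choice mirrors the text: pushing single-service constraints into the marketplace search prunes candidates early, while cross-service constraints can only be evaluated once candidate combinations exist.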
The selected services are then negotiated on the marketplace with their
providers and a list of contracts is proposed to the user for final acceptance.
The negotiation process involves the servant agents, representing the buyer
and the seller, hosted on the Trade House. If successful, the negotiation ends
suggesting a point of balance between the contrasting interests of the buyer
and the seller. It can take into account multiple attributes of the traded good,
for example the price and the period of booking.
Finally, after the new event has been successfully organized and all contracts have been accepted and signed, the tickets for it can be sold, once again using the marketplace infrastructure.
The whole process requires the cooperation of a number of partners. Each
one of them can exploit the directory services of the Agentcities network to
dynamically discover the location of the others.
1. The Event Organizer directly interacts with a Trade House to search for venues and negotiate selected services.
2. Other agent-based applications, such as the Venue Finder and the SME Access, are responsible for offering goods on the Trade House and for negotiating them on behalf of their users.
3. A Banking Service takes care of managing the bank accounts of the involved partners, securing all requests against tampering and eavesdropping.
4. An Auction House is used to create auctions and sell the tickets of the new event.
The interesting part is that these tickets are available to other agent-based applications. In the integrated demonstration staged at the end of the project, an Evening Organizer, helping its user to arrange an evening out, for example by booking a restaurant and buying the tickets for a concert, was able to discover the new event and bid for some tickets on the Auction House.
3.4.2 Trade House
The Trade House is an agent-based application that allows the advertisement and negotiation of products. It is not limited to a fixed set of known products, but can manage any kind of tradeable goods described through a custom ontology. In this sense, we can say that the Trade House is product-independent, since it can load the ontologies that describe new products at run time. In fact, whenever an external user agent advertises a product defined through a new ontology, this is dynamically added to the system. The trading engine supports the negotiation of different attributes at the same time, trying to mediate the interests defined explicitly by buyers and sellers.
The main features of the Trade House are:
34
Models and Tools to Manage Security in Multiagent Systems
• Ontology independence – The Trade House supports dynamically loaded, user-defined ontologies, both in advertising and in trading. Thus, agents are not required to explicitly map their ontology into a different one already known by the system, but can use their own. Doing so, the Trade House is able to offer its services for any kind of product or, more generally, for any kind of concept defined by means of an ontology. The user-defined ontologies are indeed published in the Agentcities Ontology Server.
• Constraint-based search – Searches into the product repository are done through constraints, which can be defined for each product attribute. Thus, the search process might be considered as a matchmaking process, since the obtained results are bounded by user-defined constraints.
• Federation – Each instance of the Trade House can be federated either with other instances of the Trade House or with instances of the Auction House. Through federation, houses can share their product repositories to give users access to a wider, distributed market. This federated search works in a peer-to-peer fashion: each house forwards search requests to its neighbours and collects the obtained results, from which the user agent can decide to join another house.
• Multi-attribute negotiation – The Trade House contains a trading engine which can adjust the properties of a product to maximize the satisfaction of both the seller and the buyer. Such a setting is evaluated in the intersection of the interest ranges defined by both parties, weighted with their preferences. A range and a preference are required for each attribute agents want to negotiate on.
• Servant agents – In order to ease the interaction with the Trade House, user agents can customize a servant agent by defining their own negotiation strategies and their own interests. Then, most of the required interactions between user agents and the system are delegated to their servant agent, reducing the complexity of usage on the customer's side.
3.4.3 Auction House
The Auction House is an agent-based service for businesses that is designed
to allow for a broad range of typical auction-style trading activities.
The main features of an Auction House are the following:
• Concurrent auctions – Users of the Auction House may concurrently create, participate in, and bid on multiple auctions of different types, including English, Dutch and Vickrey auctions.
• Bidding strategies – The Auction House provides different parameterized bidding strategies, appropriate for each type of auction.
• History functions – Users, both auctioneers and bidders, are allowed to inspect their past or current activities on auctions, making the Auction House even more convenient to use.
• Constraint-based search – Any user that is registered at an Auction House may search for items that are currently traded in the Auction House as commodities. Users may search for trading opportunities for a given type of commodity, either locally at the Auction House they are actually registered at, or globally in any Auction or Trade House that is connected to the local one via the Internet.
• Integrated payment – The Auction House provides an integrated payment service to its users, exploiting the Agentcities virtual Banking Service. Payment calculations include the price of the traded item, the Auction House registration fee, and the warranty.
• Ontology independence – Users may upload their individual ontology for describing the items to be auctioned in the Auction House. These ontologies are published and available for use at the Ontology Server.
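A parameterized bidding strategy of the kind mentioned above can be sketched for an English auction (an illustrative toy model; the bidders, limits and increment are invented, and the real Auction House strategies are not specified here):

```python
def english_bid(current_price, my_limit, increment):
    """Bid the minimum admissible raise, but never exceed the private
    limit; return None to drop out of the auction."""
    nxt = current_price + increment
    return nxt if nxt <= my_limit else None

price, winner = 50.0, None
bidders = {"a": 80.0, "b": 65.0}  # private limits (hypothetical)
active = True
while active:
    active = False
    for name, limit in bidders.items():
        if name == winner:
            continue  # the current leader does not outbid itself
        bid = english_bid(price, limit, increment=5.0)
        if bid is not None:
            price, winner = bid, name
            active = True
```

The auction stops once no non-leading bidder can raise: here bidder "a" wins at 65.0, one increment above the point where "b" hits its limit. Dutch and Vickrey auctions would need different, but similarly parameterized, strategies.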
3.4.4 Ontology Service
The purpose of building an ontology server is to provide a common knowledge base in an agent community, whereby a better understanding can be achieved amongst agents with minimum human involvement. Ideally, agents can communicate with each other without the human intervention that is currently necessary to interpret the semantics of messages between them. The task can be divided into three significant parts. First, using a web ontology language to formalise domains by defining classes and properties of those classes, defining individuals and asserting properties about them. Second, supporting reasoning about these classes and individuals to the degree permitted by the formal semantics of the language. Last but not least, building a management system that manages both ontologies and users. The management system plays an important role in building knowledge bases that are shared and well maintained by users. The idea is to set up an open collaborative environment that shows the following features:
• Ontology language – The ontology language used in the current version is DAML+OIL [4]. The Jena library (HP Labs) is used as a base to maintain the ontology repository. Some reasoning functions, such as querying relationships between entities amongst ontologies and consistency checking before asserting or retracting a relationship, are also implemented.
• Import statements – Ontologies are centrally maintained and are linked to each other through ontology import statements. With the import mechanism, a distributed knowledge base hierarchy is built, so that a new ontology can be plugged in easily and inherit the needed general knowledge base instead of building it totally from scratch. The imported ontology can also be treated as the intersection of the ontologies that import it. In order to maintain the consistency of the existing knowledge base, an assertion or a retraction that violates consistency is disallowed by the system.
• Role-based access control – User access control is introduced to build a collaborative environment. Each ontology is associated with an access group managed by its owner. An ontology owner can configure the access rules for different kinds of users and can assign users to the ontology's group. Every user is rated against some criteria, such as the number of successful assertions that have been introduced by the user. The rating of a user can be used automatically by the program to judge whether the user can be added to a group. An ontology can therefore be set up as “open”, which means a standard for joining the ontology's group is specified by the owner. Any user who satisfies the standard gains access to the open ontology. Of course, any contribution to the ontology will still be checked for consistency with the existing knowledge base to secure the system.
• Interface – An agent is associated with the Ontology Server to communicate with other agents through the FIPA ACL language. A set of APIs is provided for calling the Ontology Server's functionality. An interactive web access interface written in JSP is also built to demonstrate those functionalities.
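The consistency check performed before accepting an assertion can be illustrated with a toy knowledge base of disjoint classes (a drastic simplification of the actual DAML+OIL reasoning; the class names are invented):

```python
class KnowledgeBase:
    """Toy knowledge base: individuals typed by class, with declared
    disjoint class pairs; assertions that would make the knowledge
    base inconsistent are refused instead of being applied."""
    def __init__(self, disjoint):
        self.disjoint = {frozenset(pair) for pair in disjoint}
        self.types = {}  # individual -> set of asserted classes

    def assert_type(self, individual, cls):
        current = self.types.setdefault(individual, set())
        for existing in current:
            if frozenset((existing, cls)) in self.disjoint:
                return False  # violates disjointness: rejected
        current.add(cls)
        return True

kb = KnowledgeBase(disjoint=[("Venue", "Person")])
ok = kb.assert_type("teatro-regio", "Venue")
bad = kb.assert_type("teatro-regio", "Person")
```

Rejecting the offending assertion, rather than repairing the knowledge base afterwards, is the policy the text describes: the shared knowledge base never enters an inconsistent state.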
3.4.5 SME Access
SME Access is an agent generation and hosting service for businesses,
specifically designed to allow them to deploy a presence on the Agentcities
network. The system is primarily split into two elements: the web interface
and the agent server.
• Web interface – The web interface is a collection of Java Server Pages, backed by a Struts architecture of form beans and servlets. The JSP pages provide user registration and login, as well as agent configuration and agent control through the user's page. The information required to generate an agent is stored within the scope of the web application and, when a request is made to generate the agent, that data is passed via RMI to the agent server element.
• Agent server – The agent server uses the Zeus toolkit as the basis of most of its operation. Its primary purpose is the management of the agents, allowing them to be generated, run, stopped and deleted, all via RMI invocations. When the agents are run, they are linked to pre-existing classes that provide the behaviour associated with their service type. All agent management functions are accomplished through the web interface, but the agents, once run, will operate on the server without any interaction from the user. In the future, it is hoped that the agents will be able to notify the user of events that happen, possibly through linking to a database the user already operates. The service permits registered users to configure a collection of agents to represent aspects of their business and then run them, whereupon they will register with a selected agent platform in order to advertise their presence to the agents there.
• Agent creation – The application allows businesses to create agents according to templates within the application. These templates are compliant with instances of the same agent type on the Agentcities network. Currently, there are compatible templates for restaurants, hotels and cinemas. Therefore, any agents that can query those service types will be able to interact with the SME Access instances of them. All the user needs to provide is the information the agent will respond with and will base its decisions on. This information is collected through the web interface of the application, which is an easy-to-use collection of forms.
• Agent deployment – Once the agents are created, they can be generated and deployed to the hosting platform, where they will run and make themselves available for communication with other agents. Their status can be queried from the web interface, and their details changed.
• Advertisement – In order to facilitate service discovery, the agents can register with any of the directory facilitators on the Agentcities network, and will write out DAML-S service descriptions [4] to an accessible location, creating both service instance descriptions and service profile descriptions. A process model has been written for each service type and is placed in the same location. Therefore, remote agents can discover the location of the instances and determine what service they provide. Additionally, some of the templates provide compatibility with the Trade House system and can register their goods with it, providing another avenue for customers to reach the business.
3.4.6 Venue Finder
The Venue Finder Service provides B2B services to Market Service domains and personal agents. The service is modelled on two levels: the first allows venue-based service domains to create instances of the individual venues they would like to publish and sell, while the second allows venue-procuring services to access the system for querying, negotiating and booking venues. The general motivation for this work is to provide venue-finder services to support the expanding use of third-party venues for corporate (business) entertainment of staff and clients, and to support private functions such as weddings and birthday celebrations.
• Heterogeneous venues – The Venue Finder is able to semantically
convert heterogeneous venues into a common communicable
interactive system and additionally offer its services to the Trade
House. The venues are generated by converting information
regarding venues into a general ontology.
• Advertisement on the Trade House – When the venues are loaded into
the venue-finder domain, the service also attempts to register itself
with the Trade House. It then publishes and overloads its ontology
over the marketplace ontology, to enable negotiation on offers
or contracts placed on the marketplace, while simultaneously providing
semantic information about the actual venues through a generic
venue-finder ontology.
• Integrated payment – In addition, the venue finder provides the
Trade House with payment information that enables the Payment
Service to complete a sale and purchase agreement.
• Match-making routines – Apart from relying on the Trade House
infrastructure to advertise and negotiate its venues, the finder can
also provide information about venues to support service selection and
aggregation through match-making routines. In fact, the Venue
Finder is able to directly match the constraints of venues against the
requirements of the Event Organizer.
3.4.7 Banking Service
The expression “banking services” refers to the set of processes and
mechanisms that a virtual agent-based banking institution offers to the agents
accessing the Agentcities marketplace. This mainly includes:
• electronic payment service – for enabling agents to make and
receive side payments;
• account management service – for creating, maintaining and closing
bank accounts.
The banking service design consists therefore of two main sub-sets that are
described in the following as two distinct frameworks, even if they both rely
upon the same unique ontology for banking services.
The payment service includes the set of operations and mechanisms allowing
the transfer of money between distinct accounts either under the control of
the same bank or under the control of different banks. With the term “bank”
we refer here to a virtual banking institution.
The account management service groups the set of actions and operations
needed to manage bank accounts. It is possible to:
• open a bank account;
• dynamically verify the information about an account;
• close an existing bank account.
All these operations require the agent requesting the service to be an
“authorized” entity. Authorization and authentication of agents
accessing the banking services have been implemented by integrating the
service with the security library and the infrastructure provided on the
Agentcities network.
3.4.8 Security Service for e-Banking
Services being developed in the Agentcities project require, and would
benefit from, security. In particular, security mechanisms have been
developed to protect access to the Banking Service, supporting core
security requirements such as message confidentiality, message integrity
and agent authentication. Additionally, an authorisation model is closely
linked to the authentication model using simple policies or profiles. In
order for an e-banking institution to support payment mechanisms for the
purchase and sale of goods, an appropriate level of security is needed to
secure e-commerce transactions within the Agentcities network.
Specifically, both the electronic payment and the account management
services of the Banking Service are supported by the design and
implementation of a distributed security service.
The services are developed using a modular approach where security
processes and behaviours are separated to support ease of integration between
various agent-based systems. The system provides two key functionalities:
• A Certification Authority (CA) service which provides credential
management services. It is published on an agent server hosted on
the Agentcities network.
• A plug-in for agent security management. It is installed on clients
and the Banking Service to provide the necessary security support.
The security service acts as a plug-in for agent systems to
communicate securely by offering the following features: end-to-end
confidentiality, mutual authentication, data integrity, and session
and fingerprint management.
3.5 Conclusions
Agentcities is certainly the greatest effort to create a global network of
FIPA-compliant agent platforms. It is giving a great impulse toward the
openness and interoperability of different agent platforms and agent-based
applications, paving the way for a large and distributed society of agents.
In this context, where new cooperating and competing peers can rapidly
appear, agents can show their ability to adapt to evolving environmental
conditions, creating and maintaining their network of social relations.
Exploiting the marketplace infrastructure deployed on the Agentcities
network, it is possible to dynamically discover and negotiate some services
offered by different providers.
In particular, an application has been developed which can search for and
book all the venues required to organize an event, for example a concert or a
conference, in a service-composition process. Once the event is organized, it
can be again advertised as a compound service on the marketplace and
eventually used by other agents to plan their actions and achieve their own
goals.
3.5.1 References
[1] Agentcities.RTD, reference IST-2000-28385. http://www.agentcities.org/EURTD/.
[2] Bergenti F., Botelho L. M., Rimassa G., Somacher M. A FIPA compliant Goal Delegation Protocol. In Proc. Workshop on Agent Communication Languages and Conversation Policies, AAMAS, Bologna, Italy, 2002.
[3] Castelfranchi C., Falcone R. Socio-Cognitive Theory of Trust. http://alfebiite.ee.ic.ac.uk/docs/papers/D1/ab-d1-cas+fal-soccog.pdf.
[4] DAML. http://www.daml.org/.
[5] Durfee E. H. Coordination of Distributed Problem Solvers. Kluwer Academic Publishers, Boston, 1988.
[6] Finin T., Labrou Y. KQML as an agent communication language. In J. M. Bradshaw (ed.), Software Agents, MIT Press, Cambridge, MA, 1997, pages 291-316.
[7] FIPA spec. XC00023H. FIPA Agent Management Specification. http://www.fipa.org/specs/fipa00023/.
[8] FIPA spec. XC00037H. FIPA Communicative Act Library Specification. http://www.fipa.org/specs/fipa00037/.
[9] JADE Home Page, 1999. Available at http://jade.cselt.it.
[10] Jennings N. R., Faratin P., Lomuscio A. R., Parsons S., Sierra C., Wooldridge M. Automated Negotiation: Prospects, Methods and Challenges. In International Journal of Group Decision and Negotiation, 10(2), pages 199-215, 2001.
[11] Maes P. Agents that reduce work and information overload. Communications of the ACM, vol. 37, no. 7, pages 30-40, July 1994.
[12] Pitt J., Kamara L., Artikis A. Interaction Patterns and Observable Commitments in a Multi-Agent Trading Scenario. http://alfebiite.ee.ic.ac.uk/docs/papers/D1/abd1-pitkamart-ipoc.pdf.
[13] The Agentcities Network. http://www.agentcities.net/.
[14] Wong H. C., Sycara K. A taxonomy of middle-agents for the internet. In Proc. Agents-1999 Conference on Autonomous Agents, 1999.
[15] Yuan Y., Liang T. P., Zhang J. J. Using Agent Technology to Support Supply Chain Management: Potentials and Challenges. Michael G. DeGroote School of Business Working Paper No. 453, October 2001.
[16] Zlotkin G., Rosenschein J. S. Mechanisms for Automated Negotiation in State Oriented Domains. In Journal of Artificial Intelligence Research, 5, pages 163-238, 1996.
[17] FIPA, 2003. http://www.fipa.org/.
[18] JXTA, 2003. http://jxta.org/.
4 Public-key infrastructures
Public-key cryptography is the basis for digital signatures, and it is
founded on public/private key pairs. The scalability of this technology is
assured by the fact that only the private component of the key pair must be
protected, while the public component can be distributed on public networks,
thus allowing interested parties to use security services.
The idea itself is as old as the paper of Diffie and Hellman [16], which in
1976 described, for the first time, a public key cryptographic algorithm.
Given a system of this kind, the problem of key distribution is vastly
simplified. Each user generates a pair of inverse transformations, E and D,
at his terminal. The deciphering transformation, D, must be kept secret but
need never be communicated on any channel. The enciphering key, E, can be
made public by placing it in a public directory along with the user's name
and address. Anyone can then encrypt messages and send them to the user,
but no one else can decipher messages intended for him. [16]
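The key-agreement scheme of the 1976 paper can be illustrated with a toy numeric example. The parameters below (a 23-element group and tiny private exponents) are chosen only for readability and are trivially breakable; real deployments use groups of 2048 bits or more.

```python
# Toy Diffie-Hellman key agreement (illustrative only: insecure parameters).
p = 23   # public prime modulus
g = 5    # public generator

# Each party keeps a private exponent secret...
a_private = 6
b_private = 15

# ...and publishes only g^x mod p.
a_public = pow(g, a_private, p)
b_public = pow(g, b_private, p)

# Both sides derive the same shared secret from the other's public value,
# since (g^b)^a = (g^a)^b = g^(ab) mod p.
a_shared = pow(b_public, a_private, p)
b_shared = pow(a_public, b_private, p)

assert a_shared == b_shared
print("shared secret:", a_shared)
```

An eavesdropper sees only p, g and the two public values; recovering the shared secret requires solving the discrete logarithm problem, which is infeasible at realistic parameter sizes.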
Before Diffie and Hellman published their algorithm, key distribution was a
highly risky process. The revolutionary idea of public key cryptography was
to greatly simplify this problem. But it was soon realized that, even if the
public key can be distributed freely, some form of integrity must be assured
to make it usable in security services.
In fact, most security services require the public key to be associated with
other information, and this binding must be protected. In particular, the user
of the public key must be assured that:
• the public key, and the information associated with it, are
protected in their integrity against unnoticed tampering;
• the association between the public key and the other information
has been established in a trusted manner.
In fact, a data integrity mechanism is not sufficient, by itself, to guarantee
that the binding between the public key and its owner (or any other
information associated to it) has been verified in a trustworthy manner.
Moreover, any implemented protection scheme should not affect the
scalability of the overall public-key infrastructure.
These goals are at the basis of each public-key infrastructure, and in the
following we will briefly see if and how they are matched by the X.509
infrastructure. In particular, we'll see an overall description of public key
certificates and some details of the syntax and semantics of the X.509 version
3 public-key certificate.
Digital certificates were originally introduced to ensure the integrity of
public keys, thus providing a scalable solution to the key distribution
problem. Their primary function was to bind names to keys, or keys to names.
Before continuing, however, it's worth spending a few words on the
expression “digital certificates”. First of all, the expression itself is
not very precise, as it could also cover paper certificates after
digitization. It is also confusing, as it seems to suggest that security
services can be enabled simply by presenting the proper certificates. In
reality, digital certificates, per se, don't provide any security,
but can be used together with digital signatures to provide some additional
information about the sender of the message. In contrast, digital signatures
have an intrinsic meaning, at least demonstrating that the sender of the
message has access to a particular private key.
The original idea of encapsulating the public key into a signed data structure
before distributing it to its users can be traced back to 1978, when Loren
Kohnfelder presented it in his bachelor's thesis in electrical engineering from
MIT [15], entitled "Towards a Practical Public-Key Cryptosystem". As the
integrity of digital certificates can be guaranteed by their signature, they
can be held and distributed by untrusted entities, thus in principle assuring
the desired performance and scalability properties for the whole system.
Public-key communication works best when the encryption functions can
reliably be shared among the communicants (by direct contact if possible).
Yet when such a reliable exchange of functions is impossible the next best
thing is to trust a third party. Diffie and Hellman introduce a central
authority known as the Public File. Each individual has a name in the system
by which he is referenced in the Public File. Once two communicants have
gotten each other's keys from the Public File they can securely communicate.
The Public File digitally signs all of its transmissions so that enemy
impersonation of the Public File is precluded.
In Kohnfelder's mind, this Public File should have replaced trusted couriers
for distributing cryptographic keys, implementing a sort of worldwide,
always-available, on-line telephone book.
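Kohnfelder's idea of encapsulating a name-to-key binding in a signed structure can be sketched in a few lines. In this sketch an HMAC under a CA-held secret stands in for a real public-key signature (so the example stays within the standard library); the secret, the subject name and the key string are all hypothetical placeholders.

```python
import hmac, hashlib, json

# A Kohnfelder-style certificate: the issuer encapsulates a (name, key)
# binding in a signed structure. An HMAC is used here only as a stand-in
# for a real digital signature.
CA_SECRET = b"demo-ca-secret"  # hypothetical; a real CA holds a private key

def issue(name: str, public_key: str) -> dict:
    body = {"subject": name, "publicKey": public_key}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(CA_SECRET, payload, hashlib.sha256).hexdigest()
    return {"body": body, "signature": tag}

def verify(cert: dict) -> bool:
    payload = json.dumps(cert["body"], sort_keys=True).encode()
    expected = hmac.new(CA_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue("alice", "public-key-material")
assert verify(cert)                   # intact certificate verifies
cert["body"]["subject"] = "mallory"   # tampering with the binding...
assert not verify(cert)               # ...invalidates the signature
```

The point of the structure is exactly the one made above: because integrity is protected by the signature, the certificate itself can be stored and relayed by untrusted parties.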
Thus, the idea of a trusted third party as a scalable and secure solution to the
problem of key integrity is not very new. Actually, it has been around for
almost three decades now.
A reason for its failure to reach real-world applications can probably be
traced back to poorly designed standards for certificates and
infrastructures, which make PKI difficult to deploy and use, expensive,
hardly interoperable, and poorly matched to real users' needs and
requirements. In [12] Peter Gutmann collected a number of official
statements about PKI in general, and about X.509 in particular.
Trust and authentication has been a huge problem for us. We haven't got a
solution for authentication. We've been trying with PKI for about 10 years
now and its not working because it's a pain to implement and to use. (PKI
is 'Not Working' - Government Computing, UK)

A recent General Accounting Office report says the federal government's $1
billion PKI investment isn't paying off. […] The GAO says widespread
adoption is hindered by ill-defined or nonexistent technical standards and
poor interoperability […] Despite stagnant participation, federal officials
are continuing to promote the [PKI]. (Billion Dollar Boondoggle -
InfoSecurityMag, US)

Five years after then finance minister John Fahey launched Gatekeeper to
drive public and business confidence in e-commerce, government department
and agency interest in PKI is almost zero. A spokesperson for the
Attorney-General's Department said: “I am very grateful for the fact that
none of my colleagues has come up with a good use for it. When they do, I
will have to do something about it”. (Gatekeeper goes Missing - The
Australian)

The company would have done better to concentrate on making its core PKI
technology easier to deploy, a shortcoming that became a key reason
Baltimore's UniCERT PKI technology never went mainstream. (End of the line
for Ireland's dotcom Star - Reuters)

Based upon overseas [Australia, Finland, Germany, Hong Kong, US] and New
Zealand experiences, it is obvious that a PKI implementation project must
be approached with caution. Implementers should ensure their risk analysis
truly shows PKI is the most appropriate security mechanism and wherever
possible consider alternative methods. (International and New Zealand PKI
experiences across government - NZ State Services Commission)
All the above statements reflect a quite widespread sentiment, due to
unclear and hardly understandable standards, but a deeper analysis calls
into question the very idea of a global directory of names. Later in this
dissertation, we'll deal with these issues and with different approaches
that could be adopted. In the following pages, instead, we'll briefly
analyze the X.509 standard: its syntax, its semantics, its advantages and
its weaknesses.
4.1 X.509 Public Key Certificates
X.509 PK Certificates [5] were defined in three subsequent major versions.
The first version was defined in 1988, but didn't enjoy a wide deployment,
above all because it didn't provide enough flexibility. Version 2 was not a
great improvement, as it simply added two optional fields. Version 3,
released in 1997, finally added optional extensions, allowing a higher
degree of flexibility. This way, it finally addressed one of the most basic
requirements of the (potential) user base. In June 2000 a new
recommendation was released, including various changes and two additional
extensions.
Since the original X.509 specification is somewhat vague and open-ended,
every non-trivial group which has any reason to work with certificates has
to produce an X.509 profile, which nails down many features left undefined
in X.509.
Although X.509 defines certain requirements associated with the standard
fields and extensions of a certificate, other issues must be further refined in
specific profiles to fully address interoperability considerations. The
difference between a specification (X.509) and a profile is that a specification
doesn't generally set any limitations on combinations of what can, or cannot,
appear in various certificate types, while a profile sets these limitations.
The Internet Engineering Task Force (IETF) Public Key Infrastructure X.509
(PKIX) Working Group introduced another profile in January 1999, with the
publication of RFC2459. In April 2002, RFC2459 was replaced with
RFC3280. Although the intended audience of RFC3280 is the Internet
community, a number of its recommendations are adopted as general rules,
and in fact compliance with this profile is maintained wherever possible.
The structure of an X.509 certificate is defined using Abstract Syntax
Notation One (ASN.1). Details about the most important fields are provided
in the rest of this chapter.
Certificate ::= SEQUENCE {
    tbsCertificate        TBSCertificate,
    signatureAlgorithm    AlgorithmIdentifier,
    signature             BIT STRING
}

TBSCertificate ::= SEQUENCE {
    version               [ 0 ] Version DEFAULT v1(0),
    serialNumber          CertificateSerialNumber,
    signature             AlgorithmIdentifier,
    issuer                Name,
    validity              Validity,
    subject               Name,
    subjectPublicKeyInfo  SubjectPublicKeyInfo,
    issuerUniqueID        [ 1 ] IMPLICIT UniqueIdentifier OPTIONAL,
    subjectUniqueID       [ 2 ] IMPLICIT UniqueIdentifier OPTIONAL,
    extensions            [ 3 ] Extensions OPTIONAL
}
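Concretely, certificates are serialized with DER, a tag-length-value (TLV) encoding of ASN.1. A minimal TLV reader, enough to walk the outermost SEQUENCEs of a structure like the one above, can be sketched as follows. This is a sketch only: it ignores the high tag-number form and performs no semantic validation, and the sample blob is hand-built for illustration rather than a real certificate.

```python
# Minimal DER tag-length-value reader (sketch: common cases only).
def read_tlv(data: bytes, offset: int = 0):
    """Return (tag, value_bytes, next_offset) for the TLV at offset."""
    tag = data[offset]
    length = data[offset + 1]
    offset += 2
    if length & 0x80:                     # long form: low bits give the
        n = length & 0x7F                 # number of length octets
        length = int.from_bytes(data[offset:offset + n], "big")
        offset += n
    return tag, data[offset:offset + length], offset + length

# A hand-built DER SEQUENCE containing two INTEGERs (1 and 500),
# standing in for the nested structure of a real certificate.
der = bytes.fromhex("3007020101020201f4")

tag, body, _ = read_tlv(der)
assert tag == 0x30                        # 0x30 = SEQUENCE
t1, v1, nxt = read_tlv(body)
t2, v2, _ = read_tlv(body, nxt)
assert (t1, int.from_bytes(v1, "big")) == (0x02, 1)     # INTEGER 1
assert (t2, int.from_bytes(v2, "big")) == (0x02, 500)   # INTEGER 500
```

Real decoders then descend recursively into each SEQUENCE value, mapping the fields in order onto the ASN.1 definition.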
4.1.1 Version
It is a simple integer representing the version of the certificate. The counting
starts from 0 – which stands for version 1.
4.1.2 Serial Number
It is an integer that must be unique for each certificate issued by a
Certification Authority. Its size is not specified clearly, varying in
different implementations from 32 to 160 bits, and it should thus be
treated as an opaque blob.
4.1.3 Signature
Though under a misleading name, this field represents the identifier of the
algorithm used to generate the certificate signature. It is an OID, i.e. an
Object Identifier. The real signature, instead, is a field of Certificate
and is thus held outside of the TBSCertificate structure. In Certificate
the algorithm identifier is also repeated, and must match the one in
TBSCertificate, though the security reasons for this are not clear (if an
attacker can forge the certificate signature, then he can also easily
modify this algorithm OID).
4.1.4 Issuer, Subject
Issuer is the X.500 Distinguished Name of the Certification Authority that
issued the certificate. Its presence is mandatory.
Subject is the X.500 Distinguished Name of the certificate owner. It must be
present, except if an alternate name is defined in extensions, in which case it
can be left null.
Figure 4.1. Example of a X.500 Distinguished Name. Adapted from [12].
The idea appeared to monopoly telcos as a natural and viable solution for
running a general-purpose, one-size-fits-all, global directory. However,
after the X.500 standard was first released, the reality was that no one
had the slightest idea of how to organise the hierarchy, and all efforts to
define naming schemes were eventually unsuccessful. Users, and even the
people and organizations on the standard committee, could not agree on the
semantics of name properties. The standard, overloaded with complex
relations and hardly understandable definitions, was far from adapting to
all real cases (nomadic people, international organizations, people
covering multiple positions in a business, etc.).
Moreover, privacy and business reasons hindered the adoption of such a
public directory, as companies and government agencies were certainly not
going to make their internal organization and affiliations public, just as
common users didn't want all their personal and registry data exposed to
the world.
Figure 4.2. Distinguished Names [diagram from X.521]
Using common names as global unique identifiers, then, paves the way for
further problems, such as how to distinguish two users or two employees
with the same name, or in which directory to search for a user's key. These
problems are well known and discussed, and even have their own names, being
known as the “Which John Smith?” problem and the “Which directory?”
problem. As it is often stated, the whole X.509 infrastructure is a
complicated way to turn a key distribution problem into a name distribution
problem, which is not simpler under any perspective.
4.1.5 Validity
Validity ::= SEQUENCE {
    notBefore    UTCTime,
    notAfter     UTCTime
}
Validity represents the time window in which the certificate should be
considered valid, unless revoked. According to their particular value, these
dates/times have to be represented in UTCTime or in GeneralizedTime.
In coming up with the worlds least efficient machine-readable time encoding
format, the ISO nevertheless decided to forgo the encoding of centuries, a
problem which has been kludged around by redefining the time as UTCTime if
the date is 2049 or earlier, and GeneralizedTime if the date is 2050 or
later (the original plan was to cut over in 2015, but it was felt that
moving it back to 2050 would ensure that the designers were either retired
or dead by the time the issue became a serious problem, leaving someone
else to take the blame). [10]
This has led to different interpretations of time values in different
profiles and in different software products. Other potential problems come
from the definition of UTCTime as either YYMMDDHHMMZ or YYMMDDHHMMSSZ
(i.e. with or without trailing seconds) in different specifications and
profiles, further hampering interoperability efforts.
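The two-digit-year windowing described above (values 00-49 mapped to 20xx, values 50-99 to 19xx, as profiled by the IETF) can be sketched as a small parser. Handling both the 11- and 13-character forms reflects the with/without-seconds ambiguity just mentioned; only Zulu-time values are covered in this sketch.

```python
from datetime import datetime, timezone

def parse_utctime(s: str) -> datetime:
    """Parse an X.509 UTCTime value, with or without trailing seconds."""
    if not s.endswith("Z"):
        raise ValueError("only Zulu-time UTCTime handled in this sketch")
    digits = s[:-1]
    if len(digits) == 10:          # YYMMDDHHMM: seconds omitted
        digits += "00"
    elif len(digits) != 12:        # YYMMDDHHMMSS
        raise ValueError("unexpected UTCTime length")
    yy = int(digits[:2])
    # IETF profile windowing: 00-49 -> 20xx, 50-99 -> 19xx.
    year = 2000 + yy if yy <= 49 else 1900 + yy
    return datetime(year, int(digits[2:4]), int(digits[4:6]),
                    int(digits[6:8]), int(digits[8:10]), int(digits[10:12]),
                    tzinfo=timezone.utc)

assert parse_utctime("490101000000Z").year == 2049
assert parse_utctime("500101000000Z").year == 1950
assert parse_utctime("9912312359Z").minute == 59   # short form accepted
```

Dates from 2050 onward cannot be expressed in UTCTime at all and must use GeneralizedTime, which carries a four-digit year.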
4.1.6 Subject Public Key Info
It is the public key conveyed by the certificate. Its presence is mandatory.
According to the specific public key algorithm, it can be either a sequence of
values or a single integer.
4.1.7 Issuer Unique ID, Subject Unique ID
They are two optional unique identifiers for the certificate issuer and subject,
respectively. They were added in version 2 to allow the reuse of names over
time. Their use is unclear and very rare in implementation practice. They are
deprecated in the IETF profile.
4.2 Certificate extensions
As anticipated, since version 3 X.509 certificates allow a number of
extensions to be attached. Each extension can be marked with a criticality
flag. All extensions marked critical must be understood and processed;
otherwise the certificate must be rejected. Non-critical extensions are
meant to be optional hints, carrying information which is not strictly
required to process the certificate. For this reason, they can be safely
ignored if not understood. The real usefulness of non-critical extensions
is not clear, and often their use simply burdens the certificate.
It is also convenient to distinguish extensions by their semantics: some
express constraints on the usage of the certificate and its public key,
while others convey information associated with the subject or the issuer.
Constraint extensions include Basic Constraints, Key Usage and Certificate
Policies. Informational extensions, on the other hand, don't limit the
certificate, but carry information, such as Key Identifiers or Alternative
Names. Another possible classification is whether the extension is intended
for Certification Authorities, end-entities, or both.
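The criticality rule stated above reduces to a simple check: an unknown critical extension forces rejection, while unknown non-critical ones may be skipped. In this sketch extensions are modelled as (OID, critical) pairs, and the set of "understood" OIDs is an arbitrary example.

```python
# OIDs this hypothetical implementation understands (real X.509 OIDs).
SUPPORTED = {
    "2.5.29.19",   # basicConstraints
    "2.5.29.15",   # keyUsage
    "2.5.29.17",   # subjectAltName
}

def check_extensions(extensions: list[tuple[str, bool]]) -> bool:
    """Return True if the certificate may be accepted."""
    for oid, critical in extensions:
        if critical and oid not in SUPPORTED:
            return False        # critical and not understood: reject
    return True                 # unknown non-critical hints are ignored

assert check_extensions([("2.5.29.19", True), ("1.2.3.4", False)])
assert not check_extensions([("1.2.3.4", True)])
```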
4.2.1 Subject Alternative Name
This extension is probably the most important, and surely the most
frequently used. It allows alternative name forms to be bound to the public
key of the certificate. Allowed alternative names include, for example,
e-mail addresses, domain names, IP addresses, URIs, and so on. Alternative
names can be added to certificates which have a subject DN set, but more
often they are used as a more viable and useful alternative to X.500 names.
The IETF profile mandates the use of at least one Subject Alternative Name
if the Subject DN is not present in the certificate. It also specifies that
this extension must then be marked critical.
4.2.2 Issuer Alternative Name
It indicates alternative name forms associated with the issuer of the
certificate, mirroring the role of Subject Alternative Name. Using this
extension, it is possible to identify an issuer by his e-mail address, his
IP address, or some kind of URI. IETF specifies that, if present, it must
be critical. At the same time, however, it mandates the presence of the
Issuer DN in all certificates, whether this extension is present or not.
One reason is that alternative names cannot be used for certificate chaining
purposes, since it's not clear how to match two certificates with multiple
alternative names. The confusion is whether all the items in the altName must
match or any one of the items must match. This motivation already led the
S/MIME group to declare DNs mandatory for certificate chaining.
The original confusion probably lies in the double meaning of Alternative
Names, which can be used either to add identifying information to principals
listed in a X.500 directory, or to provide an alternative way to identify
principals outside the DN space.
4.2.3 Subject Directory Attributes
This extension was originally meant to associate some attributes with the
subject of the certificate. It is not often used, but some applications
rely on it to describe the access permissions associated with the
subject. While this could appear as a viable solution to delegate permissions
to trusted parties, it should be noted that, in general, permissions have a
shorter lifetime than identity certificates. For this reason, it is not desirable to
put them in the same certificate used to associate a user with his public key,
as their expiration would invalidate the whole certificate.
Thus, to avoid unnecessary revocations, the use of this extension for
authorization information should be avoided. Even with X.509 certificates,
other options are available, including Attribute Certificates and Proxy
Certificates [3, 4].
4.2.4 Authority Key Identifier, Subject Key Identifier
These two optional fields are used to uniquely identify a particular key
among the ones owned by the issuer or the subject of the certificate,
respectively. Their use is mandatory according to the IETF profile.
4.2.5 Key Usage, Extended Key Usage, Private Key Usage Period
The Key Usage and Extended Key Usage extensions limit the ways the public
key can be used: for digital signature, non-repudiation, key or data
encipherment, key agreement, certificate and CRL signing. As some of these
functions are reserved to CAs, these extensions must be kept consistent
with the Basic Constraints extension (see below). Their interpretation,
anyway, varies greatly among the different profiles. Software vendors and
products manage them in a variety of ways, which are often incompatible.
The Private Key Usage Period is a limitation of the period of validity of
the private key corresponding to the public key carried by the certificate.
This way, the time windows for a public key and its corresponding private
key can be different. The interpretation of the effective time window is
not clear, leading to possibly contrasting opinions about the validity of a
signature, and for this reason the IETF profile recommends against the use
of this extension.
4.3 CA Certificates
In X.509, some extensions are defined for CA certificates only. They
include Basic Constraints, Name Constraints and Certificate Policies.
4.3.1 Basic Constraints
In particular, the Basic Constraints extension is precisely used to
distinguish public keys owned by Certification Authorities from the ones
owned by end users. In fact, this field is typically present only in CA
certificates, while it is marked false in the rare case it's added to an
end-entity certificate. The X.509 standard suggests (without mandating)
that this extension be marked critical. Ambiguous situations can be
resolved by adhering to the IETF profile, which mandates the extension to
be marked critical. The extension must be present whenever the public key
of the certificate can be used to verify certificate signatures. A Path
Length Constraint can be associated with this extension, to limit the
length of certificate chains originating from the public key of the
certificate.
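The Basic Constraints logic can be sketched as a chain check. This is a simplified model: certificates are reduced to dictionaries with hypothetical is_ca/pathlen fields, where pathlen counts the intermediate CA certificates allowed below a CA (the end-entity certificate is not counted) and None means unlimited.

```python
# Sketch of is_ca / pathLenConstraint checking along a chain.
# chain[0] is the trust anchor, chain[-1] the end-entity certificate.
def check_path_lengths(chain: list[dict]) -> bool:
    for i, cert in enumerate(chain[:-1]):   # every non-leaf must be a CA
        if not cert.get("is_ca"):
            return False
        pathlen = cert.get("pathlen")
        # Intermediate CA certificates below this one in the chain:
        below = len(chain) - i - 2
        if pathlen is not None and below > pathlen:
            return False
    return True

root = {"is_ca": True, "pathlen": 1}
inter = {"is_ca": True, "pathlen": 0}
leaf = {"is_ca": False}

assert check_path_lengths([root, inter, leaf])             # within limits
assert not check_path_lengths([root, inter, inter, leaf])  # pathlen=1 exceeded
assert not check_path_lengths([root, leaf, leaf])          # non-CA mid-chain
```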
4.3.2 Name Constraints
Another extension, the Name Constraints extension, allows the namespace
managed by a CA to be limited. Through the Permitted Subtrees and Excluded
Subtrees attributes, it is possible to specify which name patterns and
subtrees a CA is allowed or denied to manage.
Unfortunately, due to several problems regarding the X.500 string encoding
rules, it is possible for a CA to choose an unusual, but perfectly valid,
encoding, which would make a name legal even if it lies outside the allowed
namespace. For this reason, it is preferable to use Permitted Subtrees
instead of Excluded Subtrees, as the latter's limitations could be
circumvented in some ways. Both the X.500 standard and the IETF profile
mandate that this extension, if present in a certificate, be marked
critical. As a matter of fact, it is rarely used and supported by existing
software systems.
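For DNS names, subtree matching can be sketched as a suffix check: a constraint covers the host itself and any of its subdomains, permitted subtrees act as a whitelist when present, and excluded subtrees always veto. This simplified matcher deliberately ignores the encoding pitfalls discussed above and handles DNS names only; the example domains are placeholders.

```python
# Simplified name-constraint matching for DNS names only.
def in_subtree(name: str, constraint: str) -> bool:
    """True if name equals the constraint or is one of its subdomains."""
    return name == constraint or name.endswith("." + constraint)

def name_allowed(name: str, permitted: list[str], excluded: list[str]) -> bool:
    if any(in_subtree(name, c) for c in excluded):
        return False                      # excluded subtrees always veto
    if permitted and not any(in_subtree(name, c) for c in permitted):
        return False                      # outside every permitted subtree
    return True

assert name_allowed("mail.example.org", ["example.org"], [])
assert not name_allowed("example.com", ["example.org"], [])
assert not name_allowed("bad.example.org", ["example.org"], ["bad.example.org"])
```

Note how the suffix check also illustrates the weakness mentioned above: the whole scheme relies on names being compared in a canonical form, which the X.500 string rules fail to guarantee.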
4.3.3 Certificate Policies
[A Certificate Policy is] a named set of rules that indicates the
applicability of a certificate to a particular community and/or class of
application with common security requirements. [1]
A Certificate Policy can appear only in CA certificates and is specified as
a set of OIDs and optional qualifiers. The IETF profile recommends against
the use of qualifiers; nevertheless, it defines two possible qualifiers:
Certification Practice Statement and User Notice. It is a common
understanding that, while a Certificate Policy provides a high-level
description of the intended use of the certificate, a Certification
Practice Statement is instead a detailed document describing the internal
procedures of the CA, and so could contain extremely sensitive data. It is
debated, however, what role these documents can play in notifying end users
and arranging cross-certifications.
Policy Mappings is another extension for CA certificates only: it indicates
the equivalence of some policy identifiers, potentially allowing two or
more CA domains to interoperate. The scope of Certificate Policies and
Policy Mappings can be limited through the use of other extensions, such as
the Policy Constraints and the Inhibit Any Policy extensions.
A Policy Authority is defined as the entity which can establish Certificate
Policies, but its real nature and duties must be considered
domain-specific. It is ultimately responsible for ensuring that the CA
effectively adheres to the policies.
Certification Authorities can delegate some of their responsibilities to a
Registration Authority, which acts as a local office of the CA to make
contact with users easier, but without being able to actually sign
certificates. A Registration Authority can, for instance, establish and
confirm the identity of an individual, generate and manage the lifecycle of
shared secrets and key material related to an end user, and initiate
requests for certificate issuance and revocation. The entire process should
be described in CPs and CPSs, although a precise specification of how this
should be done does not exist.
4.3.4 From a CA tree to a CA “jungle”
Through the use of the Basic and Name Constraints extensions, certification
authorities are able to delegate their name definition function to a hierarchy of
subordinate authorities. However, this strictly hierarchical model never came
into effect, as a number of independent top-level authorities have always
coexisted. To accommodate this situation, yet allow interoperability, top-level
certification authorities started to issue cross-certifications among themselves.
This way, certificates issued by a single authority were valid in the
other domains as well. Tracing "trust" relationships in such a jungle of
cross-certified authorities is virtually impossible.
It's worth noting that X.509 CA certificates are clearly distinguished from
end-entity certificates. Thus, delegation of the name definition function is
only possible among certification authorities. Conversely, end-entity keys can
in practice be used only for non-repudiation and for some privacy services.
This certainly reflects the intrinsically centralized nature of the X.500 and
X.509 standards (apart from the clear desire of the major CAs to keep in their
hands the power – and profits – of issuing PK certificates).
However, this is quite limiting in different contexts, where delegation of
duties, goals and privileges is a key issue. While the Globus community is
proposing the idea of X.509 Proxy Certificates [4], other PKI systems, such as
SPKI, follow radically different approaches.
4.4 Private extensions
Other extensions defined in the X.509 standard are limited to domain-specific
applications. Their usage is very limited, and interoperability is out of the
question. Private extensions specified in X.509 include:
Authority Information Access – describing how to contact the issuer of the
certificate, to obtain information and services. It applies to CA certificates only.
Subject Information Access – describing how to contact the subject of the
certificate, and how information and services can be obtained. It applies to both
end-entity and CA certificates.
4.5 Certificate Revocation
For a number of reasons, including information contained in certificates
becoming obsolete or false, private keys being lost, etc., means must be
provided to revoke certificates. This is especially important for long-lived
certificates.
For this purpose, X.509 specifies the structure and use of Certificate
Revocation Lists (CRLs). In essence, CRLs are a modern version of the
printed books once used in supermarkets, where invalid checking-account
and credit card numbers were listed.
The design of CRLs in X.509 followed exactly the model of the book of bad
checking-account numbers. If a certificate has to be revoked, then its
identifier is added to the next list to be issued. If the revocation is urgent, then the
next list is issued and distributed immediately. As soon as it is received by a
client application, the new CRL replaces the previous ones. X.509 defines
two extensions to deal with CRLs: CRL Distribution Point and Freshest CRL
Pointer.
However, the result of this process cannot be predicted, being at least
time-dependent. In fact, a client application may not know a new CRL has been
issued, replacing the previous one even before its expiration date. In the
X.509 design, moreover, CRLs are allowed to revoke a certificate
retroactively, i.e. to mark a certificate invalid even before the CRL is actually
issued. This way, the real meaning of a certificate is somewhat lost, as a user
cannot be sure it has not been revoked without him knowing. In [11] the author
compares this situation to a violation of the ACID properties of transactions,
easily demonstrating that validating an X.509 certificate is a non-deterministic
procedure.
As a result, an attacker can exploit this security hole by preventing CRLs from
being delivered, without using cryptography but simply by blocking the network
traffic (for example through a DoS attack). On the other hand, blacklists of
credit card numbers were abandoned decades ago, as they simply didn't
work.
4.6 X.509 Attribute Certificates
Attribute Certificates [3] were standardized when the third version of the X.509
specifications was made public. However, attribute certificates do not
convey public keys and must not be confused with public key certificates.
Instead, attribute certificates are used to associate privilege attributes with the
certificate subject. However, the association of a subject with these attributes,
which often have a shorter validity period than public keys, still requires a
public key certificate. In fact, the subject of an X.509 attribute certificate is
expressed as a DN, which must still be resolved to a public key to be useful
for security services. By joining attribute and public key certificates, the X.509
standards aim at providing a full Privilege Management Infrastructure (PMI).
Though PKI and PMI solve different problems, there are interesting
parallelisms in the X.509 specifications between PK and attribute certificates.
The information actually conveyed by an attribute certificate can vary, and it
is considered completely domain-dependent in many X.509 environments. One
of the clearest examples of application of the X.509 PMI is PERMIS [14].
In this case, attribute certificates bind role assignments, encoded as XML
documents, to the principals of the system.
                         PKI                                   PMI
Certificate              Public Key Certificate (PKC)          Attribute Certificate (AC)
Certificate issuer       Certification Authority (CA)          Attribute Authority (AA)
Certificate user         Subject                               Holder
Certificate binding      Subject's Name to Public Key          Holder's Name to Privilege Attribute(s)
Revocation               Certificate Revocation List (CRL)     Attribute Certificate Revocation List (ACRL)
Root of trust            Root Certification Authority          Source of Authority (SOA)
                         or Trust Anchor
Subordinate authority    Subordinate Certification Authority   Attribute Authority (AA)
Table 4.1. Comparison between PKI and PMI in X.509 [14]
The structure of the XML documents is described in custom DTDs, thus
making the solution limited to a particular domain, and certificates are stored
in LDAP directories, according to the distinguished names (DNs) of the
certificate subjects, as suggested by the X.509 standard. In fact, the PERMIS
project is one of the most complete deployments of the X.509 public key and
privilege management infrastructures. Yet, as previously stated, relying on
DNs and centralized CAs makes the whole system dependent on external
entities, and ultimately limits its scalability and applicability. Not all
organizations permit or want to name principals as DNs and publish their
personal details in a public directory, nor can all afford the cost of
certificates issued by global CAs.
Moreover, as attribute certificates bind authorization to a name, this name
still needs to be bound to a public key. So, two authorities are involved in the
authorization system, and this is correct in an X.509 environment, since in
most cases CAs should not be responsible for role assignment. However,
having two authorities in the system makes it more vulnerable. In fact, if just
one of the two is subverted, then the whole authorization mechanism is open
to the attacker.
4.7 Globus Proxy Certificates
The Globus Toolkit is a widespread middleware for Grid applications. In
these environments, delegation of tasks and duties is a key issue. Assigned
tasks, of course, must be associated with a corresponding empowerment to
access the resources needed to complete the tasks.
Proxy Certificates [4] were proposed by the Globus community as a solution
to this problem. After the first implementation in the toolkit, proxy
certificates were proposed as an RFC to the IETF PKIX working group, for
standardization. In Globus, proxy certificates are part of an overall security
infrastructure which uses X.509 public key certificates for authentication,
and TLS for establishing secure communication channels. In particular, proxy
certificates, proposed as an extension to X.509 certificates, are designed to
meet the delegation requirements of grid applications.
Like a public key certificate, a proxy certificate binds a public key to a
subject, identified through a DN. Unlike public key certificates, however,
proxy certificates are not issued by a CA, but by an end-entity. This way, proxy
certificates allow keys conveyed in end-entity certificates to be used to sign
further certificates, empowering other entities to take actions on their behalf.
The delegated entity (or proxy) must have a unique name, which is obtained
by adding a relative distinguished name component (RDN) to the issuer’s
name.
This approach allows proxy certificates to be created dynamically, in a much
lighter and faster way than the process of obtaining a public key
certificate from a CA. Proxy certificates can be used to delegate tasks to
short-lived processes, or to some other trusted parties, operating on behalf of
a user.
A new extension, the Proxy Certificate Information extension, distinguishes
proxy certificates, and proxy keys, from the original key empowering them.
This extension also makes it possible to specify a delegation policy, to limit the
access rights granted to the proxy, and the level of further delegation allowed
for the proxy.
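The naming and delegation-depth rules described above can be sketched as follows. This is a simplified Python model under stated assumptions: the field names and the flat DN syntax are illustrative stand-ins, not the actual ASN.1 structures of the Proxy Certificate profile.

```python
# Minimal sketch of proxy-certificate issuance: the proxy's name is the
# issuer's name plus one extra RDN, and the allowed delegation depth
# (carried by the PCI extension) decreases at each step.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProxyCert:
    issuer_dn: str          # DN of the delegating entity
    subject_dn: str         # issuer DN plus one extra RDN
    path_length: int        # further delegations still allowed
    policy: Optional[str]   # delegation policy (language chosen via an OID)

def issue_proxy(parent_subject_dn, parent_path_length, cn, policy=None):
    """Create a proxy certificate one level below its issuer."""
    if parent_path_length <= 0:
        raise ValueError("further delegation not allowed")
    return ProxyCert(
        issuer_dn=parent_subject_dn,
        subject_dn=parent_subject_dn + "/CN=" + cn,
        path_length=parent_path_length - 1,
        policy=policy,
    )

user_dn = "/O=Example/CN=Alice"          # hypothetical end-entity DN
p1 = issue_proxy(user_dn, 2, "proxy-1")
p2 = issue_proxy(p1.subject_dn, p1.path_length, "proxy-2")
# p2 can no longer delegate: its path_length has reached 0
```

Each proxy name stays unique within the issuer's namespace by construction, which is exactly what appending an RDN per delegation step achieves.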
As a number of policy languages already exist and are used in applications,
the Globus community designed a flexible mechanism to allow the use of
different policy languages, for example XACML or XrML, in proxy
certificates. Of course, the issuer of the certificate and the final service
provider must agree on a common policy language, understood by both. This
is achieved through two fields in the PCI extension: a Policy field and a
Policy Method Identifier, which is an object identifier (OID).
It must finally be noted that, while adherence to X.509 guidelines guarantees
that existing applications can be quite easily modified to accept proxy
certificates, the proposed extensions are not part of any widely accepted
standard, and are not understood by any software products but
those developed by the Globus project.
“Note that the GSI and software based on it
(notably the Globus Toolkit, GSI-SSH, and
GridFTP) is currently the only software which
supports the delegation extensions to TLS (a.k.a.
SSL). The Globus Project is actively working
with the Grid Forum and the IETF to establish
proxies as a standard extension to TLS so that
GSI proxies may be used with other TLS
software.”
[http://www.globus.org/security/overview.html]
4.8 PGP
A different trust and certificate management model was introduced in 1991,
when Phil Zimmermann first presented PGP [17]. It appeared clear to
Zimmermann that the creation of a comprehensive X.500 directory was too
complex a process, subject to political and economic pressures; thus,
security applications should not have been based on it. For this reason, he
proposed a different approach toward security, first of all for the exchange of
authenticated and encrypted messages. Instead of designing the PKI around a
centralized, hierarchical authority, PGP allowed each key to sign certificates.
The nature of these certificates, in principle, is not different from X.509 PK
certificates, binding a public key to a global name. The global uniqueness of the
name is related to the application domain; in fact, PGP mainly deals with
email messages. The basic assumption was that, if enough principals signed a
key, the association of the name with the public key could be trusted, realizing a
decentralized "web of trust".
An important role in the process is assigned to the so-called introducer.
According to the theory of "six degrees of separation", each person can
establish a link with any other person on the planet through no more than six
intermediaries. These can play the role of introducers, vouching for the identity
of the people they know to others.
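The way a web of trust turns introducer signatures into a validity decision can be sketched as follows. This minimal Python illustration assumes the classic PGP-style rule (one fully trusted introducer, or two marginally trusted ones, makes a key valid); the names and data layout are illustrative.

```python
# Sketch of PGP-style key validity: the owner assigns a trust level to
# each known introducer, and a candidate key is considered valid when
# enough trusted introducers have signed it.

FULL, MARGINAL, NONE = "full", "marginal", "none"

def key_valid(signers, trust):
    """signers: ids of the keys that signed the candidate key;
    trust: mapping from signer id to the owner's trust in that signer."""
    full = sum(1 for s in signers if trust.get(s, NONE) == FULL)
    marginal = sum(1 for s in signers if trust.get(s, NONE) == MARGINAL)
    # Classic defaults: 1 fully trusted or 2 marginally trusted signers.
    return full >= 1 or marginal >= 2

trust = {"carol": FULL, "dave": MARGINAL, "eve": MARGINAL}
key_valid(["dave", "eve"], trust)   # two marginal introducers suffice
```

Note that the decision is entirely local: each user chooses their own trust assignments, with no central authority involved.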
However, even when multiple issuers sign a certificate, their independence
cannot be easily assured in a real scenario, and in general cannot be
presupposed. A single user could sign multiple certificates using different
keys, trying to give value to false identity associations. This threat can be
avoided only if users assign trust exclusively to other users who they know use
the necessary care when signing certificates. The semantics of a PGP certificate
itself is not clear, leaving ambiguities about the identity of the subject and
about the means of identification used by the certificate issuer.
Moreover, the whole framework doesn't solve the problem of locally defined
names, relying instead on unique names assigned by the Internet DNS. After
this binding is verified, the problem of associating authorization credentials
with the name is still open, especially if there's no previous body of
knowledge about the subject of the certificate.
4.9 SPKI
In the Simple Public Key Infrastructure [13], the very foundation of digital
certificates is re-thought, trying to make them really useful in application
scenarios. Its authors note that what computer applications need is not to
identify keyholders, but to make decisions about them. Often these decisions are
about whether to grant access to a protected resource or not.
In available PKI systems, these decisions should be taken on the basis of a
keyholder's name. However, a keyholder's name does not make much sense
to a computer application, other than as an index into a database. For
this purpose, they argue, the name must be unique, and must be associated
with the needed information. But, for the same reason, it is extremely
unlikely that the given names by which we identify people could work on the
Internet, as they will not be unique.
Moreover, since the explosion of the Internet, contacts between persons have
often become only digital, without partners ever meeting personally. In
these cases, which are more and more common, there is no body of knowledge to
associate with the name. Trying to build an on-line, global database of facts
and people is obviously unfeasible, since it would face privacy problems, as
well as businesses' unwillingness to disclose sensitive data about their
employees and their contacts.
4.9.1 Authorization Certificate
In SPKI, on the other side, security is founded not just on identity, or given
names, but on principals and authorization. In general, a principal is any
entity that can be held accountable for its own actions in the system; in
particular, principals in SPKI "are" simply public keys. This means that each
principal must have its own public key, through which it can be identified,
and each public key can be granted rights to access system resources.
The key concept in SPKI is in fact authorization, and more precisely
distributed authorization. Each entity in the system has the responsibility to
protect its own resources, and it is the ultimate source of trust, being able to
refuse or accept any request to access the resource. On the other end, each
entity can access some resources without being listed in a comprehensive
access control list (ACL).
In fact, in SPKI, ACLs are relegated to a marginal role, while a central role is
played by authorization certificates. A basic authorization certificate defines
a straight mapping: authorization → key.
The complete structure of a certificate is defined as a 5-tuple:
1. issuer: the public key (or a hash of it) representing the principal
who signs the certificate;
2. subject: the public key (or, again, a hash, or a named key)
representing the principal the delegation is intended for;
other types of subjects are allowed, but they can always be resolved
to a public key; for example, a threshold subject can be used to
indicate that k of n certificate chains must be resolved to a single
subject (i.e. to a public key) to make the authorization valid;
3. delegation: a flag to allow or block further delegations;
4. authorization: an s-expression which is used to represent the actual
permissions granted by the issuer to the subject through the
certificate;
5. validity: the time window during which the certificate is valid and
the delegation holds.
Thus, through an authorization certificate, a manager of some resources can
delegate a set of access rights to a trusted entity. This newly empowered
principal can, on its side, issue other certificates, granting a subset of its
access rights to other entities. When finally requesting access to a resource,
the whole certificate chain must be presented. Precise algorithms are
presented in the SPKI proposal to combine the certificates in a chain and to
resolve them to an authorization decision.
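The reduction of a certificate chain to a single authorization can be sketched as follows. This is a simplified Python model: permissions are plain sets standing in for s-expression intersection, principals are plain key identifiers, and only the basic two-certificate reduction rule is shown.

```python
# Sketch of SPKI 5-tuple reduction: two chained certificates collapse
# into one, with permissions and validity windows intersected.
from dataclasses import dataclass

@dataclass
class Cert:                      # SPKI authorization 5-tuple (simplified)
    issuer: str                  # public key of the granter
    subject: str                 # public key of the grantee
    delegate: bool               # may the subject delegate further?
    auth: frozenset              # permissions (stand-in for an s-expression)
    validity: tuple              # (not_before, not_after)

def reduce(c1, c2):
    """Chain c1 and c2: valid only if c1's subject issued c2 and c1
    allows delegation; the result keeps the intersection of rights."""
    if c1.subject != c2.issuer or not c1.delegate:
        raise ValueError("certificates do not chain")
    not_before = max(c1.validity[0], c2.validity[0])
    not_after = min(c1.validity[1], c2.validity[1])
    if not_before > not_after:
        raise ValueError("validity windows do not overlap")
    return Cert(c1.issuer, c2.subject, c2.delegate,
                c1.auth & c2.auth, (not_before, not_after))

owner_to_a = Cert("K_owner", "K_a", True,
                  frozenset({"read", "write"}), (0, 100))
a_to_b = Cert("K_a", "K_b", False, frozenset({"read"}), (10, 200))
chained = reduce(owner_to_a, a_to_b)
# chained grants K_b only "read" on behalf of K_owner, valid in (10, 100)
```

Longer chains are handled by folding this rule over the sequence of certificates; a subject can never end up with more rights, or a longer validity, than any link in the chain granted.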
It can easily be noted that, in the whole process of delegation, identities and
given names never appear. Keyholder names are certainly important, and
careful identification is obviously a necessary condition before delegation
can be granted; otherwise principals (i.e. public keys) cannot be associated
with the humans ultimately responsible for their actions.
But the interesting thing is that this association is never used in the
authorization process, as in fact it is not necessary. This results in a radical
simplification of the whole security infrastructure. Also, the whole system is
much more flexible, allowing arbitrary delegation of permissions and
anonymous communications (in the sense that the user's identity is never
communicated through the network). Above all, trust chains are made part of
the system, being its real core, and they can be easily traced by following the
chains of authorization certificates issued by the involved principals.
4.9.2 Name Certificates
The Simple Distributed Security Infrastructure (SDSI), which then became part of
SPKI [13], showed that local names could be used not only on a local
scale, but also in a global, Internet-wide environment. In fact local names,
defined by a principal, can be guaranteed to be unique and valid only in its
namespace. However, local names can be made global if they are
prefixed with the public key (i.e. the principal) defining them.
A convention of SDSI is to give names defined in a certificate a default
namespace, being the issuer of the certificate itself. Otherwise, local names
always have to be prefixed with a public key which disambiguates them.
When used in this way, names become Fully Qualified SDSI Names.
Compound names can be built by joining local names in a sequence. So, for
example, PK1's Joe's Bill can be resolved to the principal named 'Bill' by the
principal named 'Joe' by the principal PK1.
Another type of SPKI certificate is defined for associating names with their
intended meaning: name → subject. A SPKI Name Certificate doesn't carry
an authorization field but carries a name. It is a 4-tuple:
1. issuer: the public key (or a hash of it) representing the principal
who signs the certificate;
2. name: a byte string;
3. subject: the intended meaning of the name; it can be a public key or
another name;
4. validity: the time window during which the certificate is valid and
the name binding holds.
There's no limitation to the number of keys which can be made valid
meanings of a name. So, in the end, a SPKI name certificate defines a named
group of principals. Some authors [8] interpret these named groups of
principals as distributed roles.
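The resolution of a fully qualified compound name through name certificates can be sketched as follows. This Python illustration is simplified: each name is mapped to a single subject, whereas real SPKI allows several certificates per name, defining groups; the keys and names are illustrative.

```python
# Sketch of SDSI compound-name resolution: each name certificate binds
# (issuer key, local name) to a subject, and a compound name is resolved
# one local name at a time, in the namespace reached so far.

name_certs = {
    ("PK1", "Joe"): "PK2",       # PK1's Joe  -> principal PK2
    ("PK2", "Bill"): "PK3",      # PK2's Bill -> principal PK3
}

def resolve(namespace_key, names):
    """Resolve a fully qualified SDSI name such as PK1's Joe's Bill."""
    current = namespace_key
    for name in names:
        # Look up the meaning of 'name' in the current namespace;
        # a subject that is itself a name would be resolved recursively.
        current = name_certs[(current, name)]
    return current

resolve("PK1", ["Joe", "Bill"])
```

This is exactly the example from the text: PK1's Joe's Bill resolves to the principal that PK2 (PK1's Joe) calls 'Bill'.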
4.9.3 Certificate Revocation
Validity conditions of certificates usually come in the form of time windows,
but other options are defined in the SPKI proposal, including on-line tests
like certificate revocation lists (CRLs), revalidation and one-time validity.
When discussing X.509 we noted that its CRL management doesn't allow
consistent results to be obtained, making the evaluation of the validity of a
certificate a non-deterministic process. In SPKI, instead, the computation of
authorization is always deterministic by design.
Three conditions are defined for using CRLs in SPKI:
1. the certificate must designate a key to sign the CRL and some
locations to retrieve it;
2. the CRL must have validity dates set;
3. the validity intervals of CRLs must not intersect, i.e. a new CRL
cannot be issued to replace a still valid one when users are not
expecting it.
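The deterministic validity test these conditions enable can be sketched as follows. This is a simplified Python illustration (times are plain numbers, and CRL signature checking is omitted): because CRL validity intervals are disjoint, at any instant at most one CRL applies, so the answer depends only on the certificate, that CRL, and the time.

```python
# Sketch of a deterministic, SPKI-style revocation check over
# non-overlapping, dated CRLs.

def cert_valid(cert_id, cert_window, crls, now):
    """crls: list of (not_before, not_after, revoked_ids) tuples with
    pairwise disjoint validity intervals, as SPKI requires."""
    if not (cert_window[0] <= now <= cert_window[1]):
        return False                     # outside the cert's own window
    applicable = [c for c in crls if c[0] <= now <= c[1]]
    if not applicable:
        return False                     # the cert demands a current CRL
    (_, _, revoked) = applicable[0]      # disjointness: at most one match
    return cert_id not in revoked

crls = [(0, 49, {"c9"}), (50, 99, {"c9", "c7"})]
cert_valid("c7", (0, 99), crls, now=25)
cert_valid("c7", (0, 99), crls, now=60)
```

Run twice at the same instant with the same inputs, the test always gives the same answer; this is precisely the property the retroactive, replace-at-will CRLs of X.509 lack.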
It is suggested to use delta CRLs whenever possible. Under these conditions,
a CRL is a completion of a certificate rather than an announcement to the
world about a change of mind. Another possibility is to use revalidations,
which in a sense are a positive version of CRLs; they are subject to the same
conditions as CRLs. Finally, one-time revalidations allow a principal to issue
one-shot delegations, which expire as soon as they're first used.
Anyway, in most cases on-line revalidation can be avoided if short-lived
certificates are issued. This requires a more careful administration of
delegation, but, on the other side, it improves performance and makes the
whole system better adhere to the principle of least privilege [18]. In SPKI,
using short-lived certificates is possible, since delegation certificates are
clearly distinguished from identity certificates and, usually, the information
they convey has an intrinsic validity period, related to the time needed
for the subject to complete the delegated task.
4.9.4 Logical foundation
Several research works have focused on giving the SPKI theory a logical
foundation. In particular, in [6] the authors provide a generalized setting to study
the problem of compound names for authorization decisions. In [7] the
problem is restricted to SDSI names only. However, in [8] it is
proved that this logic does not capture the key features of SDSI, and an
alternative solution is proposed. In particular, the conclusions that can be
derived using the axioms of [6, 7] are not monotonic, i.e. a decision to allow
access to a resource can be changed to a denial if more certificates are
provided. SDSI, instead, is monotonic by design, and this is an important
feature: in a distributed application, it would be difficult to guarantee
that all available certificates have been collected.
Moreover, [8] contains an interesting discussion about the importance of
authorization certificates. Even recognizing that authorization certificates
improve flexibility and the degree of precision in permission handling, the
authors demonstrate that most use cases can be satisfied by using local names
and name certificates only. In their perspective, local names are the
distributed counterpart of roles in role-based access control (RBAC)
frameworks. Like roles, local names can be used as a level of indirection
between principals and permissions. Both a local name and a role represent at
the same time a set of principals, as well as a set of permissions granted to
those principals. But, while roles are usually defined in a centralized fashion
by a system administrator, local names are instead fully decentralized. This
way, they scale better to Internet-wide, peer-to-peer applications, without
loosening in any way the principles of trust management (see the following
chapter).
The revolution of SPKI is that it empowers local entities to protect their own
resources. They are the ultimate source of all trust relationships, and
centralized "trusted" third parties can be completely avoided. Relying on an
external entity, and thus trusting it, should not be imposed by technical
limitations, but should be a choice founded on security considerations.
There cannot be trust where there's no choice.
4.10 Conclusions
A lot of work has been done in the field of PKI. Yet, even though the basic
ideas were presented about thirty years ago, effective deployment is well
below expectations. Reasons for this failure can be traced back to poorly
designed standards, which leave too many issues unclear and open to
contrasting interpretations. Also, most implementations lack adherence to
even basic specifications.
But this failure has led many researchers to question the very foundations of
PKI as it is currently conceived. The deployment of a global directory of names
is perceived by many as too complex a process, as is the deployment of a
globally trusted certification authority.
Alternative solutions have been proposed, showing that other choices are
possible, at least from a technical perspective. PGP is virtually the only
widely used application for email authentication and privacy. However, its
infrastructure is bound to a particular model of trust, and it fits well those
applications where authentication is the most important issue.
A different approach has been proposed first in SDSI, and then in SPKI.
These proposals don't suppose any globally trusted authority exists. But they
also avoid the very concept of global names, paving the way for completely
distributed solutions to the authorization problem.
4.11 References
[1] Housley, R. et al. Internet X.509 PKI Certificate and CRL Profile. IETF RFC 2459, January 1999. http://www.ietf.org/rfc/rfc2459.txt.
[2] Housley, R. et al. Internet X.509 PKI Certificate and CRL Profile. IETF RFC 3280, April 2002. http://www.ietf.org/rfc/rfc3280.txt.
[3] Farrell, S., Housley, R. An Internet Attribute Certificate Profile for Authorization. IETF RFC 3281, April 2002. http://www.ietf.org/rfc/rfc3281.txt.
[4] Tuecke, S. et al. Internet X.509 Public Key Infrastructure Proxy Certificate Profile. IETF RFC 3820, June 2004. http://www.ietf.org/rfc/rfc3820.txt.
[5] ITU-T Rec. X.509 (revised). The Directory – Authentication Framework. International Telecommunication Union, 1993.
[6] Abadi, M. On SDSI's Linked Local Name Spaces. Journal of Computer Security, 6 (1-2), pp. 3-21. 1998.
[7] Halpern, J., van der Meyden, R. A Logic for SDSI's Linked Local Name Spaces. In Proc. 12th IEEE Computer Security Foundations Workshop, pp. 111-122. 1999.
[8] Li, N. Local Names in SPKI/SDSI. In Proc. 13th IEEE Computer Security Foundations Workshop, pp. 2-15. IEEE Press, 2000.
[10] Gutmann, P. X.509 Style Guide. http://www.cs.auckland.ac.nz/~pgut001/pubs/x509guide.txt. October 2000.
[11] Gutmann, P. Cryptographic Security Architecture: Design and Verification. Springer, ISBN 0387953876. 2003.
[12] Gutmann, P. How to Build a PKI that Works. 3rd Annual PKI R&D Workshop. NIST, Gaithersburg MD. April 12-14, 2004.
[13] Ellison, C., Frantz, B., Lampson, B., Rivest, R., Thomas, B., Ylonen, T. SPKI Certificate Theory. IETF RFC 2693, September 1999.
[14] Chadwick, D.W., Otenko, O. The PERMIS X.509 Role Based Privilege Management Infrastructure. SACMAT '02, Monterey, California, USA. June 3-4, 2002.
[15] Kohnfelder, L. Towards a Practical Public-key Cryptosystem. Bachelor's thesis, pp. 39-44. Dept. of Electrical Engineering, MIT, Cambridge, Mass. 1978.
[16] Diffie, W., Hellman, M.E. New Directions in Cryptography. IEEE Trans. Inform. Theory, IT-22 (6), pp. 644-654. 1976.
[17] How PGP Works. http://www.pgpi.org/doc/pgpintro/. 1999.
[18] Saltzer, J.H., Schroeder, M.D. The Protection of Information in Computer Systems. Proceedings of the IEEE, 63 (9), pp. 1278-1308. September 1975.
5 Trust Management in Multiagent Systems
Multi-agent systems allow the design and implementation of software
systems using the same ideas and concepts that are the very foundation of
human societies and habits. These systems often rely on the delegation of
goals and tasks among autonomous software agents, which can interact and
collaborate with others to achieve common goals.
But, while complex behaviors can emerge only in large and open societies of
agents, really useful interactions, just as in human societies, can be set up
correctly only on the basis of trust relations. Thus, to become an appealing
solution for building real-world applications, multi-agent systems should
provide mechanisms to manage trust in open contexts, where potential
partners and competitors can join and leave the community.
Moreover, security mechanisms should be provided to ease the management
of security policies on the basis of evolving trust relationships, allowing
agents to interact with other agents without facing security breaches,
especially when they form large societies with changing social and
economic conditions.
Security-critical applications usually suppose the existence of different
parties that have different and probably contrasting objectives. Agents should
not be able to pose threats to their competitors, but they should be able to
effectively collaborate with their partners. This can be accomplished if each
delegation of duties is accompanied by a corresponding delegation of
permissions, required to complete the task or achieve the goal.
A security model is presented here, where delegation certificates are the
foundation of a distributed security infrastructure, and where trusted third
parties and centralized directories of names are avoided. Trust management
principles are applied to agent-based systems to realize systems that can
implement secure access control mechanisms. All trust relationships can be
founded on solid local beliefs, without relying on global directories of
names and globally trusted certification authorities. In fact, both of them make
the system more centralized and may introduce additional points of breach,
especially if their policies are not known in detail.
5.1 Trust, Security and Delegation
Multi-agent systems are founded on the cooperation of autonomous agents,
each one pursuing its own interests, yet collaborating to achieve
common goals. In this context, a key concept is the delegation of goals and tasks
among agents. But delegation, just as in human societies, is usually
associated with a risk, and the decision to face this risk is necessarily
related to some form of trust.
Trust is an important aspect of human life, and it has been studied from
different points of view, for example in the context of the psychological and
sociological sciences, or to draw up specific economic models. Both
Luhmann [17] and Barber [18], just to take two famous examples, analyze
trust as a social phenomenon. In particular, Luhmann argues that trust is a
fundamental instrument to simplify life in human society, ending with the
idea that human societies can exist only on the basis of trust. Barber
associates the idea of trust with some expectations about the future: about the
persistence of social rules, about the technical competence of the partners,
and about the intention of the partners to carry out their duties, placing
others' interests before their own.
On the other side, other researchers analyze trust mainly in its psychological
forms. Deutsch [19] describes trust in terms of personal beliefs and
perceived, or expected, benefits.
Gambetta [20] is the first to give a definition of trust which is more grounded
in mathematics, as a "subjective probability with which an agent assesses that
another agent or a group of agents will perform a particular action…". This
definition appears more useful than the previous ones in the context of
multi-agent systems. In fact, it is founded on the mathematical concept of
"probability", and this makes trust a quantifiable concept.
Yet, as Castelfranchi and Falcone argue [13], the definition of trust as a
"subjective probability" hides too many important details, thus being too
vague to be applied in real cases. Instead, they present trust using a
socio-cognitive approach, providing a deep analysis of the agent's beliefs, and of
the way they can influence trust. In particular, they list the beliefs about
competence, disposition, dependence and fulfilment as important components
of trust in every delegation, even towards objects and non-cognitive agents.
Delegation towards cognitive agents, instead, requires the delegating agent to
hold additional beliefs about the willingness, persistence and self-confidence of
the partner, at least in the specific domain of the delegation.
Then, using the socio-cognitive approach, trust can be evaluated as a
continuous function of its constituents [15], more precisely of the certainty of
its constituent beliefs. But, though trust is a continuous function, the decision
to delegate is necessarily discontinuous in its nature. The agent can only
decide to delegate or not to delegate, and this decision has to take into account
not only the degree of trust, but also other factors. These factors, including
the importance of the goal, the perceived risk of frustrating the goal, the
increased dependence on the trustee, and all other costs or possible damages
associated with the delegation, all influence a threshold function which
is eventually compared with the degree of trust to decide whether to
delegate or not.
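The decision scheme just described can be sketched in a few lines of code. This is an illustrative sketch, not part of the model in the text: the belief names, weights and threshold formula are all assumptions, chosen only to show a continuous degree of trust being compared against a threshold that grows with the stakes of the delegation.

```python
# Illustrative sketch: trust as a continuous function of the certainty of
# its constituent beliefs, compared against a threshold that depends on
# the importance, risk and cost of the delegation. Weights are arbitrary.

def degree_of_trust(beliefs: dict[str, float]) -> float:
    """Average certainty of the constituent beliefs (all in [0, 1])."""
    return sum(beliefs.values()) / len(beliefs)

def delegation_threshold(goal_importance: float, perceived_risk: float,
                         dependence_cost: float) -> float:
    """The more important, risky and costly the delegation, the higher the bar."""
    return min(1.0, 0.3 + 0.3 * goal_importance
                    + 0.25 * perceived_risk + 0.15 * dependence_cost)

def decide_to_delegate(beliefs, goal_importance, perceived_risk,
                       dependence_cost) -> bool:
    # Continuous trust, discontinuous decision: delegate only if the
    # degree of trust exceeds the threshold.
    return degree_of_trust(beliefs) >= delegation_threshold(
        goal_importance, perceived_risk, dependence_cost)

beliefs = {"competence": 0.9, "willingness": 0.8,
           "persistence": 0.7, "self_confidence": 0.8}
print(decide_to_delegate(beliefs, 0.5, 0.2, 0.1))  # True
```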
Following this approach, security is deeply intertwined with both the degree
of trust and the threshold function. In fact, security can certainly have a
positive influence on the trust in the partner, especially if security includes
auditing mechanisms, certifications and message signatures which can help to
associate a principal with its own actions and social behaviours. An even
stronger degree of trust can be achieved when social interactions are founded
on “contracts”, i.e. signed documents that make the agents responsible
for their own actions towards an authority, a trusted third party able to issue
norms, and to control and punish violations.
On the other hand, security mechanisms can be useful to limit the costs of a
failed delegation. For example, delegation often comes in the twofold aspect
of delegation of duties (performing actions or achieving goals) and
delegation of the corresponding permissions (rights to access the needed
resources). In this case, authorization mechanisms can be used to grant the
delegated agent only a limited set of access rights to valuable resources,
thus limiting the damage that a misbehaving partner could cause.
In this way, security can be useful to lower the threshold, and thus it can
make delegation possible in a greater number of cases.
Moreover, when proper authorization mechanisms are available, delegation
can be modulated according to the degree of trust: from the delegation of a
single action, granting only the smallest set of strictly needed access rights,
up to the delegation of a full goal, without specifying a plan or the actions
needed to achieve it, and providing access to the largest set of available
resources.
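Such a modulation can be pictured as a simple mapping from the degree of trust to a delegation scope. The tier names and thresholds below are purely hypothetical, meant only to illustrate the idea of widening the delegated permissions as trust grows.

```python
# Hypothetical sketch: scaling the delegated permission set with the
# degree of trust, from a single action up to a full goal. Tier names
# and threshold values are illustrative, not part of the model above.

TIERS = [
    (0.9, {"achieve_goal", "choose_plan", "access_all_resources"}),
    (0.6, {"execute_plan", "access_task_resources"}),
    (0.3, {"execute_single_action", "access_minimal_resources"}),
]

def delegation_scope(trust: float) -> set[str]:
    """Return the widest permission set whose trust tier is satisfied."""
    for threshold, permissions in TIERS:
        if trust >= threshold:
            return permissions
    return set()  # below the lowest tier, nothing is delegated

print(sorted(delegation_scope(0.95)))
print(sorted(delegation_scope(0.4)))
```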
5.2 Security Threats in a Distributed MAS
Abstracting from other details and highlighting the components that can take
reciprocal malicious actions, a distributed multi-agent system can be modeled
through two different kinds of components:
agents: in its very essence, an agent can be thought of as an atomic component
with an associated thread of execution; an agent can communicate with local
or remote agents (in both cases through ACL messages, i.e. Agent
Communication Language messages, defined by the FIPA standard) as well
as with its hosting environment (container) by means of local method
invocations; each agent also exposes methods to be managed by the
underlying container and relies on it to send and receive messages and to
access needed resources;
containers: they constitute the local environment where agents are hosted and
provide them with several services; one of their main duties is to provide agents
with an ACC (Agent Communication Channel), so that they can exchange
messages; to complete their various tasks, containers can communicate on
public networks with other containers and different remote components;
containers can also host message transport protocols (MTPs) to allow
communication with agents living on remote hosts.
Even if the environment where agents are hosted is often referred to as a
platform, we will try to avoid this term. In fact, while a platform, as defined
by the FIPA specifications [10], can in general be constituted by different
distributed components, we define a container as a local runtime environment
to be handled as an atomic component from the security point of view. In
JADE [11] a platform is a collection of containers, typically distributed across
several hosts, hosting a certain number of agents.
As far as the fight against security threats is concerned, the final goal should
be to have all the interfaces exposed by agents and containers masked, both at
the infrastructure level (interactions among remote components involving
network communications) and at the agent level (actions on the hosting
environment and ACL message exchanges), so that specific actions are
performed only if adequate security conditions are met. Each method
invocation, as well as any request delivered by an ACL message, should be
considered a threat (represented by red lightnings in Figure 5.1) to be
accurately analyzed before granting access. A detailed classification of threats
and proposed countermeasures is provided in [9].
Figure 5.1. Security threats in multi-agent systems
5.3 Access Control in a Distributed MAS
Traditional security frameworks take their decisions about authorizations to
services and resources by using access control lists and identity certificates
issued by globally trusted authorities. But weak and transient trust
relationships, and the corresponding delegations of permissions among trusted
components, cannot be easily managed with access control lists (ACLs).
Moreover, relying on trusted third parties introduces an additional point of
weakness for the security of the whole system. These concerns are becoming
more and more important, as relations among components providing on-line
services can easily disappear, and new ones arise, as soon as social or
economical conditions change. Peer-to-peer and ubiquitous computing trends
may only exacerbate the fickleness of relations among distributed components.
Moreover, the appeal of agents comes from their ability to interact to achieve
common goals. Agent-based applications often rely on the delegation of tasks
and goals among cooperating parties. These delegations of duties require
corresponding delegations of the permissions needed to perform the required
tasks or to achieve the required goals. Delegated permissions could be used on
their own, or joined with permissions obtained in other ways. While managing
a local resource, an agent could choose to exploit the access rights delegated
by the current requester, perhaps joined with a subset of its own access rights,
but not to exploit permissions received from other requesters. Staging these
behaviors in a system based on access control lists is not simple.
5.4 Delegation Certificates
Our approach to enforcing platform security restricts access control lists, or
policy files, to a small number of pre-defined entries, each one linking a
trusted principal to a set of permissions. These permissions, quantifying a
level of trust between the platform security manager and known principals, are
then packed into signed certificates and distributed to authenticated principals.
Figure 5.2. Structure of delegation certificates
Essentially, through a delegation certificate an issuer can grant a subject
access to a resource (if allowed itself). As shown in Figure 5.2, each
certificate carries a list of permissions, a delegation flag, and a validity
period, to regulate the amount of trust that is transferred.
If a delegation certificate has its delegation flag set, then the subject of the
certificate can further delegate received access rights to another subject. Even
if this should happen on the basis of sound relationships, either technical or
economical ones, each principal is free to choose its trusted partners.
The possibility to delegate permissions paves the way for a distributed
management of access rights, which mimics security policies based on access
control lists, but as a result of a bottom-up process and without relying on a
large centralized repository.
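The certificate structure of Figure 5.2 can be sketched as a small data type. Field names here are illustrative, and a real implementation would of course carry a cryptographic signature over the certificate body (for example with an SPKI-style encoding), which is elided below.

```python
# A minimal sketch of the delegation certificate of Figure 5.2. Field
# names are illustrative; the issuer's signature over the body is elided.
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationCertificate:
    issuer: str             # public key of the granting principal
    subject: str            # public key of the receiving principal
    permissions: frozenset  # access rights being transferred
    delegable: bool         # delegation flag: may the subject re-delegate?
    not_before: int         # validity period (e.g. seconds since epoch)
    not_after: int

cert = DelegationCertificate(
    issuer="K_manager", subject="K_alice",
    permissions=frozenset({"read:file.txt"}),
    delegable=True, not_before=0, not_after=1_000_000)
print(cert.delegable)  # True
```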
In this process, certificate chains take form, allowing access rights to flow
from the original issuers (resource managers) to the final subjects (users of
protected resources). Moreover, when different chains intertwine, certificates
can dynamically form complex graphs, called delegation networks, so as to fit
evolving trust relations. A full theory of delegation networks is developed in
[4].
5.5 Key Based Access Control
Certificates can be used to implement access control in different ways. One
way (shown in Figure 5.3) is to join a delegation certificate, issued to a
principal represented by its name, with an identity certificate, issued by a
trusted certification authority. Another way (shown in Figure 5.4) is to issue a
delegation certificate directly to a principal represented by its public key. In
both cases, each principal will use its private key to sign its access requests.
The main issue with the first approach is that it requires a certification
authority (a trusted third party) to sign identity certificates, so there are two
issuer keys that can potentially be subverted. If, instead, authorizations are
directly issued to keys, then there is only one authority and one key to protect.
Another concern is about names as the linkage between certificates. The
authorization certificate must be issued to a name defined by a certification
authority, so the first key has to use a foreign (global) name space and to
make a guess about what a name means. This guess is subject to mistakes and
attacks, as different principals may have similar names in distinct
namespaces.
If, however, the same key, in its own local name space, issues both
certificates, then the above concerns don’t apply. But performance issues
remain, concerning the burden of signing, sending, storing and verifying one
more certificate per delegation. These issues can be accepted only if names
are really useful at the access control level. Otherwise, Occam’s razor applies.
Figure 5.3. Identity-based access control
Figure 5.4. Key-based access control
When a principal requests access to a protected resource, it attaches a
complete certificate chain and a signature to its request message. The
resource manager will first authenticate the request and check each
certificate. Expired or tampered certificates will be promptly discarded.
The final set of permissions granted by the chain will be evaluated as the
intersection of the set of permissions granted to the first issuer (this set could
be read from an access control list) with every set of permissions authorized
by the single certificates. In the same way, the final validity period of the
chain will be evaluated as the intersection of the validity periods defined by
the single certificates.
Figure 5.5. Access control with certificate chains
In particular, the resource manager will verify that:
1. the first certificate is issued by a known manager of the resource;
2. each certificate is issued by the subject of the previous certificate;
3. the last certificate is issued to the principal that is making the request;
4. the required permissions are listed in each certificate.
It’s important to underline that, as every principal can sign its own
certificates, one could delegate more permissions than it really has. Thus the
final set of permissions can be safely evaluated only by intersecting the sets
of permissions carried by each certificate.
Figure 5.5 shows a principal sending a request to a resource manager. It’s
worth noting that each principal involved in the chain is directly represented
by its public key and not by its name.
If a number of permissions are needed to access a resource, then different
certificate chains can be attached to the request: the set of granted rights will
be evaluated as the union of rights that flow through each individual
delegation chain.
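The verification procedure above can be sketched as follows. The certificate representation (plain dictionaries with issuer, subject, permission set and validity interval) is an assumption made for the sketch, and signature checking is omitted: only the chaining, intersection and union rules from the text are shown.

```python
# Sketch of the chain verification described above, over simplified
# certificate records; signature verification is elided.

def chain_grants(chain, acl, resource_manager, requester, now):
    """Permissions flowing through one chain (empty if the chain is invalid)."""
    if not chain:
        return frozenset()
    # 1. the first certificate must be issued by a known resource manager
    if chain[0]["issuer"] != resource_manager:
        return frozenset()
    # 2. each certificate must be issued by the subject of the previous one
    for prev, cert in zip(chain, chain[1:]):
        if cert["issuer"] != prev["subject"]:
            return frozenset()
    # 3. the last certificate must be issued to the requesting principal
    if chain[-1]["subject"] != requester:
        return frozenset()
    # 4. intersect the ACL entry of the first issuer with the permissions of
    # every certificate; the validity period is intersected as well.
    granted = frozenset(acl.get(resource_manager, ()))
    for cert in chain:
        if not (cert["not_before"] <= now <= cert["not_after"]):
            return frozenset()
        granted &= cert["permissions"]
    return granted

def granted_rights(chains, acl, resource_manager, requester, now):
    """Union of the rights flowing through each individual chain."""
    rights = frozenset()
    for chain in chains:
        rights |= chain_grants(chain, acl, resource_manager, requester, now)
    return rights
```

Note how a misbehaving issuer that delegates more than it has gains nothing: whatever extra permissions a certificate lists are removed by the intersection with the earlier links and with the ACL entry of the resource manager.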
5.6 Local Names
Even if access control can be implemented relying only on authorization
certificates and public keys, this doesn’t imply that names should be avoided
altogether. People are used to managing names in several situations, and they
prefer dealing with names rather than with cryptographic keys, even while
defining security policies. But the names that people habitually use are not
globally unique names. They are rather local names, which need to be unique
only for the person or the organization that defines them.
Even if local names are defined in the namespace of their principal (i.e. a
public key), they can be useful to others, too. Indeed, in [3] the authors show
that a local name can be managed as a global name, if both the public key and
the identifier are listed explicitly, without relying on a globally trusted public
certification authority or a global directory of names. For example, K1 Alice
and K2 Bob are proper names defined by principal K1 and principal K2,
respectively. A more explicit syntax for names could be K1’s Alice, to
emphasize that the identifier Alice is precisely the one defined by K1.
Local namespaces can also be reciprocally linked, by means of extended
names. Extended names consist of a principal followed by two or more
identifiers. Examples of extended names are K1 Alice Bob or K2 Bob Carol,
referring to the entity defined Bob by the entity defined Alice by principal
K1, or to the entity defined Carol by the entity defined Bob by principal K2,
respectively.
Principals are allowed to export the names they define, by signing name
certificates. A name certificate binds a local identifier to a subject expressing
the meaning intended for that name. Public keys, simple names and extended
names are all legal subjects of name certificates.
So, a certificate can link a name to a public key, better separating the name
one uses to refer to a principal from the key that principal uses. Also, having
different certificates that bind a single name to a number of different keys is
perfectly legal. Each issued certificate defines a key as a valid meaning for
the defined name. One can easily assume that each key is a member of a
named group. Given that a name certificate can link a name to another name,
defining complex hierarchies of names, for example to represent roles
and domains, is simply a matter of issuing the appropriate certificates.
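The resolution of simple and extended names can be sketched as follows, in the spirit of the SPKI/SDSI name certificates of [3]. The in-memory representation of certificates is an assumption of the sketch; a real system would verify signed certificates rather than read a dictionary, and would guard against cyclic name definitions.

```python
# Rough sketch of local-name resolution: each (defining key, identifier)
# pair is bound to a set of subjects, where a subject is either a key or
# another (possibly extended) name. Data structures are illustrative.

# K1 Alice -> K2;  K2 Bob -> {K3, K1 Alice}  (a name can map to a name)
certs = {
    ("K1", "Alice"): {"K2"},
    ("K2", "Bob"):   {"K3", ("K1", "Alice")},
}

def resolve(principal, identifiers):
    """Resolve a (possibly extended) name to the set of matching keys."""
    subjects = {principal}
    for ident in identifiers:
        next_subjects = set()
        for subject in subjects:
            for key in _keys_of(subject):
                next_subjects |= certs.get((key, ident), set())
        subjects = next_subjects
    # flatten any remaining name subjects into keys
    keys = set()
    for subject in subjects:
        keys |= _keys_of(subject)
    return keys

def _keys_of(subject):
    """A key names itself; a name is resolved recursively."""
    if isinstance(subject, str):
        return {subject}
    principal, *idents = subject
    return resolve(principal, idents)

print(resolve("K1", ["Alice", "Bob"]))  # the meaning of K1 Alice Bob
```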
5.7 Distributed RBAC
A desirable feature, especially useful when administering a complex platform
with a large number of users, is the ability to grant a set of permissions to a
role. If local names are used, then it’s easy to have roles organized as groups,
where principals can be added simply by issuing a name certificate. If a
principal plays a role (i.e. it is a member of a group), then it will be granted
all the permissions intended for that role. With such a hierarchy of principals
in place, each user can represent a parent node for all his agents and
containers, and each container can represent a parent node for the agents it
hosts.
If a hierarchical organization of principals proves useful, the same holds for
resources. In large complex systems, the ability to delegate the responsibility
to manage a group of resources to a domain administrator is a common
requirement. Domains are especially useful when defining the target of
permissions, as they name an entire set of resources within a single
permission. If agents are organized in a hierarchy of named groups, then a
principal could obtain the right to manage an entire group of them, for
example all the agents owned by Bob, or all the agents hosted on Container1,
or even all the agents hosted on a container owned by Alice. Each principal
can define its own namespace, so each entity that controls some resources can
define its own named groups of resources. As a rule of thumb, permissions
should always be expressed using the namespace of the principal eventually
responsible for access control, so that the authorizer never needs to rely on
external naming authorities.
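A permission whose target is a named group can then be checked by resolving group membership in the authorizer's own namespace. The group names, membership table and permission layout below are hypothetical, used only to illustrate the rule of thumb just stated.

```python
# Illustrative sketch: a permission targeting a named group of resources,
# defined in the authorizer's own namespace. Group names, membership
# bindings and the permission layout are hypothetical.

# group membership certificates issued by the authorizer's key
group_certs = {
    ("K_auth", "bob_agents"):        {"agent1@platform", "agent2@platform"},
    ("K_auth", "container1_agents"): {"agent3@platform"},
}

def members(authorizer_key, group):
    return group_certs.get((authorizer_key, group), set())

def permission_covers(permission, authorizer_key, resource, action):
    """A permission like ('bob_agents', {'kill', 'suspend'}) covers a
    resource if the action is listed and the resource is in the group."""
    group, actions = permission
    return action in actions and resource in members(authorizer_key, group)

perm = ("bob_agents", {"kill", "suspend"})
print(permission_covers(perm, "K_auth", "agent1@platform", "kill"))  # True
print(permission_covers(perm, "K_auth", "agent3@platform", "kill"))  # False
```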
5.8 Trust Management Principles
Up to this point, we have described means to enforce a configurable policy on
the overall agent platform. However, looking at the model defined at the
beginning of this chapter, we can see it as composed of different cooperating
components.
Following the approach of [7], this system can be described as a community
of peers, each one able to play the role of a controller or a requester. If an
entity ultimately controls access to resources, being able to authorize or
refuse requests for their usage, it plays the role of a controller; if an entity
requests to access resources controlled by other entities, it plays the role of a
requester.
To have a sound system, all these peers should adhere to the rules of trust
management. In [6] these rules are summarized as:
1. be specific: ‘Alice trusts Bob’ is too vague a concept; it has to be better quantified in expressions such as ‘Alice trusts Bob to read file.txt in her home directory today’;
2. trust yourself: all trust decisions should be founded on sound, local beliefs; when possible, trusted third parties should be avoided, especially if their mechanisms and policies are not known;
3. be careful: even the best implementation can be violated if users behave superficially and expose reserved data.
Applying these rules requires each component to be described as an
authority, responsible for protecting its local resources and for managing its
trust relations. This modus operandi gives the trust relations among platform
components a better grounding, and the possibility to define policies both at
the container level and at the agent level.
This applies to agents which, on the one side, can define application-specific
permissions and issue certificates granting them to trusted partners and, on
the other side, can use their own private key to sign requests, thus being
acknowledged by other resource managers.
But this applies to containers, too. Indeed, containers have to protect the
resources of the underlying operating system, like files and network
connections, as well as the agents they host. These resources need to be
protected from threats posed by entities external to the container, or even
internal ones, i.e. hosted agents.
Finally, organizing a platform as a community of peers connected by trust
relations allows each component to easily adapt to larger environments,
where multiple platforms interact. And relying only on public keys helps
keep things simple. An agent can seamlessly sign authorization
certificates for its neighbors, or for agents living on different platforms.
Indeed, both local and remote agents are simply identified as keys.
In addition, dealing only with public keys paves the way for a two-layer
security infrastructure. Under a higher level, where interconnections among
agents and containers are handled and secured, a lower level can be sketched,
where generic security means are provided to allow distributed trust
management. This layer can be clearly separated from the agent-related
packages, thus being useful in traditional applications, too. Sharing a
common low-level security infrastructure, where key-based authorities take
charge of the distributed management of access rights, allows agents and
components based on different technologies to interoperate without
weakening resource protection.
5.9 Conclusions
Agent-based systems allow the development of software applications through
the integration of software entities, deployed as autonomous agents which
can collaborate to achieve common goals or compete to satisfy selfish
interests. Thus, applications can be modeled on the same ideas and concepts
that drive human societies, and the delegation of both duties (delegation of
tasks and goals) and permissions (delegation of access rights) is a key issue.
But delegation, to be an effective instrument, must be based on some kind of
trust among collaborating agents. Analyzing trust from a MAS perspective
requires identifying its basic constituents, i.e. the basic beliefs that can drive
agents to adopt delegation effectively, founding their decisions on a
mathematical ground.
But to allow the development of useful trust relations, agent systems should
provide proper security frameworks. Traditional frameworks take advantage
of cryptographic techniques, certificate authorities (CAs) and access control
lists (ACLs) to grant levels of trust amongst peers and users. But a different
approach must be followed to bring security services further, toward largely
distributed agent-based systems where the development of dynamic trust
relations, collaboration and delegation of duties become regular activities of
many entities in a large-scale system.
The adoption of concepts from SPKI and of trust management principles leads
to a model where policies are based on keys instead of names. Authorization
does not need certificate directories for binding names to public keys. This
makes delegation among agents both understandable in its basic constituents,
as each decision can be founded on local beliefs, and sound from a security
point of view, as access to resources can be granted according to local
policies.
5.10 References
[1] Poggi, A., Rimassa, G., Tomaiuolo, M. Multi-user and security support for multi-agent systems. Proc. WOA 2001 (Modena IT, September 2001), 13-18.
[2] Vitaglione, G. JADE Tutorial - Security Administrator Guide. http://jade.cselt.it/doc/tutorials/SecurityAdminGuide.pdf. September 2002.
[3] Ellison, C., Frantz, B., Lampson, B., Rivest, R., Thomas, B., Ylonen, T. SPKI certificate theory. IETF RFC 2693, September 1999.
[4] Aura, T. On the structure of delegation networks. Proc. 11th IEEE Computer Security Foundations Workshop (Rockport MA, June 1998), 14-26.
[5] Blaze, M., Feigenbaum, J., Lacy, J. Decentralized trust management. Proc. 1996 IEEE Symposium on Security and Privacy (Oakland CA, May 1996), 164-173.
[6] Khare, R., Rifkin, A. Weaving a web of trust. World Wide Web Journal, 2, 3 (Summer 1997), 77-112.
[7] Li, N., Grosof, B. A practically implementable and tractable delegation logic. Proc. 2000 IEEE Symposium on Security and Privacy (Oakland CA, May 2000), 29-44.
[8] Paajarvi, J. XML Encoding of SPKI Certificates. Internet Draft draft-paajarvi-xml-spki-cert-00.txt, March 2000.
[9] Jansen, W., Karygiannis, T. Mobile agent security. NIST Special Publ. 800-19.
[10] FIPA. http://www.fipa.org.
[11] JADE. http://jade.tilab.com.
[12] JAAS. http://java.sun.com/products/jaas/.
[13] Castelfranchi, C., Falcone, R. Socio-Cognitive Theory of Trust. http://alfebiite.ee.ic.ac.uk/docs/papers/D1/ab-d1-cas+fal-soccog.pdf.
[14] Castelfranchi, C., Falcone, R. Principles of trust for MAS: cognitive anatomy, social importance, and quantification. Proc. 3rd International Conference on Multi-Agent Systems (Paris, France, 1998), 72-79.
[15] Castelfranchi, C., Falcone, R., Pezzulo, G. Belief sources for trust: some learning mechanisms. Proc. 6th Workshop on Trust, Privacy, Deception and Fraud in Agent Societies (Melbourne, 2003).
[16] Lamsal, P. Understanding Trust and Security. 20 October 2001. http://www.cs.helsinki.fi/u/lamsal/papers/UnderstandingTrustAndSecurity.pdf.
[17] Luhmann, N. Trust and Power, p. 4. Wiley, New York, NY, 1979.
[18] Barber, B. The Logic and Limits of Trust, pp. 9-17. Grammercy Press, New Brunswick, NJ, 1959.
[19] Deutsch, M. Cooperation and Trust: Some Theoretical Notes. In Nebraska Symposium on Motivation, Nebraska University Press, 1962.
[20] Gambetta, D. Can we trust trust? In Trust: Making and Breaking Cooperative Relations, electronic edition, chapter 13, 213-237, 2000.
6 Security in JADE
As a concrete outcome of addressing the security and delegation issues in
multi-agent systems, this section presents a case study about the design and
implementation of multi-user and security support for JADE, a software
framework to develop multi-agent systems in compliance with the FIPA
specifications. JADE supports most of the infrastructure-related FIPA
specifications, like transport protocols, message encoding, and white and
yellow pages agents. Moreover, it provides various tools that ease agent
debugging and management.
Figure 6.1. Current JADE platform
However, no form of security was built into the JADE agent platform before
version 2.61: the system was a single-user system, where all agents belonged
to a single owner and had equal rights and permissions. This meant that
JADE could not be used in several real-world applications, such as electronic
commerce.
The architecture of a JADE system, shown in Figure 6.1, is centered on the
concept of platform, which is essentially a federation of agent containers
possibly distributed across multiple hosts. Apart from the federated
containers, each platform includes other components, such as the AMS (Agent
Management System) and the DF (Directory Facilitator) described by the
FIPA specifications. In particular, the AMS runs a white pages service;
agents can reach it through ACL messages to register or deregister
themselves, and to search for other agents; it is notified by containers about
relevant events, such as the births and deaths of their hosted agents; the AMS,
in its turn, can contact each container when it needs to manage (create, kill,
suspend, resume) the hosted agents. We didn’t explicitly draw it in our
generic model because, even if some systems, like JADE, include it as a
separate component, this is not the general case. In some systems, and maybe
even in future versions of JADE, the functionalities of the AMS could be
carried out by the containers themselves or by the underlying network
infrastructure, as suggested in Figure 6.2.
Figure 6.2. P2P-based JADE platform
The DF, instead, is not directly involved in the management of the platform
and it can be considered as an application-level service. In fact, in JADE its
duties are carried out by an agent, or by a federation of agents, hosted on the
platform.
Securing an agent platform implies that all the hosted resources listed in the
previous section have to be protected, including reserved data, agents,
containers and files. Resources have to be protected from both external and
internal threats, preventing external entities from damaging the platform in
any way and preventing agents from harming other agents and from
accessing resources without the required authorizations.
So, efforts have been directed toward:
1. securing communications among infrastructure-level components, i.e. among containers and with the AMS; this implies the reciprocal authentication of the connected parties, to ensure that only trusted components can join the platform, as well as the integrity and confidentiality of the exchanged data;
2. forcing agents to adhere to the defined policies; this requires each agent to be associated with an authenticable principal and with a certain set of granted permissions; means must be provided for agents to delegate permissions to each other, too.
As far as the first point is concerned, a number of existing technologies are
designed to ensure the protection of transmissions on public networks, among
which the SSL (Secure Sockets Layer) protocol has emerged as the standard
for secure communications on the Internet. SSL is designed to secure
general-purpose network connections, and it can guarantee the integrity and
confidentiality of TCP connections. It also allows the mutual authentication
of both parties in a network connection. This feature allows a container to
protect itself from intrusions, preventing malicious applications from
masquerading as trusted components of the platform. Moreover, as SSL is
placed at the socket level, it can be easily inserted into an existing network
infrastructure: network security is encapsulated at a very low level and its
details remain hidden in that level.
The security model included in JADE since version 2.61 focused on the
protection of a platform from malicious actions taken by external entities or
by hosted agents, and some simplifications were adopted to address these
specific threats: in particular, only one authority and one pair of
cryptographic keys was in place in each platform, associated with the AMS,
and so the responsibility to define the security policy was strongly
centralized, even if agents and containers could ask the authority for
delegation certificates that they could later distribute to their trusted partners.
In the following subsections, we will present a generalized model where
multiple entities, including agents and containers, can sign and distribute
their own certificates.
6.1 Principals, Resources and Permissions
In our system, a principal is any entity that can take actions and can be held
responsible for them. Agents, containers and the AMS are certainly to be
considered principals. Users cannot directly perform actions on the platform,
but they take responsibility for the actions performed on their behalf by their
own agents and containers. So, in a JADE system, users are represented as
principals, too. Even external entities, given that they are able to access the
platform and take some actions on it, for example leveraging the available
message transport protocols, should be considered principals.
The resources needing access protection in multi-agent systems certainly
include all the resources of the underlying environments, such as file systems
and network connections. These resources must be protected from
unauthorized access, leveraging existing security frameworks when feasible.
But multi-agent systems have to protect their agents and their infrastructures,
too.
Remote containers must protect themselves from reciprocal threats.
Unauthorized actions could include suspending or killing an agent, routing a
false message, or closing the container. Agents themselves could represent
threats to their containers, especially when agent mobility is allowed. In
many cases the running environment is based on some kind of reference
monitor, so agent actions can be controlled and limited; but denial of service
and other subtle attacks are difficult to prevent.
In their turn, agents must have a strong trust toward their hosting containers,
as they have no means to prevent a container from stealing their code and
data, or from slowing or stopping their execution. Only after-the-fact actions
can be taken, assuming that a detailed log of system events has been traced.
Permissions express which actions can be taken on which resources. A
typical permission includes the identification of a target, and a list of actions
allowed on that target; both targets and actions are domain dependent.
Permissions are usually stored in policy files and access control lists, where
each known principal is bound to a set of granted permissions.
In multi-agent systems, proper permissions must be available to represent
actions that are specific to their particular context. These permissions should
list (groups of) agents or containers as targets, and then the actions allowed
on them. JADE adds, to the large set of permissions defined by Java, further
permissions describing actions on agents and containers. Actions on agents,
which can be listed in an AgentPermission object, include delivering
messages, suspending, resuming, creating, killing, moving and cloning.
Containers can be created, killed, or asked to host new agents, or to copy or
clone them; these actions can be listed in a ContainerPermission object.
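The target-plus-actions structure of such permissions can be sketched as follows. This is a minimal illustrative model, not the actual JADE AgentPermission class: the class name, fields and the simple equality-based matching are assumptions made for the example.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch (not the actual JADE-S class): a permission that
// names a target agent and the set of actions allowed on it.
public class ToyAgentPermission {
    private final String target;        // e.g. "agt1@platform"
    private final Set<String> actions;  // e.g. {"create", "kill"}

    public ToyAgentPermission(String target, String... actions) {
        this.target = target;
        this.actions = new HashSet<>(Arrays.asList(actions));
    }

    // A granted permission implies a requested one if it covers the same
    // target and all the requested actions.
    public boolean implies(ToyAgentPermission requested) {
        return target.equals(requested.target)
            && actions.containsAll(requested.actions);
    }
}
```

A real implementation would extend java.security.Permission and support target patterns; the sketch only shows the matching contract between granted and requested actions.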
Agents may want to define their own permissions, too. These permissions
should protect application-level resources, such as internal data structures,
network connections, physical devices, files and databases managed by the
application.
6.2 User Authentication
Agents and containers are bound to users at creation time. JADE thus becomes
a multi-user system, similar to modern operating systems, where users ‘own’
agents and containers. This binding is managed by the platform, and the user
is prompted for a classical username and password. JADE containers have a
password file against which passwords are checked. As in many other
systems, hashes are stored instead of clear-text passwords.
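The hash-based check can be sketched as follows. The hash algorithm, encoding and in-memory storage are illustrative assumptions; the actual JADE container implementation may differ.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Sketch of hash-based password checking: only digests are stored, and a
// login attempt is verified by hashing the supplied password and comparing.
public class PasswordFile {
    private final Map<String, String> userToHash = new HashMap<>();

    public void addUser(String user, String password) {
        userToHash.put(user, hash(password));
    }

    public boolean check(String user, String password) {
        String stored = userToHash.get(user);
        return stored != null && stored.equals(hash(password));
    }

    private static String hash(String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```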
Agents can use the permissions defined by the platform policy and, as
mentioned above, also permissions delegated by others. The agent keeps a set
of certificates in its own CertificateFolder; all delegation certificates received
from others are stored there. At creation time, just after the validation of
username and password, the container authority collects all the permissions
contained in the local policy file that refer to that user. It then creates a
delegation certificate containing all those permissions and adds it to the
certificate folder of the agent. This certificate can be used by the agent like
any other delegation; in this way, the system delegates at creation time the
permissions defined by the policy file.
6.3 Certificate Encoding
Various specifications have been proposed to encode authorization
certificates. For example, the original SPKI proposal [3] suggested encoding
every object using s-expressions. Nowadays a better choice could be XML,
allowing the exploitation of existing tools to encode and decode certificates.
Moreover, adopting XML greatly eases the work of signing the certificates,
as XML-DSIG is winning more and more consent from the developers’
community, and good tools are beginning to appear. An example of an
XML-encoded certificate is presented in Figure 6.3, following some of the
guidelines proposed in [8].
<?xml version="1.0" encoding="UTF-8"?>
<Signature xmlns="http://www.w3.org/2000/01/xmldsig/">
  <SignedInfo>
    <CanonicalizationMethod Algorithm="http://www.w3.org/…"/>
    <SignatureMethod Algorithm="http://www.w3.org/2000/01/xmldsig/dsa"/>
    <Reference IDREF="auth-cert_1234">
      <Transforms>
        <Transform Algorithm="http://www.w3.org/…"/>
      </Transforms>
      <DigestMethod Algorithm="http://www.w3.org/2000/01/xmldsig/sha1"/>
      <DigestValue Encoding="http://www.w3.org/2000/01/xmldsig/base64">
        oYh7…TZQOdQOdmQ=</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>gYFdrSDdvMCwCFHWh6R…whTN1k==</SignatureValue>
  <dsig:Object Id="auth-cert_1234" xmlns=""
      xmlns:dsig="http://www.w3.org/2000/01/xmldsig/">
    <cert>
      <issuer><public-key>
        <dsa-pubkey><dsa-p>ewr3425AP…1yrv8iIDG</dsa-p>…</dsa-pubkey>
      </public-key></issuer>
      <subject><hash-of-key><hash hash-alg="sha1">
        AmmGTeQjk65b82Jggdp+0A5MOMo=
      </hash></hash-of-key></subject>
      <delegation/>
      <tag><set>
        <agent-permission>
          <agent-name>agt1@platform</agent-name>
          <agent-action><set>create kill</set></agent-action>
        </agent-permission>
        <agent-permission>
          <agent-name>agt2@platform</agent-name>
          <agent-action><set>send receive</set></agent-action>
        </agent-permission>
      </set></tag>
      <validity>
        <notbefore>2003-04-15_00:00:00</notbefore>
        <notafter>2003-04-18_00:00:00</notafter>
      </validity>
      <comment>
        Subject can create/kill agt1 and communicate with agt2
      </comment>
    </cert>
  </dsig:Object>
</Signature>
Figure 6.3. An XML-encoded authorization certificate
6.4 Access Control
Once it has been delegated a JADE permission, an agent can perform the
permitted actions just as if it had always had the permission to do so. For
example, once it has received a delegation for moving from one container to
another, the agent can simply move as it normally would. Behind the scenes,
the container authority checks every time that the action is really allowed,
looking at the certificate folder of the agent.
First, all the certificates in the certificate folder are verified. This is
performed locally by the container hosting the agent, since certificates carry
all the information needed to complete the task. At this point a new Java
protection domain is created with all the delegated permissions and bound to
the executing thread. Our approach mimics the JAAS [12] principal-based
access control mechanism, but extends it to deal with the delegation of policy
responsibilities among trusted components.
In fact, while JAAS assumes that the policy resides in a local file and that all
policy decisions can be defined locally, this is not always possible in
distributed and evolving environments. Moreover, in multi-agent systems,
principals are not necessarily bound to a fixed set of permissions, as an agent
can play different roles at different times, exploiting only a subset of the
certificates and permissions it has been delegated.
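The construction of a protection domain from delegated permissions can be sketched with the standard java.security classes. This is only an illustration of the classes involved, not the actual JADE-S wiring; note also that these APIs are deprecated in recent JDKs.

```java
import java.security.CodeSource;
import java.security.Permissions;
import java.security.ProtectionDomain;
import java.util.PropertyPermission;

// Sketch: collect the permissions extracted from verified delegation
// certificates into a Permissions object, then wrap them in a
// ProtectionDomain, much as a JAAS-like mechanism binds a domain to the
// executing thread.
public class DelegatedDomain {
    public static ProtectionDomain buildDomain() {
        Permissions granted = new Permissions();
        // In JADE these would come from the agent's certificate folder;
        // a PropertyPermission stands in as an example here.
        granted.add(new PropertyPermission("user.home", "read"));
        return new ProtectionDomain(
            new CodeSource(null, (java.security.cert.Certificate[]) null),
            granted);
    }
}
```

With the two-argument constructor the domain is static: its implies() method consults exactly the granted collection, which is the behavior needed when all permissions come from certificates rather than from a policy file.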
6.5 Semi-Automated Trust Building
As an example of leveraging our security model to build trust relationships,
here we briefly describe a mechanism to access a resource or service,
according to certain access control rules, in a semi-automatic fashion.
An agent ‘A’ wants to request a certain service from agent ‘B’. Access to this
service is ruled by a Policy Decision Point (PDP), which makes use of a
certain access control policy. Agent ‘A’ also owns a certain set of credentials
kept in its Credential Folder; common types of credentials are signed
attribute or delegation certificates, tokens, one-time passwords, etc.
Agent ‘A’ is helped in establishing a trusted relation with ‘B’ by a
Negotiator, which extracts the proper credentials from its folder as a function
of the requested action type and parameters. Agent ‘A’ can also explicitly
pass some credentials to its negotiator to contribute to establishing the
relationship.
Figure 6.4. A mechanism for semi-automatic trust building through
credentials exchange
The negotiators of the two agents may exchange several credentials until the
PDP can make a final decision. On the ‘B’ side, an Assertion Collector
verifies both the formal validity of the credentials and their validity in terms
of trust, according to pre-existing relationships: for instance, the Assertion
Collector might accept a delegation chain whose root is already one of its
trusted parties. Until certain conditions are met, the Assertion Collector tries
to reach a condition of allowed access for the requested action.
Requests for more credentials by the negotiator of ‘B’ might take the form of
a policy definition fragment (excerpt), expressed in terms of the policy
constraints that are not yet satisfied, so that the negotiator of ‘A’ has a
precise clue about which type of credential it is supposed to provide.
Credential requests can be performed on both sides, so that the negotiator of
‘A’ might also ask for credentials from the negotiator of ‘B’. This enables a
complex interaction that can go far beyond simple mutual identification, and
complex algorithms could be adopted to achieve an efficient interaction.
A mechanism like this can leverage basic security mechanisms, such as those
discussed in the previous sections, to establish sound trust relationships
amongst the agents of a distributed system.
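The negotiation loop described above can be sketched as follows. Credentials and policy constraints are plain strings here, and the one-request-at-a-time protocol is an assumption for the example; the real system would exchange signed certificates and policy fragments.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Sketch of the exchange between A's Negotiator and B's Assertion
// Collector: B requests each still-unsatisfied constraint, A supplies a
// matching credential from its folder, and the PDP decides at the end.
public class TrustNegotiation {
    public static boolean negotiate(Set<String> aCredentials,
                                    Set<String> requiredByPolicy) {
        Set<String> collected = new HashSet<>();
        Deque<String> pending = new ArrayDeque<>(requiredByPolicy);
        while (!pending.isEmpty()) {
            String request = pending.poll();   // B asks for a missing credential
            if (!aCredentials.contains(request)) {
                return false;                  // A cannot provide it: denied
            }
            collected.add(request);            // Assertion Collector stores it
        }
        return collected.containsAll(requiredByPolicy); // PDP final decision
    }
}
```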
6.6 Conclusions
The concrete implementation of a security infrastructure for JADE is a direct
outcome of the design choices highlighted in the previous section.
Specifically, adherence to trust management principles has led to a
completely decentralized system, where agents are responsible for building
their own trust relations by issuing proper delegation certificates. Moreover,
as a JADE platform is composed of a number of components, each one of
them has been modelled as a separate principal. In this way, it is possible to
take into account the threats posed by components living in the same
platform.
Protected resources include those of the underlying system, as well as agents
and their containers. Each agent is associated with a human, who is
ultimately held responsible for it.
Delegation certificates allow permissions to be bound to principals, and so to
each component of the whole system. While their structure is directly derived
from the SPKI specifications, the certificate encoding is instead based on
XML, to better integrate with web-based tools and applications.
6.7 References
[1] Poggi, A., Rimassa, G., Tomaiuolo, M. Multi-user and security support for multi-agent systems. Proc. WOA 2001 (Modena, IT, September 2001), 13-18.
[2] Vitaglione, G. JADE Tutorial - Security Administrator Guide. http://jade.cselt.it/doc/tutorials/SecurityAdminGuide.pdf. September 2002.
[3] Ellison, C., Frantz, B., Lampson, B., Rivest, R., Thomas, B., Ylonen, T. SPKI certificate theory. IETF RFC 2693, September 1999.
[4] Aura, T. On the structure of delegation networks. Proc. 11th IEEE Computer Security Foundations Workshop (Rockport, MA, June 1998), 14-26.
[5] Blaze, M., Feigenbaum, J., Lacy, J. Decentralized trust management. Proc. 1996 IEEE Symposium on Security and Privacy (Oakland, CA, May 1996), 164-173.
[6] Khare, R., Rifkin, A. Weaving a web of trust. World Wide Web Journal, 2, 3 (Summer 1997), 77-112.
[7] Li, N., Grosof, B. A practically implementable and tractable delegation logic. Proc. 2000 IEEE Symposium on Security and Privacy (Oakland, CA, May 2000), 29-44.
[8] Paajarvi, J. XML Encoding of SPKI Certificates. Internet Draft draft-paajarvi-xml-spki-cert-00.txt, March 2000.
[9] Jansen, W., Karygiannis, T. Mobile agent security. NIST Special Publ. 800-19.
[10] FIPA. http://www.fipa.org.
[11] JADE. http://jade.tilab.com.
[12] JAAS. http://java.sun.com/products/jaas/.
[13] Castelfranchi, C., Falcone, R. Socio-Cognitive Theory of Trust. http://alfebiite.ee.ic.ac.uk/docs/papers/D1/ab-d1-cas+fal-soccog.pdf.
[14] Castelfranchi, C., Falcone, R. Principles of trust for MAS: cognitive anatomy, social importance, and quantification. Proc. 3rd International Conference on Multi-Agent Systems (Paris, France, 1998), 72-79.
[15] Castelfranchi, C., Falcone, R., Pezzulo, G. Belief sources for trust: some learning mechanisms. Proc. 6th Workshop on Trust, Privacy, Deception and Fraud in Agent Societies (Melbourne, 2003).
[16] Lamsal, P. Understanding Trust and Security. 20th of October 2001. http://www.cs.helsinki.fi/u/lamsal/papers/UnderstandingTrustAndSecurity.pdf.
[17] Luhmann, N. Trust and Power. Wiley, New York, NY, 1979, p. 4.
[18] Barber, B. The Logic and Limits of Trust. Grammercy Press, New Brunswick, NJ, 1959, 9-17.
[19] Deutsch, M. Cooperation and Trust: Some Theoretical Notes. In Nebraska Symposium on Motivation, Nebraska University Press, 1962.
[20] Gambetta, D. Can we trust trust? In Trust: Making and Breaking Cooperative Relations, electronic edition, chapter 13, 213-237, 2000.
7 Security in openNet
The Agentcities / openNet initiative [10] is an effort to make different
systems interoperable, in such a way as to enable the semantic-aware,
dynamic composition of services. These services can potentially be offered
by different organizations, and implemented using different technologies and
systems.
As both grid and agent technologies deal with the problem of service
composition, the two concepts have begun to be used together: the term
“agent grid” indicates a new generation of grid systems in which agents serve
as both enablers and customers of grid capabilities. Since the term is the
union of two words for which no unique, commonly accepted definition
exists, the “agent grid” concept is, even more so, used to refer to different
things and seen from different, though quite related, perspectives. Our view is
that an “agent grid” refers to an infrastructure that can facilitate and enable
information and knowledge sharing at the semantic level, in order to support
knowledge integration and service composition.
To realize an “agent grid” system, two different lines are possible:
1. extend grid middleware to support agent features;
2. extend agent-based middleware to support grid features.
Following the second line, an important question arises: is there at present
any implemented agent-based middleware suitable for realizing a grid?
Probably, at this time, no agent-based middleware may be used for the
realization of a “true” grid system.
Considering JADE [1], the leading open-source framework for the
development of multi-agent systems: it provides lifecycle services,
communication and ontology support, security and intra-platform mobility
support, persistence facilities, and system management. Without a doubt
these features represent an important part of realizing the Grid vision, but we
believe that more is needed in order to have a “true” grid.
In the context of the openNet project, our aim is to enhance the JADE
framework, adding mechanisms for code distribution, reconfiguration, goal
delegation, load-balancing optimization and QoS definition.
This section presents a short review of the research conducted in the field of
the agent grid, and then the results achieved in the first phase of our
multiphase effort. In particular, it describes the advantages provided by the
integration of a rule-based framework, Drools, and a scripting engine,
Janino, inside JADE.
In our opinion there could be a significant synergy between agents and grids
when the problem to be solved concerns the execution of an application or
service composed of a large set of independent or loosely coupled tasks,
particularly when some interactions among tasks, and even some of the tasks
to be executed, may only be defined during the execution of the application.
In fact, this kind of problem requires an intelligent movement of tasks (from
one node to another) to reduce the high communication cost of managing the
interaction between remote agents (nodes), and requires intelligent task
composition and generation to cope with the management of the new
interactions and tasks defined during the execution of the application.
On the basis of the previous idea, we started to improve the JADE agent
development framework to make it suitable for realizing “true” grid agent
systems. Our first steps were the realization of new types of agents that
support:
• rule-based creation and composition of tasks;
• mobility of code at the task level (i.e., JADE behaviors or simply rules are
exchanged by agents).
7.1 Rule-based agents
Drools [2] is a rule engine that implements the well-known Forgy Rete
algorithm [3]. Drools is open source (LGPL), so it provides important
advantages with respect to the use of commercial products like JESS [4].
Inside the Drools environment a rule is represented by an instance of the Rule
class: it specifies all the data of the rule itself, including the declaration of
needed parameters, the extractor code to set local variables, the
preconditions making the rule valid, and the actions to be performed as a
consequence of the rule. Rule objects can be loaded from XML files at
engine startup, or even created and added to the working memory
dynamically.
Rules contain scripts in their condition, consequence and extractor fields.
The scripts can be expressed in various languages, for example Python,
Groovy and Java. In the last case, the script is executed by the embedded
Janino engine.
When a rule is scheduled for execution, i.e. all its preconditions are satisfied
by asserted facts, Drools creates a new instance of a Janino namespace, sets
the needed variables inside it, and invokes the Janino interpreter to execute
the code contained in the consequence section of the rule.
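The fire-when-preconditions-hold cycle can be illustrated with a deliberately naive sketch. A real Rete engine matches facts incrementally and far more efficiently; this toy loop, with invented class names, only shows the contract between facts, conditions and consequences.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Minimal rule-engine sketch: a rule pairs a precondition over the set of
// asserted facts with a consequence to execute when the precondition holds.
public class TinyRuleEngine {
    public static class Rule {
        final Predicate<Set<String>> condition;   // precondition over facts
        final Consumer<List<String>> consequence; // action to execute
        public Rule(Predicate<Set<String>> c, Consumer<List<String>> q) {
            condition = c;
            consequence = q;
        }
    }

    // Evaluate every rule once against the asserted facts and execute the
    // consequences of those whose preconditions are satisfied.
    public static List<String> run(Set<String> facts, List<Rule> rules) {
        List<String> log = new ArrayList<>();
        for (Rule r : rules) {
            if (r.condition.test(facts)) {
                r.consequence.accept(log);
            }
        }
        return log;
    }
}
```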
7.1.1 Drools4JADE
The concrete implementation of the proposed system is the direct result of
the evaluations exposed in the preceding sections. In particular, we decided
not to start from scratch with the development of a totally new agent
platform; instead, we judged that existing solutions had demonstrated over
time to be a sound layer on which more advanced functionalities could be
added. The chosen system was JADE [1]: past experiences in international
projects proved it preferable to other solutions, thanks to its simplicity,
flexibility, scalability and soundness. As already argued, over the integration
of JADE with Jess, yet valid in some contexts, we preferred the integration
with an open-source, object-oriented software like Drools. To the rich
features of Drools, we added support for communications through ACL
messages, typical of FIPA agents. Drools rules can reference ACL messages
in both their precondition and consequence fields, which are expressed in the
Java language and executed by the embedded Janino interpreter. Moreover,
complete support was provided to manipulate facts and rules on Drools
agents through ACL messages.
Drools agents expose a complete API to allow the manipulation of their
internal working memory. Their ontology defines AgentAction objects to add
rules and to assert, modify and retract facts.
7.1.2 Application-level security
Mobility of rules and code among agents cannot be fully exploited if all the
security issues that arise are not properly addressed.
All the actions requested to a Drools-enabled agent must be joined with an
authorization certificate. Only authorized agents, i.e. the ones that show a
certificate listing all needed permissions, can perform requested actions.
Moreover, the accepted rules will be confined in a specific protection
domain, instantiated according to their own authorization certificate.
While mobility of rules and code among agents paves the way for real
adaptive applications, it cannot be fully exploited if all the security issues that
arise aren't properly addressed. The approaches to mobile code security are
different, depending on the particular threats that should be faced. In the
context of our applications, we decided to leave out the problem of threats of
hosting environments against received code. These issues are harder to face,
and solutions often rely on detection means, more than prevention ones.
In our work, instead we focused on the problem of receiving potentially
malicious code, that could harm the hosting agent and its living environment.
For this purpose, we leveraged on JadeS [6], the security framework that is
already available for JADE, to implement two different layers of protection.
The security means we implemented in our system greatly benefit from the
existing infrastructure provided by the underlying Java platform and by
JADE. The security model of JADE deals with traditional user-centric
concepts, such as principals, resources and permissions. Moreover, it
provides means to allow the delegation of access rights among agents, and
the implementation of precise protection domains, by means of authorization
certificates issued by a platform authority.
In the security framework of JADE, a principal represents any entity whose
identity can be authenticated. Principals are bound to single persons,
departments, companies or any other organizational entity. Moreover, in
JADE even single agents are bound to a principal, whose name is the same as
the one assigned by the system to the agent; with respect to its own agents, a
user constitutes a membership group, thus making it possible to grant
particular permissions to all the agents launched by a single user.
The resources that the JADE security model cares for include those already
covered by the Java security model: local file-system elements, network
sockets, environment variables, database connections. But there are also
resources typical of multi-agent systems that have to be protected against
unauthorized access; among these, agents themselves and agent execution
environments must be considered.
A permission is an object which represents the capability to perform actions.
In particular, JADE permissions, inherited from the Java security model,
represent access to system resources. Each permission has a name, and most
of them also include a list of actions allowed on the object.
To take a decision when a resource is accessed, access control functions
compare the permissions granted to the principal with the permissions
required to execute the action; access is allowed if all the required
permissions are owned.
When an agent is requested to accept a new rule or task, a first access
protection involves authenticating the requester and checking its
authorization to perform the action: can the agent really ask to add a new
rule, or to have a given task performed on its behalf? To do so, the requester
needs some permissions. In particular, a DroolsPermission object can
authorize the execution of requests such as adding or removing rules, or
adding, removing and manipulating facts.
So, only authenticated and authorized agents can successfully ask another
agent to accept rules and tasks. But up to this point the security measures go
no further than what other technologies, like ActiveX, already offer. In fact,
once the request to perform a given task is accepted, no more control on the
access to protected resources can be enforced: the agent can choose to trust,
or not to trust, but if the request is accepted, then the power of the received
code cannot be limited in any way.
Instead, to deploy the full power of task delegation and rule mobility, the
target agent should be able to restrict the set of resources made accessible to
the mobile code. Agents should be provided with means to delegate not only
tasks, but also the access rights needed to perform those tasks. This is exactly
what is made possible through the security package of JADE, where
distributed security policies can be checked and enforced on the basis of
signed authorization certificates.
In our system, every requested action can be accompanied by a certificate,
signed by a known and trusted authority, listing the permissions granted to
the requester. Permissions can be obtained directly from a policy file, or
through a delegation process, by which an agent can further delegate a set of
permissions to another agent, provided that it can itself prove possession of
those permissions.
The final set of permissions received through the request message can then
be used by the servant agent to create a new protection domain wrapping the
mobile code during its execution, protecting access to the resources of the
system, as well as those of the application.
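The wrapping of mobile code in a domain limited to the delegated permissions can be sketched with the standard java.security classes. This is illustrative only: the class and method names are invented for the example, the actual JadeS wiring differs, and these APIs are deprecated in recent JDKs.

```java
import java.security.AccessControlContext;
import java.security.AccessControlException;
import java.security.CodeSource;
import java.security.Permission;
import java.security.Permissions;
import java.security.ProtectionDomain;

// Sketch: build a protection domain holding only the delegated permissions
// and check a requested access against it, as the servant agent would do
// for received rules or tasks.
public class MobileCodeSandbox {
    public static boolean allowed(Permissions delegated, Permission requested) {
        ProtectionDomain domain = new ProtectionDomain(
            new CodeSource(null, (java.security.cert.Certificate[]) null),
            delegated);
        AccessControlContext context =
            new AccessControlContext(new ProtectionDomain[] { domain });
        try {
            context.checkPermission(requested);  // throws if not delegated
            return true;
        } catch (AccessControlException denied) {
            return false;
        }
    }
}
```

In a full implementation the mobile code would run inside AccessController.doPrivileged with such a context, so that every sensitive access performed by the received rule is checked against the delegated permissions only.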
7.2 Interoperability
One of the main goals of the whole Agentcities/openNet initiative is to
realize an integrated environment, where systems built on different models
and technologies can interoperate, both providing and accessing services.
For this purpose, it is also important to use security models which enable a
corresponding interoperability with regard to the management and delegation
of privileges, allowing trusted partners to access protected resources even
when their particular applications are founded on different models and
technologies.
7.2.1 XrML and ODRL
XrML [11] and ODRL [12] are two different proposals, both based on XML,
which are making their way in the management of digital rights for media
content distribution. Both XrML and ODRL are based on previous work
done at Xerox PARC, which resulted in the definition of the Digital Property
Rights Language (DPRL), and many of their distinctions are simply
linguistic, naming similar fields in different ways. The real difference lies in
their intended domain: while XrML has a broader applicability, ODRL seems
more oriented to the specific media-publishing market; in fact it specifies
media formats, resolutions and frame rates, while XrML doesn’t. Both
languages, under some restrictions, can also be used to delegate access rights
to other users.
Apart from their differences, however, both languages are oriented to the
management of digital rights for publishing and accessing media content,
and can hardly fit different applications. Moreover, they are supported by
few software applications, and virtually none in the public domain.
7.2.2 SAML Overview
Traditionally, the problem of identity management was considered
equivalent to PKI, and in some sense it is. In practice, however, all efforts to
deploy an X.509 infrastructure have fallen below expectations. Professionals
share with users a widespread bad taste about PKI.
“PKI is expensive and hard to manage, even harder to use for the average
human, and implementations lack the broad interoperability the standards
promised.” (Jamie Lewis, Burton Group CEO and Research Chair) [15]
We have discussed in previous chapters the trend towards “trust
management”, which deals with identity in a radically different way. Today
there is an opportunity for this trend to finally make its way into everyday
applications. Web services, while not being the solution to every problem, as
many “technology evangelists” claim, are finally moving the focus to local
resource management in a global, federated, environment, paving the way
for a “trust management” infrastructure.
Probably the success of peer-to-peer applications, as well as the potential of
the Grid and of ubiquitous computing, will eventually overcome the
resistance of the large businesses operating in the certification arena.
Even more concretely, there is widespread interest in avoiding a whole new
security infrastructure, and instead fully exploiting and integrating the
existing, different security models and mechanisms which have already been
deployed. These vary according to the degree of “sensitivity” associated with
the protected data and resources, and include plain-text username/password
pairs, Kerberos, X.509, KeyNote, SPKI and various “trust management”
infrastructures.
While SAML [13] allows the exploitation of digital signature and PKI
technologies, its specifications are not about the deployment of some PKI,
but about their use in a federated environment along with other technologies.
The Liberty Alliance, for example, concentrates its work on single sign-on
(SSO), to allow the use of services from different providers without
repeating the login operation at every site.
The approach of SAML [13] is radically different from X.509, above all
because its specifications start from realistic use cases, which deal with
problems that traditional, X.509-based PKI was never able to solve. The lack
of attention to real-world cases is probably one of the worst characteristics of
X.509. SAML and federated identity instead deal with the problem of system
security following a bottom-up software-engineering approach, taking into
account already existing infrastructures.
Instead of defining, and imposing, a top-down model, SAML and federated
security credentials enable already deployed systems to grow and join others,
on the basis of precise and limited agreements. In this way, the experience
gained during the last years in the implementation and deployment of
security infrastructures is not lost; on the contrary, it is the basis for the new
generation of integrated security systems.
Moreover, SAML is based on XML, and so it easily integrates with other
XML and Web-services based applications. It can leverage existing
standards and protocols, like XML Digital Signature, XML Encryption,
SOAP, WSDL and WS-Security.
7.2.3 SAML Specifications
The first version of SAML was standardized by OASIS in November 2002;
versions 1.1 and 2.0 were released in the following years. Since its initial
development, the definition of the requirements of SAML has been driven by
three use cases:
• web single sign-on (SSO),
• attribute-based authorization,
• web services security.
More scenarios were chosen, for each use case, to provide a more detailed
description of the involved interactions. From this analysis a generic
architecture emerged, describing the various actors and their interactions.
Figure 7.1. SAML architecture [13]
From the picture above, one soon notices that SAML itself deals with three
different kinds of assertions:
• authentication assertions,
• attribute assertions,
• authorization decision assertions.
Authorization decision assertions are a somehow “frozen” feature in the
current specifications, which suggest that a better solution is to rely on other
available standards for security policies, like XACML. A profile to integrate
XACML authorization decisions into a SAML assertion has been
standardized with SAML 2.0.
The three types of assertions are issued, in principle, by three different
authorities. The Policy Enforcement Point (PEP) is the component of the
system which takes care of analyzing the provided assertions and of
enforcing the resulting authorization decisions, i.e. whether to grant or deny
access to a protected resource.
The generic structure of a SAML assertion is depicted in the following
picture, which makes evident that it is very similar to what is usually called a
“digital certificate”.
As in every other certificate, an issuer attests some properties about a
subject, digitally signing the document to prove its authenticity and to avoid
tampering. Conditions can be added to limit the validity of the certificate: as
usual, a time window can be defined; moreover, validity can be limited to a
particular audience or to a one-time use. Conditions can also be put on the
use of the certificate by proxies who want to sign further assertions on its
basis.
7.2.4 SAML from the “Trust Management” Perspective
Being designed to allow interoperability among very different security
systems, SAML offers a variety of schemes to format security assertions. In
particular, there's a number of possible ways to represent a subject, which
also allow to keep away X.500 directories and DN names.
One interesting possibility is to use a SubjectConfirmation object to
represent a subject directly by its public key, which resembles the basic
concept of SPKI, where, in the end, principals “are” always public keys.
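This SPKI view of a principal can be sketched in a few lines: a subject is identified by its key, or by a collision-resistant hash of it, rather than by a directory name. The key bytes below are placeholders, not real keys.

```python
import hashlib

def principal_id(public_key_bytes: bytes) -> str:
    """Derive a self-authenticating principal identifier from a public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

# Two subjects are the same principal exactly when their keys match;
# no trusted directory is needed to decide this.
alice = principal_id(b"alice-public-key-bytes")
same  = principal_id(b"alice-public-key-bytes")
other = principal_id(b"bob-public-key-bytes")
assert alice == same and alice != other
```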
As to the representation of SPKI authorization certificates, it would be
important to have access rights, or permissions, associated with the subject.
Simple authorization decisions could be encoded directly in SAML assertions
up to version 1.1. In the latest specifications, these assertions are
considered “frozen”, even if not yet deprecated. However, the very same
specifications suggest alternative schemes, first of all the integration of
an XACML policy into a SAML assertion. The precise way to accomplish this is
described in a separate profile, which will be briefly discussed in the
following pages.
Figure 7.2. Schema of a SAML Subject [13].
Apart from direct delegation of permissions, however, SPKI-like trust
management frameworks can also be used to implement distributed RBAC access
control systems, as discussed in [17]. For this purpose, local names are
particularly important, as they allow each principal to manage its own name
space, which, on the other hand, is also one of the foundations of
“federated identity” and SAML.
In fact, while SAML allows the use of X.509 distinguished names, it also
supports a number of other heterogeneous naming schemes. In this sense, its
reliance on XML for assertion encoding is not irrelevant, as it provides
intrinsic extensibility through schemas and namespaces.
Assigning a local name to a public key, or to a set of public keys, is as
simple as defining a role, since in SAML neither names nor roles are
considered globally unique by design. Assigning a named principal to a local
name, or to a role, is also perfectly possible.
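The resolution of such local names can be sketched as follows, in the SPKI style: each principal manages its own name space, binding a local name (or role) either to keys directly or to another principal's local name. The keys and names below are hypothetical, and no cycle detection is attempted in this sketch.

```python
# A name is a (principal key, local identifier) pair; a definition maps a
# name to a set of entries, each either a key (str) or another name (tuple).
defs = {
    ("K_acme", "engineer"): {"K_alice", ("K_dept", "staff")},
    ("K_dept", "staff"): {"K_bob"},
}

def resolve(name, definitions):
    """Resolve a local name to the set of keys it ultimately denotes."""
    keys = set()
    for entry in definitions.get(name, ()):
        if isinstance(entry, tuple):   # a named principal: recurse into it
            keys |= resolve(entry, definitions)
        else:                          # a bare key: include it directly
            keys.add(entry)
    return keys

resolve(("K_acme", "engineer"), defs)   # → {"K_alice", "K_bob"}
```

Note how `K_acme` can grant its `engineer` role to `K_dept`'s `staff` without knowing, or caring, which keys `K_dept` will bind to that name: exactly the kind of decentralized name management the text describes.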
Of course, allowing interoperability with existing X.509 infrastructures is
important, and in fact X.509 certificates can be used in conjunction with
SAML assertions as a means of authentication. Moreover, an X.500/LDAP
attribute profile has been defined for the representation of X.500 names and
attributes when expressed as SAML attributes.
7.2.5 Authentication Context
In reality, until now, the main application area of SAML has been federated
identity and SSO, and thus, above all, authentication. Authentication,
however, does not play a direct role in trust management systems, as
identity is kept out of the authorization process and is used only if an
out-of-band verification is requested, for example for legal purposes.
Nevertheless, SAML-based authentication comes with some useful features,
even from a trust management perspective. In fact, in [16] the authors
already noted that the whole X.509 PKI model rested on a questionable
assumption, namely that the issuer alone decides the conditions under which
the certificate must be considered valid, and the enabled uses of the public
key. This certainly makes sense, since the issuer must be able to define the
limits of its delegation; however, the role of the final user should also be
acknowledged, as it is the entity who eventually takes a risk by accepting
the certificate. This aspect is particularly important in a business
environment, for example.
Thus, if the relying party has to place some confidence in the certificate,
it may need additional information about the assertion itself. SAML allows
the authentication authority to specify which technologies, protocols, and
processes were used for the authentication. This Authentication Context can
include – but is not limited to – the actual authentication method used (for
example, face-to-face, online, shared secret). Much more detail can be
specified, allowing confidence in a certificate to be put on a much clearer
basis than that allowed by ambiguous X.509 policies and related extensions.
7.2.6 XACML Overview
The eXtensible Access Control Markup Language (XACML) [14] is a
language for specifying role or attribute based access control policies. It is
standardized by the OASIS group and, at the time of this writing, its latest
release is 2.0.
A high level model of the XACML language is shown in Figure 7.3. Its main
components are:
• Rule – the basic element of each policy;
• Policy – a set of rules, together with the algorithms to combine them, the
intended target and some conditions;
• Policy set – a set of policies, together with the algorithms to combine
them, the intended target and some obligations.
In particular, each XACML rule is specified through its:
• target – indicating the resources, the subjects, the actions and the
environment to which the rule applies;
• effect – which can be Permit or Deny;
• condition – which can further refine the applicability of the rule.
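The evaluation of such a rule can be sketched as follows. This is a deliberately minimal model, not the actual XACML evaluation machinery: targets are flat attribute maps and conditions are plain predicates, both assumptions made here for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Rule:
    target: Dict[str, str]                        # attribute -> required value
    effect: str                                   # "Permit" or "Deny"
    condition: Optional[Callable[[dict], bool]] = None

def evaluate(rule: Rule, request: dict) -> str:
    """Return the rule's effect if it applies, "NotApplicable" otherwise."""
    # The target restricts applicability: every attribute must match.
    if any(request.get(k) != v for k, v in rule.target.items()):
        return "NotApplicable"
    # The condition, if present, further refines applicability.
    if rule.condition is not None and not rule.condition(request):
        return "NotApplicable"
    return rule.effect

r = Rule(target={"resource": "report", "action": "read"},
         effect="Permit",
         condition=lambda req: req.get("role") == "manager")

evaluate(r, {"resource": "report", "action": "read", "role": "manager"})  # "Permit"
evaluate(r, {"resource": "report", "action": "read", "role": "clerk"})    # "NotApplicable"
```

A Policy would then combine the results of several such rules through a combining algorithm (e.g. deny-overrides), which is omitted from this sketch.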
Figure 7.3. Model of the XACML Language [14]
7.2.7 XACML from the “Trust Management” Perspective
As described in the previous sections, trust management is based on public
keys as a means to identify principals, and on authorization certificates to
allow the delegation of access rights among principals. SAML defines some
rudimentary structures to convey authorization decisions in assertions.
However, these structures are not able to convey all the information that
can be represented using the XACML language. On the other hand, XACML lacks
means to protect the requests and responses of its Policy Enforcement Points
(PEP). It is clear, and so it appeared to both the SAML and the XACML
working groups, that the two languages were in many senses complementary,
and thus a SAML profile of XACML was defined. It effectively makes the two
languages work together in a seamless way.
From the “trust management” perspective, the conjunction of SAML and XACML,
in particular the inclusion of XACML authorization decisions into SAML
assertions, provides a rich environment for the delegation of access rights.
From this point of view, the fact that logical foundations of the XACML
language exist is very important, as they provide XACML with a clear
semantics. The problem is to find algorithms through which the combination
of permissions granted in a chain of certificates can be computed in a
deterministic way, as is already possible in SPKI.
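The deterministic combination the text calls for can be sketched, under the simplifying assumption that each certificate carries a flat set of permission tags, as an intersection along the chain (the idea behind SPKI's tuple reduction):

```python
def combine_chain(chain):
    """Permissions flowing through a delegation chain, from the owner outward.

    Each element of `chain` is the set of permissions one link grants; a
    delegate can never pass on more than it was itself granted, so the
    result is the intersection of all links. This is deterministic: the
    same certificates always yield the same permission set.
    """
    granted = set(chain[0])
    for link in chain[1:]:
        granted &= set(link)
    return granted

owner     = {"read", "write", "delete"}   # what the resource owner grants
delegate1 = {"read", "write"}             # what the first delegate passes on
delegate2 = {"read", "append"}            # "append" is discarded: never held
combine_chain([owner, delegate1, delegate2])   # → {"read"}
```

The permission names are illustrative; the point is only that intersection is monotone and order-independent, which is what makes the SPKI-style computation tractable.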
In fact, even if the semantics of an XACML policy is logically sound, subtle
problems can nevertheless appear when different policies are linked in a
chain of delegation assertions. One major problem concerns the monotonicity
of authorization assertions, which cannot be guaranteed in the general case.
Using XACML authorization decisions as SAML assertions, it is possible to
assert that access to a particular resource is denied, instead of allowed.
Though being a perfectly legal and meaningful concept, the denial of a
permission (a “negative permission”) is not desirable in decentralized
environments. In this case, a service provider can never allow access, as it
cannot be sure to possess all issued statements. On the other hand, the
non-monotonicity of the system can also lead to attacks, as issued
assertions can be prevented from reaching the provider, thus leading it to
take wrong authorization decisions.
Therefore, it is necessary to define a specific profile of SAML and XACML
which could enable the secure delegation of permissions in decentralized
environments. One of the first requirements is to make “negative
permissions” illegal.
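A minimal sketch of this restriction, assuming assertions are represented as simple dictionaries with a `decision` field (an illustrative encoding, not the SAML schema):

```python
def accept_assertion(assertion: dict) -> bool:
    """Accept only 'positive' authorization decisions.

    In a profile for decentralized delegation, Deny decisions are rejected
    outright: with Permit-only assertions, a provider that misses some of
    the issued statements can only grant *fewer* permissions, never more,
    so withholding assertions cannot trick it into a wrong grant.
    """
    return assertion.get("decision") == "Permit"

assertions = [
    {"resource": "printer", "decision": "Permit"},
    {"resource": "printer", "decision": "Deny"},   # illegal in this profile
]
usable = [a for a in assertions if accept_assertion(a)]
len(usable)   # → 1
```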
7.2.8 Threshold Subjects
In SPKI, threshold subjects are defined as a special kind of subjects, to use
only in authorization certificates. In [16] authors question the usefulness of
this construct, arguing it is used as an alternative to simulate the conjunction
and the disjunction of subjects. Moreover, they provide an intuitive meaning
for threshold subjects when used in name certificates, also.
XACML does not support threshold subjects in their general form, but the
conjunction of multiple subjects is possible. In particular, XACML allows
the association of multiple subjects with each access request, representing
the multiple entities which are responsible for the request. For example,
the request could originate from a user, but it could also be mediated by
one or more middle agents, and some computing devices could be taken into
account as well, being represented, for example, by their IP address.
The XACML Multi-Role Permissions profile specifies a way to grant
permissions only to principals playing several roles simultaneously. This kind
of policy can be defined by using a single subject in its target, but adding
multiple subject-match elements to it.
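The multi-role check itself reduces to a subset test, which can be sketched as follows; the role names are hypothetical and no claim is made about the actual XACML matching machinery.

```python
def multi_role_permit(required_roles: set, held_roles: set) -> bool:
    """Grant only if the requester plays all required roles simultaneously."""
    return required_roles <= held_roles   # subset test: conjunction of roles

# A permission reserved to principals who are both a doctor AND on duty:
multi_role_permit({"doctor", "on-duty"}, {"doctor", "on-duty", "admin"})  # True
multi_role_permit({"doctor", "on-duty"}, {"doctor"})                      # False
```

Note that this conjunction is monotone: presenting additional role credentials can only turn a denial into a grant, never the reverse, which matters for the decentralized setting discussed below.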
Moreover, a Role Assignment policy could be used to define which roles
should be associated with which principals. Restrictions could also be
specified on the possible combinations of roles, so as to limit the total
number of roles played by a principal. In this way, the disjunction of some
roles could also be imposed. However, this use could be problematic in
decentralized environments, as it could invalidate the monotonicity of the
system: showing more credentials should never lead to obtaining fewer
permissions.
7.3 Conclusions
Interoperability among applications built according to different models and
technologies is one of the most important goals of the Agentcities / openNet
initiative. For this reason, in its current stage of development, it is
paying attention both to semantically enriched services and to security
mechanisms that allow their use in a protected fashion.
Rule-based agents are proving to be a very flexible architecture, above all
because they allow the business logic of agents to be separated from their
internal mechanisms. Semantic-web developments, moreover, make it possible
to build generic applications which can be customized for a specific domain
through a set of rules and a domain ontology.
Federated identities and security assertions are a novel technique to
loosely couple already existing security systems, without requiring the
design and deployment of a whole new one, which could hardly fit the
extremely heterogeneous variety of goals and requirements of the different
applications.
SAML and XACML deserve particular attention, for their wide applicability,
their intrinsic extensibility, and their XML grounding, which allows them to
fit easily into existing web-based applications, as well as into new systems
based on web or grid services. While they have been proven to have a sound
grounding in logical models, they can nevertheless be used in a distributed
environment only under some restrictions. Otherwise, the combination of
different assertions and policies could lead to unexpected results or, even
worse, expose the system to attacks.
7.4 References
[1] JADE Home Page, 2004. Available from http://jade.tilab.com.
[2] Drools Home Page, 2004. Available from http://www.drools.org.
[3] Forgy, C.L. Rete: a fast algorithm for the many pattern / many object pattern match problem. Artificial Intelligence 19(1):17-37, 1982.
[4] Friedman-Hill, E.J. Jess, the Java Expert System Shell. Sandia National Laboratories, 2000. http://herzberg.ca.sandia.gov/jess.
[5] Foundation for Intelligent Physical Agents Specifications. Available from http://www.fipa.org.
[6] Poggi, A., Rimassa, G., Tomaiuolo, M. Multi-user and security support for multi-agent systems. In Proc. WOA 2001, pp. 13-18, Modena, Italy, 2001.
[7] Fuggetta, A., Picco, G.P., Vigna, G. Understanding code mobility. IEEE Transactions on Software Engineering 24(5):342-362, 1998.
[8] Jansen, W., Karygiannis, T. Mobile agent security. NIST Special Publication 800-19.
[9] TechNET project. Home Page available at http://www.alistechnet.org.
[10] OpenNet web site. http://www.agentcities.org/openNet/.
[11] XrML. eXtensible rights Markup Language. http://www.xrml.org/.
[12] ODRL. The Open Digital Rights Language Initiative. http://odrl.net/.
[13] SAML. OASIS Security Services (SAML) TC. http://www.oasis-open.org/committees/security/.
[14] XACML. OASIS eXtensible Access Control Markup Language (XACML) TC. http://www.oasis-open.org/committees/xacml/.
[15] Lewis, J. Reinventing PKI: Federated Identity and the Path to Practical Public Key Security. 1 March 2003. Available from http://www.burtongroup.com/.
[16] Ellison, C., Frantz, B., Lampson, B., Rivest, R., Thomas, B., Ylonen, T. SPKI certificate theory. IETF RFC 2693, September 1999.
[17] Li, N., Grosof, B. A practically implementable and tractable delegation logic. In Proc. 2000 IEEE Symposium on Security and Privacy (Oakland, CA, May 2000), pp. 29-44.
8 Conclusions
A number of different technologies are trying to address the problem of
integrating different system architectures, thus allowing the composition of
services designed and implemented according to different models.
But to become effective, such interoperability must be accompanied by a
corresponding availability of security means, to integrate systems without
opening their resources to potential attacks.
Traditional public key infrastructures have missed most of their promised
goals. But this failure has also taught us important lessons. First of all,
a security infrastructure should not be designed to replace local
implementations, but rather should integrate them in a global environment.
Moreover, trusted third parties and global naming schemes should be avoided,
as they force systems to adhere to a predefined trust model. Instead, each
user and each service provider should be provided with the means to control
the exact flow of delegated permissions, so as to match his own trust
relationships.
By applying trust management principles to JADE, a widespread framework for
agent-based systems, this work has shown that distributed delegation of
access rights can lead to open, sound and secure systems which do not need
to rely on “trusted” third parties.
To improve interoperability, emerging XML-based standards for the definition
of security assertions and policies are being taken into consideration, as
they best fit both existing web-based applications and new ones based on web
and grid services. Further work is needed, however, to allow their
application in an open and distributed environment, as they do not yet seem
to provide the properties required for a sound trust management system.
9 Acknowledgments
I wish to thank everybody who has contributed to this work in different ways.
A special thanks goes to:
My Ph.D. advisor, Agostino Poggi, who has made many direct contributions to
this work, and with whom I have had many interesting discussions during this
time.
Paola Turci, Giovanni Rimassa and Giosuè Vitaglione, with whom I have
worked in these years, since before even getting my degree, and who have
contributed to the work presented in this dissertation.
All the partners of the Agentcities and openNet IST projects, and in
particular Josep Pujol, Owen Cliffe and Steven Willmott, for the long and
interesting discussions about the network infrastructure, as well as for the
hard work done together to meet the final goals of the projects. They all
have contributed to part of the work presented here.
All my family, my mother, and Natasha in particular, who have all shown the
greatest possible patience, and always supported me during these years. I
really appreciate this, as I know what a “difficult” person I can be.
The most important person in my life, my grandmother. The strength and
courage she always demonstrated still helps me, every day.