A novel strategy for link-state routing algorithms is proposed. It combines the features of the minimum-hop and the shortest-path strategies and is based on the calculation of a flow-augmenting path for each destination. This path contains the minimum number of links while avoiding those that are congested. The strategy has the unique property of being able to trigger the congestion control scheme when necessary. Its performance was investigated on a flow model of a sample network. The results obtained are encouraging and favor the proposed strategy in terms of lower resource usage, higher reliability, shorter processing time and the possibility of a tradeoff between delay performance and throughput.
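The abstract does not spell out the path computation itself; purely as an illustrative sketch (in Python, with assumed names such as adj, utilization and threshold), the fragment below finds a minimum-hop path that skips links regarded as congested, and signals when no such path exists, which is the situation in which a congestion control scheme could be triggered.

    from collections import deque

    def min_hop_uncongested_path(adj, utilization, src, dst, threshold=0.9):
        """adj: {node: [neighbor, ...]}, utilization: {(u, v): load/capacity in [0, 1]}."""
        parent = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                break
            for v in adj.get(u, []):
                # Skip congested links and already-labeled nodes.
                if v in parent or utilization.get((u, v), 0.0) >= threshold:
                    continue
                parent[v] = u
                queue.append(v)
        if dst not in parent:
            return None  # no uncongested path: a cue to trigger congestion control
        path = []
        node = dst
        while node is not None:
            path.append(node)
            node = parent[node]
        return list(reversed(path))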
The paper explores the possibilities for introducing computers into archive offices, where the primary concern is the management of personal funds. The design of an automated system for handling the funds, as well as the necessary indexes and searches utilized by the system, is presented. The system was implemented at the Macedonian Academy of Arts and Sciences, whose archive office provided the background for the definition, the implementation and the operational phase of the project.
The problem of finding all minimum-hop paths from one node to another arises in several contexts for adaptive routing in computer communication networks. This paper presents an efficient algorithm for determining all paths with the minimum number of links between two nodes in a network. A polynomial bound is established for the worst-case time complexity of the algorithm. Directions for further research are also proposed.
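For orientation, here is a minimal sketch of one standard way to enumerate all minimum-hop paths between two nodes: a forward BFS assigns hop distances, and a backward expansion keeps only edges that advance the distance by exactly one. This is an illustrative assumption, not necessarily the algorithm analyzed in the paper, and its complexity bound may differ.

    from collections import deque

    def all_min_hop_paths(adj, src, dst):
        """adj: {node: [neighbor, ...]}; returns every path from src to dst with fewest links."""
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if dst not in dist:
            return []
        paths = []
        def extend(path):
            u = path[-1]
            if u == dst:
                paths.append(path[:])
                return
            for v in adj.get(u, []):
                if dist.get(v) == dist[u] + 1:  # only edges that stay on a shortest path
                    path.append(v)
                    extend(path)
                    path.pop()
        extend([src])
        return paths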
In this paper, an analysis of three labeling algorithms for finding the maximum flow in networks is presented. For each algorithm, a computer program was written and tested on networks. The comparison is made on the basis of the processing time and memory storage required by each program. As a result, the relationship between the processing time required by each algorithm and the complexity of the networks is established.
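The three programs themselves are not reproduced here; as a point of reference, the sketch below shows one classical labeling scheme for maximum flow, namely repeated BFS labeling of augmenting paths in the residual network (the Edmonds-Karp variant). The compared algorithms may differ in how labels are scanned; all names in the fragment are illustrative.

    from collections import deque

    def max_flow(capacity, source, sink):
        """capacity: dict {(u, v): c}; returns the value of a maximum flow from source to sink."""
        flow = {}
        nodes = {n for edge in capacity for n in edge}
        adj = {n: set() for n in nodes}
        for u, v in capacity:
            adj[u].add(v)
            adj[v].add(u)  # residual (reverse) edges
        total = 0
        while True:
            # Labeling phase: BFS over links with positive residual capacity.
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v in adj[u]:
                    residual = capacity.get((u, v), 0) - flow.get((u, v), 0)
                    if v not in parent and residual > 0:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return total
            # Augmentation phase: push the bottleneck residual along the labeled path.
            bottleneck = float('inf')
            v = sink
            while parent[v] is not None:
                u = parent[v]
                bottleneck = min(bottleneck, capacity.get((u, v), 0) - flow.get((u, v), 0))
                v = u
            v = sink
            while parent[v] is not None:
                u = parent[v]
                flow[(u, v)] = flow.get((u, v), 0) + bottleneck
                flow[(v, u)] = flow.get((v, u), 0) - bottleneck
                v = u
            total += bottleneck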
The development of formal systems for reasoning about knowledge is one of the main research issues in Artificial Intelligence. A recurring problem in most such systems is that of logical omniscience, or consequential closure, which expresses the notion that whenever a reasoning agent knows something, it also knows all of its logical consequences. This result is neither intuitively admissible nor computationally feasible. The paper explores the possibilities of restricting omniscience in an intensional context.
TCP and UDP are the dominant transport protocols on the Internet. The former is used for reliable transport, while the latter serves multimedia applications. Their behavior is quite different, which makes coexistence a challenge. Namely, TCP is responsive to potential network problems, such as congestion, while UDP completely ignores them. The need to come up with a protocol that somehow extracts the "good" attributes of both TCP and UDP, and leaves out those less favorable to the Internet, has proved to be a challenging research venue. Rather than going for an "entirely" new protocol, our objective has been to keep TCP as is, while using the mobile agent paradigm for UDP control. The proposed solution, termed the Combined Model for Congestion Control (CM4CC), has been limited to wired networks. Today, the Internet is an aggregation of both wired and wireless infrastructures; the article focuses on the extension of CM4CC to heterogeneous networks.
Unlike TCP, UDP is unresponsive to network congestion. This may cause, inter alia, bandwidth starvation of responsive flows, severe and prolonged congestion or, in the worst-case scenario, a congestion collapse. The paper presents new ideas for taming unresponsive flows. By using some of the desirable properties of mobile agents, the system is able to control the influx of non-TCP, or unresponsive, flows into the network. Various functions performed by the mobile agents monitor non-TCP flows, calculate sending rates and modify the flows' intensity according to the needs of the network to attain the best possible performance.
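The abstract does not state which rate formula the agents apply; one plausible choice, shown below purely as an assumed illustration, is the well-known TCP-friendly (TFRC-style) throughput equation, which would let an agent derive a fair sending-rate cap for an unresponsive flow from its measured loss rate and round-trip time.

    from math import sqrt

    def tcp_friendly_rate(packet_size, rtt, loss_rate, rto=None):
        """Approximate rate (bytes/s) a conformant TCP flow would obtain under the same conditions."""
        if loss_rate <= 0:
            return float('inf')  # no loss observed: no TCP-derived cap
        rto = rto if rto is not None else 4 * rtt  # common default when RTO is not measured
        denom = rtt * sqrt(2 * loss_rate / 3) + \
                rto * 3 * sqrt(3 * loss_rate / 8) * loss_rate * (1 + 32 * loss_rate ** 2)
        return packet_size / denom

An agent monitoring a non-TCP flow could compare the flow's actual sending rate against this cap and throttle the flow when it persistently exceeds it.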
The joint problem of routing and flow control in packet-switched networks is considered in the paper. The goal is to obtain a procedure that will choose "best" paths between source-destination pairs and, at the same time, keep the excess load out of the network. The proposed algorithm finds flow-augmenting paths in the network and adjusts the input load to the capabilities of the bottleneck links. Directions for further analysis and investigation of the technique are also presented.
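As a hedged sketch of the central step described above: once a flow-augmenting path has been selected for a source-destination pair, only as much new load is admitted as the weakest (bottleneck) link on that path can still carry. The function and variable names are illustrative, not the paper's notation.

    def admissible_load(path, capacity, flow, offered_load):
        """path: [n0, n1, ...]; capacity and flow are dicts keyed by (u, v) link tuples."""
        residuals = [capacity[(u, v)] - flow.get((u, v), 0.0)
                     for u, v in zip(path, path[1:])]
        bottleneck = min(residuals)
        # Any excess (offered_load - bottleneck) is kept out of the network.
        return min(offered_load, max(bottleneck, 0.0))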
The experience of designing and running the course Data Communications at Växjö University is presented. The course combines campus and distance learning. It is part of a program in Computer Science for teachers coming from different high schools in Sweden.
When part of a course curriculum consists of hands-on learning with real equipment, the design of a fully distance-based equivalent is accompanied by many impediments and caveats. The article describes the process of transforming the courses in data communications and computer networks from campus courses into completely distance courses. We present the decisions we have taken, as well as the development and the experience of running the first introductory course.
The paper presents the work performed under the ICT4ICT project, whose main concern is ICT diffusion in the CEE countries. A simple model for identifying the crucial factors for the penetration of the new technologies is developed. It is used to select countries that have achieved a high level of connectivity and ICT use despite limited resources, relying heavily on the knowledge of their people. Estonia's experience is presented as an example. The lessons learned are implemented in three pilot projects in CEE countries that have been less successful in spreading ICT. Dissemination of knowledge was the essence of these endeavours. Finally, the results and plans for future work are outlined.
The paper presents a part of the research devoted to exploring the experience with ICT diffusion in Central and Eastern European countries. The interest was broadened by the ambition to identify metrics for measuring ICT diffusion, to recognize the different patterns behind the dissemination of these new technologies, and to sort out the factors behind the adoption of services supported through the Internet. It was also important to gauge the role and the magnitude of each of these factors. The paper summarizes the research results and offers some pointers to future studies.
Much of the work that explores and studies the indicators for ICT readiness focuses on economic factors, joined by a sufficient level of education, as the driving force for the wide spread of these technologies. It has been argued that the diffusion rate is in most cases proportional to the growth of income. Despite the relatively high level of the human development index in most of the Central and Eastern European countries, including the former Soviet republics, the degree of connectivity and Internet use demonstrates large discrepancies. The ten countries with the highest GDP have joined the EU and have much higher ICT diffusion than the others. Interestingly enough, some have achieved the same rate of ICT diffusion despite a significantly lower level of wealth. Sharing their positive experience with the CEE countries that are lagging behind with respect to ICT diffusion could reduce and finally diminish the digital divide. The experience with ICT diffusion in the CEE countries is explored, and a model for identifying the predominant factors was developed in order to single out the countries that have achieved a high level of connectivity and ICT use despite limited resources, relying heavily on the knowledge of their people. Estonia has shown a very high inclination and enthusiasm towards experimenting with and adopting contemporary technologies. One of the decisive factors for the establishment and development of the information society has been the IT program for developing e-services, accompanied by the creation of access for all citizens. The lessons learned from Estonia's experience served as guidance in developing the three pilot projects implemented in Armenia and Macedonia. The primary goal was to provide means of overcoming the debilitating ignorance, spanning from the ordinary citizen to the high-level decision maker, of the width, the depth, the profoundness, and the power of ICT.
The paper gives an overview and comparison of two different courses in Computer Networks. Although the audience and the contents of the courses were different, they had several things in common. The students in both courses were adults with a definite, but slightly different, computer science and networking background. While in the first course the participants came from more than twenty different nationalities and its delivery was distributed across twenty different countries, the second one involved participants from a single country. The topics of the "multinational" course were confined to IP addressing, routing and troubleshooting in IP-based networks. The second one was a general course in Data Communications. The intention of the article is to present the overall experience of teaching this type of course in a web-based environment. It enumerates the problems that might be encountered during such courses. The discussion and the subsequent conclusions focus on a comparative study of the outcomes and results of the two courses.
One of the main objectives of Active Queue Management (AQM) mechanisms is to provide low delay and low packet loss in best-effort service networks. Nevertheless, AQM algorithms exhibit some weaknesses in congestion detection and congestion control whenever the network topology is dynamic. The paper proposes a new AQM algorithm, termed Adaptive AQM (AAQM), to attain both fairness and congestion control in heterogeneous networks. The performance of AAQM is compared to Blue, Drop-tail, RED and REM using both the Reno and Vegas versions of TCP. The results, based on a large series of simulations with ns (network simulator), indicate that AAQM with Reno is more robust, in the sense that it is highly responsive to the time-varying state of the network.
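For context, the sketch below shows the drop/mark decision of RED, one of the baselines AAQM is compared against: the queue maintains an exponentially weighted average of its length and marks arriving packets with a probability that grows between two thresholds. The adaptation that AAQM itself performs is not specified in the abstract and is not shown here; the parameter values are only placeholders.

    import random

    def red_drop(avg_qlen, min_th, max_th, max_p):
        """Return True if the arriving packet should be dropped/marked."""
        if avg_qlen < min_th:
            return False
        if avg_qlen >= max_th:
            return True
        p = max_p * (avg_qlen - min_th) / (max_th - min_th)
        return random.random() < p

    def update_avg(avg_qlen, instantaneous_qlen, weight=0.002):
        # Exponentially weighted moving average of the queue length.
        return (1 - weight) * avg_qlen + weight * instantaneous_qlen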
The proliferation of wireless and mobile computing and networking, which creates the basis for global nomadicity, that is, the ability of everyone to communicate from anywhere to anywhere, has experienced exponential growth. The advent of mobile-centric and wireless devices and the need to preserve and apply the ubiquitous Internet technology, which was originally conceived when mobility was not an issue, have created the need for augmented and modified models of communication that attempt to preserve semantic integrity and transparency independent of the underlying communication infrastructure. Moreover, the growing number, power and richness of the applications developed for mobile and wireless products bring multimedia capabilities whose proper execution depends on essential network attributes such as bandwidth, delay and reliability. These attributes are also the components of Quality of Service (QoS). Some of the problems in heterogeneous wireless networks that either directly or indirectly influence QoS are well known. These include, but are not limited to, reliable transport services, where reliability, given the underlying infrastructure, might prove to be a liability rather than an asset, the asymmetrical nature of the traffic, and the problem of handoffs. These problems have been among the principal areas of research in communication networks over the last several years. In the paper, which reports on ongoing research, we propose an architecture that addresses the problems of deploying Internet technology in wireless heterogeneous networks through a unified conceptual model. The diversity of this model and its basic architectural components, once sufficiently explored and well defined, should make it possible to achieve the desired QoS for 4G networking.
The growth of multimedia applications on the Internet has caused at least one fifth of the total network traffic to run over UDP. Unlike TCP, UDP is unresponsive to network congestion. This may cause, inter alia, bandwidth starvation of responsive flows, severe and prolonged congestion or, in the worst-case scenario, a congestion collapse. Hence, the coexistence of both protocols on fair-share premises becomes nearly impossible. The paper deals with a new approach to taming unresponsive flows. By using some of the desirable properties of mobile agents, the system is able to control the influx of non-TCP, or unresponsive, flows into the network. Various functions performed by the mobile agents monitor non-TCP flows, calculate sending rates and modify the flows' intensity according to the needs of the network to attain the best possible performance.