APPLICATION OF ADVANCED TECHNIQUES TO URBAN TRANSPORTATION MOBILITY

PLANNING: A VISION FOR THE MILLENNIUM

Prof.(Dr.) S.L. Dhingra & Prabhat Shrivastava***

 


 


 

The future society is going to be an information society, because the convergence of computer and communication technologies continues at a great pace. Information flow rather than passenger flow, and goods flowing to people rather than people going to goods, could be the outcome. There is a crying need in the mega and metro cities to reduce travel demand, and a future society of this kind could help meet it.

 

Digital communications, the net and virtual reality systems place the emphasis on linking all to all. With rapid advances in technology, network configurations are progressing to provide high-bandwidth connectivity using fibre optics, fast Ethernet, Fibre Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM) etc., along with high-speed satellite broadcast and delivery. As noted above, further IT developments will continue to reduce the need for travel, and these developments must be kept in mind while planning for mobility in the future.

 

Mobility is the ability of an individual or type of person to move about; it is a function of the performance of the transportation system and the personal characteristics of trip makers (Jones, 1981). Urban public transit plays a major role in mobility and in the efficient functioning of most urban areas. The transit system should be planned and operated in the most efficient manner possible within the many existing constraints. Transit planning consists primarily of transit network design, network evaluation, transit operations etc. Transit network design involves the design of routes, schedules, layover times, loading patterns etc. Traditionally, transit network design has been done using either heuristic or mathematical optimization techniques. In general, optimization techniques are applicable only to small networks; heuristic methods are usually required for large networks because of the heavy computer time requirements of optimization techniques. The analysis and evaluation of transit networks are quite complex due to:

 

 

Non-linearity and non-convexities in formulations

Combinatorial complexities

Multi-objective and multi-criteria nature of the problem.

These sources of complexity render the solution search space computationally intractable and necessitate intelligent search using heuristic rules and faster, computationally feasible techniques. Advances in computing techniques have dramatically changed the transportation field in the last twenty years, and the next twenty years may see even more changes as computing technology continues to evolve. Some recent approaches for the analysis and implementation of transportation-related projects for better mobility are listed and discussed below.

 

 

Knowledge Based Expert System Approach

Artificial Neural Network Approach

Fuzzy Logic Approach

Non-Traditional Optimization Techniques

Application of Genetic Algorithms

Application of Simulated Annealing

Application of Hybrid Algorithms

Object Oriented Programming

Geographical Information Systems

Development of Decision Support Systems

1. KNOWLEDGE BASED EXPERT SYSTEMS

 

In artificial intelligence many definitions exist for expert systems, but in a nutshell, expert systems are computer programs that encode human expertise. ES software is composed of two basic components: a knowledge base and an inference engine. The system mimics the reasoning process of an expert, and it is no better than the human expertise that it incorporates. Expert knowledge is obtained from expert sources and coded in a form suitable for the system to use in its inference or reasoning process. The expert knowledge must be obtained from specialists or other sources of expertise such as texts, journal articles and databases. Once a sufficient body of expert knowledge has been acquired, it must be encoded in some form, loaded into a knowledge base, then tested and refined continually throughout the life of the system.

 

Characteristic Features of Expert Systems:

 

Architectural features of expert systems include the following (a small illustrative sketch follows the list):

 

 

Expert System Shell: a development and software delivery environment for expert systems. It includes interfaces to one or more knowledge representations and associated inference engines. It allows ES development using natural language rather than computer programming languages.

Knowledge Base: the collection of knowledge that includes the assertions, rules, objects, assumptions and constraints used by an expert for solving difficult problems or tasks.

Rule Base: a collection of rules used in the knowledge base of an expert system. It has an IF-AND/OR-THEN structure.

Fact Base: a collection of facts used in the knowledge base rules to define factual knowledge.

Frame Base: a collection of objects used in the knowledge base of an expert system.

Induction: a machine learning technique that derives its decision making capabilities from case histories.

Inference: a process by which pieces of knowledge are combined to arrive at a conclusion (similar to logical thinking).

Forward Chaining: a search strategy that starts with a body of knowledge and attempts to make conclusions.

Backward Chaining: a search strategy that starts with the desired conclusion and tries to prove it with available information.

Heuristics: "rules of thumb", i.e. general rules based on the experience or expertise of human experts.
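As a minimal illustration of the rule base and forward chaining described above, the following Python sketch encodes a few hypothetical transit-operations rules and fires them until no new facts can be derived. The rule contents and fact names are invented for illustration and are not taken from any real expert system.

# Minimal sketch of a rule base with forward chaining (illustrative only;
# the rules and facts below are hypothetical, not from any real system).

RULES = [
    # IF-AND-THEN structure: all conditions must hold for the conclusion to fire.
    {"if": {"peak_hour", "high_demand"},        "then": "increase_frequency"},
    {"if": {"increase_frequency", "few_buses"}, "then": "deploy_reserve_fleet"},
    {"if": {"road_works"},                      "then": "divert_route"},
]

def forward_chain(facts):
    """Start from the known facts and keep firing rules until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            if rule["if"] <= facts and rule["then"] not in facts:
                facts.add(rule["then"])     # the conclusion becomes a new fact
                changed = True
    return facts

if __name__ == "__main__":
    known = {"peak_hour", "high_demand", "few_buses"}
    print(forward_chain(known))
    # also derives 'increase_frequency' and 'deploy_reserve_fleet'

Backward chaining would run the same rules in the opposite direction, starting from a desired conclusion and checking whether its conditions can be established.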

Conventional Programming Vs ES Programming:

 

Expert systems differ from conventional computer programming in several important ways.

 

 

A conventional computer program is simply a collection of step-by-step procedures or algorithms for processing data to arrive at some conclusion. We call this method conventional or procedural programming. For example, if a program is designed to calculate the total store receipts at the end of the day, the program will add up the individual receipt data and calculate a "total" at the end of each day. ES programming, which uses symbolic programming techniques, is a knowledge-based approach that is highly interactive in making recommendations.

ES programming has an inference engine that a conventional program lacks. An ES program is easier to modify than a conventional one because the ES program uses modular sets of independent rules, based on heuristic knowledge, to generate a solution to a problem, rather than complex algorithms only a programmer can understand. An inference engine is a program that can:

1) draw conclusions based on facts

2) reason using a rule base

The inference engine is the component of the expert system that accesses, selects, and executes programmed rules. Inference is the process that pieces together or combines information to arrive at a conclusion.

The knowledge is encoded and maintained as an entity separate from the control program. As such, it is not compiled together with the control program itself. This permits the incremental addition and modification (refinement) of the knowledge base without recompilation of the control programs. Furthermore, it is possible in some cases to use different knowledge bases with the same control programs to produce different types of expert systems. Such systems are known as expert system shells; they may be loaded with different knowledge bases.

Expert systems are capable of explaining how a particular conclusion was reached, and why requested information is needed during a consultation. This is important as it gives users a chance to assess and understand the system's reasoning ability, thereby improving their confidence in the system.

The development of an expert system differs from conventional program development primarily in the involvement of a Knowledge Engineer (KE) and domain experts. The KE interviews domain experts and attempts to formulate the rules and facts used by the experts to solve a difficult task; the inclusion of an expert is thus a significant step. The KE then transfers the knowledge into an expert system, typically as IF-THEN rules.

2. ARTIFICIAL NEURAL NETWORKS

 

An Artificial Neural Network (ANN) works on the principle of the functioning of the human brain in a simplified manner. It is composed of elements whose functions are analogous to the most elementary functions of biological neurons (Wasserman, 1989). ANNs are mathematical models of theorised mind and brain activities (Simpson, 1990). An ANN consists of an input layer that collects the input data, one or more hidden processing layers, and an output layer which holds the response of the network. In a statistical approach, sets of patterns are stored for comparison and pattern recognition. According to Sharda (1992), statistical models make assumptions regarding the distributions of variables, whereas ANN models are capable of providing more accurate predictions. Expert systems, by contrast, give unpredictable results when exposed to noisy and incomplete data.

 

Application of ANN in Transportation:

 

Wei and Schonfeld (1993) have demonstrated the use of the ANN approach in evaluating transportation network improvements. Xiong and Schneider (1992) used an ANN to estimate travel times for various network improvement projects selected without considering the effects of traffic demand changes over time and project implementation timings. Rodrigue (1997) identifies neural networks as a useful tool for modelling the interactions between transportation and land use. Dia and Rose (1997) re-examine the use of neural networks for incident detection on major highways.

 

Advantages of ANN techniques:

 

 

No rules need to be given, since the network generates its own rules on being trained on examples.

There is a large saving in memory space, since no patterns or rules need to be stored.

The processing work involved in the recall mode is small, since no logical operations are used.

Information stored in the form of weights gives a reasonable network response even with incomplete, noisy or previously unseen data.

The distributive nature of the weights is an essential feature that assists in fault-tolerant computation.

The Working of the Neural Network:

 

ANNs work on the principle of the functioning of the human brain in a simplified manner. An ANN consists of a set of nodes arranged in layers. The input layer (or buffer layer) collects the input data, the hidden layers process the data, and the output layer holds the response of the network. The input layer consists of processing elements with input paths. Every element in the input layer is connected to every element in the hidden layer, with a weight assigned to each connecting link. These weights connecting the nodes (or processing elements) are generated randomly. Each node of the hidden layer multiplies the responses from the previous layer (the input layer) by the weights of the connecting links and sums up the signals. These aggregated signals are then filtered through an activation function or threshold function. The activation function compresses or amplifies the aggregated signals into output values that range between 0 and 1 (Wei and Schonfeld, 1993). Thus each node is activated based on the input received, the activation function or transfer function (which is non-linear) and the bias (threshold value) of the node. A well-trained ANN produces reliable outputs within the specified tolerable limits of error. There are two main categories of operations involved in the working of an artificial neural network (a small sketch of these node computations follows the list below):

 

 

Layer operation

Network operation
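The following Python sketch (not from the paper) illustrates the node computations just described: randomly generated link weights, a weighted sum plus bias at each node, and a sigmoid activation that compresses the aggregated signal into the range 0 to 1. The layer sizes and input values are arbitrary assumptions.

import numpy as np

def sigmoid(x):
    # Activation (transfer) function: compresses aggregated signals into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # One recall pass: input layer -> hidden layer -> output layer.
    hidden = sigmoid(w_hidden @ x + b_hidden)   # weighted sum plus bias, then activation
    output = sigmoid(w_out @ hidden + b_out)
    return output

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.array([0.3, 0.8])                    # two hypothetical input features
    w_hidden = rng.normal(size=(4, 2))          # link weights generated randomly, as in the text
    b_hidden = rng.normal(size=4)               # bias (threshold value) of each hidden node
    w_out = rng.normal(size=(1, 4))
    b_out = rng.normal(size=1)
    print(forward(x, w_hidden, b_hidden, w_out, b_out))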

 

Layer Operation

There are two categories of layer operations in an ANN:

 

Normalization, and

Competition

In a normalisation operation, the vector of outputs of a complete layer is scaled to give a fixed total output value, usually between 0 and 1 (through weights and transfer functions). It may be noted that threshold functions pass the information only when the combined signal reaches a particular threshold level. In a competitive layer operation, the element with the highest activity triggers the response, whereas the functioning of the other elements is suppressed.

 

 

Network Operation

The network operations comprise two main processes: learning and recall.

 

Learning Processes:

 

There are three types of learning techniques:

 

 

Supervised learning

Unsupervised learning

Reinforcement learning

The learning rule must specify how the weights are to be adjusted to achieve the desired output in all of the above methods. There are four primary factors that affect the back-propagation training process:

 

 

The characteristics of the ANN transformation function

The allowable error

The learning speed or step size ranging between 0 and 1

ANN size in terms of the number of hidden units

In a supervised learning method, the desired output is provided along with each training input; in an unsupervised learning method, no desired output is provided to the ANN. In a reinforcement learning process, the ANN trainer only indicates whether the response to a given input is good or bad.
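A minimal sketch of supervised learning by back-propagation is given below, written in Python with NumPy. It exposes the four factors listed above as parameters: the sigmoid transformation function, the allowable error, the step size and the number of hidden units. The toy XOR data set and all parameter values are assumptions chosen only for illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, Y, hidden_units=4, step_size=0.5, allowable_error=0.005, max_epochs=20000):
    # Supervised learning: weights are adjusted until the error is within tolerance.
    rng = np.random.default_rng(1)
    w1 = rng.normal(scale=0.5, size=(X.shape[1], hidden_units))
    b1 = np.zeros(hidden_units)
    w2 = rng.normal(scale=0.5, size=(hidden_units, Y.shape[1]))
    b2 = np.zeros(Y.shape[1])
    for _ in range(max_epochs):
        h = sigmoid(X @ w1 + b1)            # forward pass through the hidden layer
        out = sigmoid(h @ w2 + b2)          # network response
        err = Y - out
        if np.mean(err ** 2) < allowable_error:
            break                           # allowable error reached: stop training
        d_out = err * out * (1 - out)       # back-propagated error signals
        d_hid = (d_out @ w2.T) * h * (1 - h)
        w2 += step_size * h.T @ d_out       # weight adjustments scaled by the step size
        b2 += step_size * d_out.sum(axis=0)
        w1 += step_size * X.T @ d_hid
        b1 += step_size * d_hid.sum(axis=0)
    return w1, b1, w2, b2

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR: a classic toy target
    w1, b1, w2, b2 = train(X, Y)
    print(np.round(sigmoid(sigmoid(X @ w1 + b1) @ w2 + b2), 2))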

 

Recalling Process:

 

The recall process deals with the global processing of a stimulus at the input layer and the creation of a response at the output layer. ANNs can be classified based on their recall processes as follows:

 

 

Feed-forward networks, and

Networks with feedback.

Networks with no feedback connections are called feed-forward networks. Here, information is passed from the input layer to the output layer through the hidden layers using the summation and transfer function characteristics. In networks with feedback connections, information reverberates around the network, across and within the layers, until some convergence criterion is satisfied; the information is then passed to the output layer. Most such networks have an energy function associated with them that governs the sequence in which the layers are processed.

 

Potentials of Artificial Neural Networks

 

Neural network technology has special capabilities and advantages, and many theoretical issues related to it have been studied and clarified. To assess the potential of the technology, the main advantages as well as drawbacks of neural networks are listed below:

 

Fault tolerance: Neural networks are robust. Because information is distributed all over the network, they can survive the failure of some nodes, and their performance degrades gracefully under faults. This is analogous to the fact that nerve cells in the human brain die every day without affecting brain performance significantly.

 

Parallelism: They embody parallel computing and hence make it possible to build parallel processing hardware for implementing them. Extremely fast computation can thus be achieved. Training neural networks is often time consuming, but because of this parallelism they can operate in real time after training.

 

Flexibility and adaptivity: They can deal with information that is fuzzy, probabilistic, inconsistent and noisy with a great degree of success, and can adapt intelligently to previously unseen situations.

Learning: They can learn from examples presented to them and do not need to be programmed.

 

Size and complexity: Very powerful implementations of neural networks need massive arrays of neurons, which is a difficult problem for present-day silicon technology.

 

Learning delay: The learning process in some cases is very lengthy. Learning algorithms with better speed and convergence properties, or innovative schemes to short-circuit the sluggishness of existing algorithms, are required.

Inference explanation: Neural networks, unlike knowledge-based systems, lack the ability to explain their inferences.

 

3. FUZZY LOGIC

 

In contrast to a classical set, a fuzzy set, as the name implies, is a set without a crisp boundary. That is, the transition from "belongs to a set" to "does not belong to a set" is gradual, and this smooth transition is characterised by membership functions that give fuzzy sets flexibility in modelling commonly used linguistic expressions, such as "the travel time is high" or "ridership is low". Such imprecisely defined sets or classes play an important role in human thinking, particularly in the domains of pattern recognition, communication of information, and abstraction. Conventional methods are usually adequate for simpler problems, while fuzzy systems are suitable for complex problems.

 

Fuzziness in Transportation Analysis

 

In the study of transportation problems, fuzziness is found in many aspects of analysis, including perception of data and information, knowledge base, statement of goals and objectives, and problem definition. The following are some examples.

 

 

Approximate numbers are used when one interprets or perceives information that has potential measurement imprecision. For example, when one is told that the travel time from A to B is 22 minutes, one may perceive it as "approximately 20 minutes."

Fuzziness is also found in linguistic expressions, such as large volume, small delay, and peak hour. These expressions are often used in a context-dependent manner. Expressions for the goals of a plan are other examples; these arise mostly because the analyst does not know, or cannot decide, what the specific values for the goals should be.

A third case is incomplete knowledge and rules of thumb. In such a case, the causality or functional relationship is expressed by a set of linguistic rules, such as "if x is large then y is small," or "a 10% increase in transit fare leads to a 5% decrease in ridership." In many instances, reasoning in the transport planning process is based on the concept of similarity. Given a fuzzy knowledge base "if x is A then y is B," and an input x that is A', one still makes an inference to the extent that A is similar to A'. This is the logic by which we justify the transferability of an idea; for example, consider the case where one wants to justify the use of a particular transit service scheme that works in one city. In this case, transfer of the idea is justified to the degree of similarity of the conditions of the two cities.

Fuzziness is different from probability; it is perceived uncertainty, which may be caused by randomness in the behaviour of the subject. Fuzziness in expression often contributes to richness in communication. Fuzzy information should not be condemned or eliminated; rather, its utility should be explored. The mechanism to process fuzzy information is provided by fuzzy set theory.

 

Fuzzy Inference

 

Fuzzy inference is a popular computing framework based on the concepts of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. It is the process of deriving conclusions from a given set of fuzzy rules acting on fuzzified data. The following is a typical frame of fuzzy inference:

Input: x is A'

Rule : if x is A then y is B

Output: y is B'

 

In traditional inference, in order for the conclusion "y is B'" to be inferred, A and A' must be the same; if A is not the same as A', then y cannot be inferred. In fuzzy inference, A and A' are fuzzy, and the concept of similarity plays an important role here. The degree of similarity between A' and A determines the degree of truth of the conclusion. This characteristic of fuzzy inference makes it suitable for application to many transportation problems; for example, (1) justification of the transfer of an idea from one location to another; and (2) modelling of the human decision process based on linguistic rules. Perhaps the greatest advantage of fuzzy inference is its power to integrate different knowledge bases. When the input matches the antecedents of more than one rule, all the rules whose antecedents have some degree of overlap with the input are activated; hence, the compromise of the conclusions of all applicable rules becomes the overall conclusion. The outputs produced by a fuzzy inference system are almost always fuzzy sets. Sometimes it is necessary to have a crisp output, so the process in which the fuzzy output is converted into a crisp, numerical result is called defuzzification.

 

Defuzzification

 

Defuzzification refers to the way a crisp value is extracted from a fuzzy set as a representative value. The defuzzification operation can be performed by a number of methods, of which the centre of gravity method (also known as the centroid method) and the heights method are commonly used. The centroid defuzzification method selects the output corresponding to the centre of gravity of the output membership function.
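The following Python sketch (illustrative only) puts the pieces together: triangular membership functions, two hypothetical rules relating passenger demand to bus headway, and centroid defuzzification of the aggregated fuzzy output. The linguistic labels, membership ranges and rules are invented assumptions, not recommendations.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function for a linguistic value such as "demand is low".
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def infer(demand):
    """Two hypothetical rules: IF demand is LOW THEN headway is LONG;
       IF demand is HIGH THEN headway is SHORT. Output defuzzified by centroid."""
    headway = np.linspace(2, 20, 200)                 # candidate headways (minutes)
    low  = tri(demand, 0, 0, 600)                     # degree to which demand is "low"
    high = tri(demand, 400, 1000, 1000)               # degree to which demand is "high"
    long_mf  = tri(headway, 10, 20, 20)
    short_mf = tri(headway, 2, 2, 10)
    # Each rule's conclusion is clipped by the truth of its antecedent, then combined.
    aggregated = np.maximum(np.minimum(low, long_mf), np.minimum(high, short_mf))
    return float(np.sum(headway * aggregated) / np.sum(aggregated))  # centre of gravity

if __name__ == "__main__":
    for passengers_per_hour in (200, 500, 900):
        print(passengers_per_hour, "->", round(infer(passengers_per_hour), 1), "min headway")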

 

General Procedure

 

The important steps involved in realising fuzzy systems are as follows :

 

Define fuzzy problem in detail

Identify all important variables and their ranges

Determine membership profiles for each variable range

Determine rules (propositional statements), and

Select defuzzification methodology.

Although the fuzzy inference system has a structured knowledge representation in the form of fuzzy if-then rules, it lacks the capability to learn and has no memory. That is why hybrid systems, particularly neuro-fuzzy systems, are becoming popular. Determining good membership functions and fuzzy rules out of the many realisable rules and functions is not always easy; thus, neural networks are incorporated to derive inference rules and membership functions from observations.

 

4. NON TRADITIONAL OPTIMISATION TECHNIQUES

 

1. GENETIC ALGORITHM

 

The genetic algorithm (GA) is a model of machine learning which derives its behaviour from a metaphor of the process of evolution in nature. This is done by creating within a machine a population of individuals represented by chromosomes, in essence a set of character strings that are analogous to the base-4 chromosomes that we see in our own DNA. The individuals in the population then go through a process of evolution. It should be noted that evolution (in nature or anywhere else) is not a purposive or directed process; there is no evidence to support the assertion that the goal of evolution is to produce mankind. Indeed, the processes of nature seem to boil down to different individuals competing for resources in the environment. Some are better than others, and those that are better are more likely to survive and propagate their genetic material. In nature, the encoding of our genetic information (the genome) is done in a way that admits asexual reproduction (such as budding), which typically results in offspring that are genetically identical to the parent. Sexual reproduction, on the other hand, allows the creation of genetically radically different offspring that are still of the same general flavour (species). At the molecular level, what occurs is that a pair of chromosomes bump into one another, exchange chunks of genetic information and drift apart. This is the recombination operation, which in GAs is generally referred to as crossover because of the way genetic material crosses over from one chromosome to another. The crossover operation happens in an environment where the selection of who gets to mate is a function of the fitness of the individual, i.e. how good the individual is at competing in its environment. Some genetic algorithms use a simple function of the fitness measure to select individuals (probabilistically) to undergo genetic operations such as crossover or reproduction (the propagation of genetic material unaltered); this is fitness-proportionate selection. The two processes that contribute to evolution are crossover and fitness-based selection/reproduction. As it turns out, there are mathematical proofs indicating that the process of fitness-proportionate reproduction is, in fact, near optimal in some senses. Mutation also plays a role in this process, though it does not play the dominant role popularly attributed to it in evolution, i.e. random mutation and survival of the fittest. It cannot be stressed too strongly that a genetic algorithm (as a simulation of a genetic process) is not a random search for a solution to a problem (a highly fit individual). Genetic algorithms use stochastic processes, but the result is distinctly non-random (better than random). When a genetic algorithm is implemented, it is usually done in a manner that involves the following cycle:

 

 

Evaluate the fitness of all the individuals in the population.

Create a new population by performing operations such as crossover, fitness-proportionate reproduction and mutation on the individuals whose fitness has just been measured.

Discard the old population and iterate using the new population.

One iteration of this loop is referred to as a generation. There is no theoretical reason for this as an implementation model; indeed, we do not see this punctuated behaviour in populations in nature as a whole, but it is a convenient implementation model. The first generation (generation 0) of this process operates on a population of randomly generated individuals. From there on, the genetic operations, in concert with the fitness measure, operate to improve the population. Another operator is the inversion operator. The inversion mechanism on a genotype involves choosing two points along the length of the genotype, cutting the genotype at those points and swapping the end points of the cut section.

 

Coding of Problem Parameters:

 

There is no unique method of coding in GAs for a given problem. Two types of coding can be used:

 

Binary coding

Non Binary coding

For problems with a large number of parameters to be coded, binary coding can prove to be too long and cumbersome to handle. Binary coding is therefore useful for smaller-scale problems, while non-binary coding is used for larger-scale problems.

 

 

Generation of Initial Parent Pool of Solutions

 

Once the coding of the problem parameters is formulated, it is necessary to generate an initial parent pool of solutions. It is advantageous to have as large a parent pool as possible, to increase the number of schemata being processed per iteration, as mentioned earlier. However, there are obvious constraints of computer memory space and processing time, so an optimal parent pool size has to be determined for a given problem. Once the parent pool size is determined, the parent pool of solutions is generated by a random process.

 

Evaluation of Parent Genotypes: The parent pool of solutions (i.e. genotypes) is evaluated by means of a user-defined objective function. Based on the value of the objective function, the parent genotypes are ranked.

Generation of Offspring: The ranked parent genotypes are used to generate new genotypes (offspring). Care has to be taken to ensure that the method adopted does not lead to premature convergence. Premature convergence is a situation that occurs whenever there is an early and dramatic increase in lost alleles; a lost allele occurs when the entire parent pool has the same value for a particular gene in the pool.

 

Stopping Criterion: The most common stopping criterion in GAs is convergence, that is, when the entire parent pool has converged to a single genotype. In certain cases, a user may choose other stopping criteria, such as the off-line performance being within 5% of the best genotype, or a certain number of iterations having been completed.
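A compact Python sketch of the GA cycle described above is given below: binary coding, fitness-proportionate (roulette-wheel) selection, single-point crossover, mutation and generational replacement. The fitness function and all parameter values are arbitrary assumptions used only to make the example self-contained.

import random

# Toy fitness: decode a binary string into x in [0, 10] and score f(x) = x * (10 - x),
# which peaks at x = 5. Coding, operators and parameters here are illustrative only.
BITS, POP, GENS, PC, PM = 16, 30, 60, 0.8, 0.02

def decode(chrom):
    return int("".join(map(str, chrom)), 2) / (2 ** BITS - 1) * 10.0

def fitness(chrom):
    x = decode(chrom)
    return x * (10.0 - x)

def select(pop, fits):
    # Fitness-proportionate (roulette-wheel) selection.
    return random.choices(pop, weights=fits, k=1)[0]

def crossover(a, b):
    point = random.randint(1, BITS - 1)            # single-point crossover
    return a[:point] + b[point:]

def mutate(chrom):
    return [bit ^ 1 if random.random() < PM else bit for bit in chrom]

if __name__ == "__main__":
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
    for gen in range(GENS):
        fits = [fitness(c) for c in pop]
        new_pop = []
        for _ in range(POP):
            child = select(pop, fits)
            if random.random() < PC:
                child = crossover(child, select(pop, fits))
            new_pop.append(mutate(child))
        pop = new_pop                              # discard the old generation
    best = max(pop, key=fitness)
    print(round(decode(best), 3), round(fitness(best), 3))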

 

Applicability of Genetic Algorithms

 

Various attempts have been made to obtain optimal schedules of transit networks, with only transfer time considered, using computer simulation and using a combination of an optimisation model and a simulation procedure. However, development of such an optimal schedule is an extremely difficult task even for a small transit network (Kikuchi and Parameswaran, 1993). The difficulty arises because of the large number of variables and constraints, the discrete nature of the variables, and the non-linearity involved in the objective function and the constraints. Vignaux and Michalewicz (1991) used a GA for the linear transportation problem. Chan et al. (1994) applied GAs to road maintenance planning. Chakroborty et al. (1995) used a GA for optimal scheduling of urban transit systems. Potter and Bossomaier (1995) solved the vehicle routing problem using a GA. Malmborg (1996) used a GA for service-level based vehicle scheduling.

 

Fundamental Difference with Traditional Methods:

 

1) GAs work with a coding of the decision variables, which is very uncommon in traditional methods. There are certain advantages of working with a coding of decision variables: coding discretises the search space and allows GAs to be applied to discrete and discontinuous problems.

 

2) Since no gradient information is used in GAs, they can also be applied to non-differentiable functions. This makes GAs robust in the sense that they can be applied to a wide variety of problems.

 

3) GAs exploit coding similarities to make a faster and parallel search.

 

4) Unlike many traditional methods, GAs work with a population of points. This increases the possibility of obtaining the global optimal solution even in ill-behaved problems (Goldberg, 1989).

 

5) GAs use probabilistic transition rules instead of fixed rules. In early GA iterations this randomness in the GA operators keeps the search unbiased toward any particular region of the search space, thereby avoiding hasty wrong decisions, and it effects a directed search later in the optimisation process.

 

These differences make GAs distinct (in principle) from traditional search and optimisation methods, help GAs find globally optimal points, and make them more amenable to application.

 

2. SIMULATED ANNEALING

 

The simulated annealing method resembles the cooling process of molten metals through annealing. At high temperature, the atoms in the molten metal can move freely with respect to each other, but as the temperature is reduced, the movement of the atoms gets restricted; the atoms start to get ordered and finally form crystals having the minimum possible energy. However, the formation of the crystals depends mostly on the cooling rate: if the temperature is reduced at a very fast rate, the crystalline state may not be achieved at all; instead, the system may end up in a polycrystalline state. Therefore, in order to achieve the absolute minimum energy state, the temperature needs to be reduced at a slow rate. The process of slow cooling is known as annealing in metallurgical parlance. The simulated annealing procedure simulates this process of slow cooling of molten metal to achieve the minimum function value in a minimisation problem. The cooling phenomenon is simulated by controlling a temperature-like parameter introduced with the concept of the Boltzmann probability distribution. The initial temperature (T) and the number of iterations (n) performed at a particular temperature are two important parameters which govern the successful working of the simulated annealing procedure. If a large initial temperature is chosen, the algorithm takes many iterations to converge; on the other hand, if a small initial temperature is chosen, the search does not adequately investigate the search space before converging, and the true optimum may be missed. A large value of n is recommended in order to achieve a quasi-equilibrium state at each temperature, but the computation time then increases. Unfortunately, there are no unique values of the initial temperature and n that work for every problem.
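The following Python sketch (a simplified illustration, not a production implementation) shows the simulated annealing loop: an initial temperature, n iterations at each temperature, the Boltzmann acceptance probability for worse moves, and a slow geometric cooling schedule. The test function and all parameter values are assumptions.

import math, random

def anneal(f, x0, T0=10.0, n_per_temp=50, cooling=0.95, T_min=1e-3, step=0.5):
    """Minimise f starting from x0. T0, n_per_temp and the cooling rate are the
    temperature-like parameters discussed in the text; values here are illustrative."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    while T > T_min:
        for _ in range(n_per_temp):                  # iterations at this temperature
            cand = x + random.uniform(-step, step)
            fcand = f(cand)
            delta = fcand - fx
            # Accept better moves always, worse moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / T):
                x, fx = cand, fcand
                if fx < fbest:
                    best, fbest = x, fx
        T *= cooling                                 # slow cooling schedule
    return best, fbest

if __name__ == "__main__":
    random.seed(0)
    # Multi-modal test function with its global minimum near the origin.
    f = lambda x: x * x + 3.0 * math.sin(5.0 * x) + 3.0
    print(anneal(f, x0=4.0))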

 

3. HYBRID ALGORITHMS:

 

* Neuro-fuzzy systems: Both neural networks and fuzzy logic systems (FLs) deal with important aspects of knowledge representation and the learning process, but they use different approaches and have their own strengths. Neural networks can learn from sample data automatically, but lack explanation ability. Fuzzy logic systems are capable of performing approximate reasoning, but usually are not self-adaptive. The real power of artificial intelligence lies in the integration of neural networks and FLs. There are three typical methods of integrating neural networks and fuzzy reasoning, which we call neuro-fuzzy technologies:

 

Type 1: a method to apply fuzzy control and neural networks to different control objectives.

Type 2: a method to modify the fuzzy reasoning results by neural networks.

Type 3: a method to determine the membership functions of fuzzy rules by neural networks.

 

Of the three methods listed above, perhaps the simplest to conceptualise is to use a fuzzifier function to pre-process data for a neural network. Further, a neural network that can learn new relationships with new input data can be used to refine fuzzy rules to create a fuzzy adaptive system. Neuro-fuzzy techniques are being used in many applications, such as advanced transportation (intelligent car programmes, collision-less driving, speed controllers) and product design ranging from subway and helicopter controllers to auto-focus camera mechanisms and washing machine controllers. Hartani et al. (1994) discussed the possibility of using a combination of fuzzy logic and neural networks to control the acceleration and deceleration of a metro train.

 

The steps involved in neuro-fuzzy network modelling are the following (a small sketch of the fuzzification step follows the list):

 

 

Identify the significant input variables from the data and decide the number of input nodes.

Find the linguistic values or labels such as large, medium or small represented by fuzzy set. Each input node at layer one is connected to its corresponding linguistic values at layer two.

Determine the fuzzy rules, select the number of rules and select the appropriate rules from among the possible rules.

Set the initial weights to each rule.

Train the network. If the performance is not adequate, increase the number of rules, reset the initial weights and retrain. Unnecessary rules and neurons can be dropped with no retraining.
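As a small illustration of the simplest neuro-fuzzy coupling mentioned above (a fuzzifier pre-processing crisp data for a neural network), the Python sketch below converts a crisp speed value into membership degrees for three linguistic labels and feeds them to a single untrained sigmoid neuron. The labels, membership ranges and weights are hypothetical.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership; used here as a fuzzifier for crisp inputs.
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzify(speed_kmh):
    """Map a crisp speed into degrees of 'low', 'medium', 'high' (labels hypothetical)."""
    return np.array([
        tri(speed_kmh, 0, 0, 40),     # low
        tri(speed_kmh, 20, 45, 70),   # medium
        tri(speed_kmh, 50, 90, 90),   # high
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(1, 3))                       # one untrained output neuron
    for v in (15, 45, 80):
        x = fuzzify(v)                                # fuzzified features become NN inputs
        y = 1.0 / (1.0 + np.exp(-(w @ x)))            # sigmoid output
        print(v, "km/h ->", np.round(x, 2), "->", np.round(y, 3))

In a full neuro-fuzzy system the weights (and the membership parameters themselves) would then be trained from data, as in the steps listed above.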

* GAs controlled by FL

 

The use of fuzzy logic to translate and improve heuristic rules has also been applied to manage the resources of GAs (population size, selection pressure) during their transition from exploration (global search in the solution space) to exploitation (localised search in the discovered promising regions of that space). The management of GA resources gives the algorithm an adaptability that improves its efficiency and convergence speed. According to Herrera and Lozano (1996), this adaptability can be applied to the GA's parameter settings, genetic operator selection, genetic operator behaviour, solution representation and fitness function.

 

* FL Controller tuned by GAs

 

Many researchers have explored the use of genetic algorithms to tune fuzzy logic controllers. Cordon et al. (1995) contains an updated bibliography of over 300 papers combining GAs with fuzzy logic, of which at least half are specific to the tuning and design of fuzzy controllers by GAs.

 

* NNs generated by GAs

 

There are many forms in which GAs can be used to synthesise or tune NNs: to evolve the network topology (number of hidden layers, hidden nodes, and number of links), letting back-propagation (BP) tune the net; to find the optimal set of weights for a given topology, thus replacing BP; and to evolve the reward function, making it adaptive. The GA chromosome needed to directly encode both NN topology and parameters is usually too large to allow the GA to perform an efficient global search. Typically, NNs using back-propagation converge faster than GAs due to their exploitation of local knowledge; however, this local search frequently causes the NN to get stuck in a local minimum. On the other hand, GAs are slower, since they perform a global search. Thus GAs perform an efficient coarse-granularity search (finding the promising region where the global minimum is located) but are very inefficient in the fine-granularity search (finding the minimum itself). These characteristics motivated Kitano (1990) to propose an interesting hybrid algorithm in which the GA finds a good parameter region, which is then used to initialise the NN; at that point, back-propagation performs the final parameter tuning.

 

* FL controller tuned by NNs:

 

Many researchers have used FLC tuning by NNs, such as the approach described in Kawamura et al. (1992).

 

* Hybrid GAs:

 

Since GAs are quite robust with respect to being trapped in local minima (due to the global nature of their search) but rather inaccurate and inefficient in pinpointing the global minimum, several modifications have been proposed to exploit their advantages and compensate for their shortcomings. Researchers have tried hybrid GAs such as GA combined with hill-climbing techniques and GA with simplex crossover.

 

OBJECT ORIENTED PROGRAMMING:

 

To overcome the limitations of traditional programming, a natural and innovative way of programming has been developed, based on modelling things as they are naturally organised. This strategy is commonly referred to as Object Oriented Programming (OOP), which subdivides knowledge into physical or conceptual entities, each having a specific function within the system. OOP was introduced to solve complex problems so that computer programs can easily be tested, refined, reused and maintained. OOP is based on two principles:

 

 

Structured programming using languages such as Pascal or C

Management of Knowledge, which consists of classifying and storing information on similar concepts into modular entities called Objects

In contrast to traditional programming, OOP is based on encapsulation: data and the procedures that deal with them are stored in the same objects. The mechanism of dynamic object generation is called instantiation. Instances that have the same structure and behaviour are further grouped into classes. Classes contain general knowledge, and their subclasses hold more specific and detailed knowledge. The inheritance of information and attributes by subclasses from superclasses allows more organisation and reduces the size of the computer code.
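A short Python sketch of these ideas is given below: a class encapsulating data and behaviour, instantiation of an object, and a subclass inheriting general knowledge from its superclass while adding more specific attributes. The class names, attributes and figures are hypothetical.

# Illustrative sketch of encapsulation, instantiation and inheritance
# (class names and attributes are hypothetical, not from the paper).

class TransitRoute:
    """General knowledge held by the class: every route has stops and a headway."""
    def __init__(self, name, stops, headway_min):
        self.name = name
        self.stops = stops                  # data and behaviour stored together
        self.headway_min = headway_min

    def vehicles_required(self, round_trip_min):
        # Behaviour encapsulated with the data it operates on.
        return -(-round_trip_min // self.headway_min)   # ceiling division

class BusRoute(TransitRoute):
    """Subclass holding more specific knowledge, inheriting the rest."""
    def __init__(self, name, stops, headway_min, bus_capacity=60):
        super().__init__(name, stops, headway_min)
        self.bus_capacity = bus_capacity

if __name__ == "__main__":
    route = BusRoute("R-42", ["A", "B", "C"], headway_min=10)   # instantiation
    print(route.vehicles_required(round_trip_min=55), "buses needed")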

 

GEOGRAPHICAL INFORMATION SYSTEM

 

Engineers and researchers are continually faced with transportation problems on which much information exists, often in the form of reports, computer data and undocumented experience and practice. The advent of GIS has facilitated the integration of data with geographical elements to perform analysis in a variety of disciplines, including transportation. GIS, which has been successfully applied in many fields outside the transportation industry, offers the potential to assemble and process data from diverse sources and to present it in an easily understood graphical format. The unique ability of GIS to handle complex spatial relationships makes it a natural tool to use in the planning and analysis of transportation systems, specifically public transportation systems. The challenge is to integrate information technologies to create synergistic effects, so that the individual parts are made more useful by contributing to the whole. Geographic information systems function in real time, and the capabilities of a GIS in the transportation field will permit the assimilation, integration and presentation of data collected and stored by each division within a highway agency. A geographical information system is a tool that provides database management capabilities for, and display of, spatial data, and provides the ability to perform analysis of geographic features (points, lines and polygons) based on their explicit relationships to each other. An important concept which makes GIS different from other computerised graphical systems is topology. Topology is defined as the spatial relationships between connecting or adjacent spatial objects (e.g. points, lines and polygons). Topological relationships are built from simple elements into complex elements. GIS has the ability to extract information from one layer of topology based on its relationship to another layer, and to integrate information from different topological layers based on their relationships to each other. GIS is the most sophisticated member of a family of computerised graphical systems which have varying degrees of capability in database management and spatial functions. This family of graphical systems consists of:

 

 

Computer - Aided Drafting and Design (CADD)

Automated Mapping (AM)

Thematic Mapping ( TM )

GIS

- Raster-based GIS

- Vector-based GIS

Computer-Aided Drafting and Design (CADD) systems provide the ability "to interact with a visual image of a drawing by creating, editing and manipulating lines, symbols and text". Automated Mapping (AM) software generally has the same functions as CADD software; however, CADD systems are normally used for architectural and engineering drawings, while automated mapping is used for mapping. Thematic Mapping (TM) adds another level of sophistication to automated mapping: it has the ability to add colours, labels and/or other identifying features to map entities based on attributes associated with each entity. Thus, as the term "thematic" mapping suggests, thematic mapping emphasises a particular theme on the map by focussing attention on specific attributes of the map entities. GIS differs from these other graphical systems in its ability to handle both attributes and topology. There are two types of GIS that handle attributes and topology differently: vector-based and raster-based. A vector-based GIS represents map features by X, Y co-ordinates, with attributes associated with the feature, as opposed to a raster-based GIS, in which attributes are associated with a grid cell (an individual point). Thus, vector-based GIS deals explicitly with topology while raster-based GIS does not (a small illustrative sketch appears at the end of this section). The overall functional capabilities of GIS consist of data capture, storage and maintenance, analysis and output. Data capture can be performed using graphical data from existing sources of digitised data, together with attribute data from existing files or entered manually. Data storage and management consist of file management and editing. Data analysis consists of database query, spatial analysis and modelling. Data output can be generated in the form of maps and reports. Potential applications of GIS in transportation/highway analysis include the following:

 

 

Executive information system

Pavement management system

Bridge management

Maintenance management

Safety management

Transportation system management

Travel demand forecasting

Corridor preservation and right of way

Hazardous cargo routing

Overweight / Oversize vehicle permit routing

Accident analysis

Environmental impact

A GIS tool can be used effectively for decision making for many reasons. Two major objectives are as follows:

 

 

Map display provides a familiar, visually oriented basis for accessing, interrogating and presenting data from a wide variety of sources.

Data integration develops a logical, coherent and consistent platform from which to integrate diverse databases.
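To make the vector GIS concepts concrete, the following Python sketch (purely illustrative) represents two zones as polygons with attached attributes and uses a simple topological test (point in polygon, by ray casting) to relate a point layer of accident locations to the zone layer. The zones, co-ordinates and attributes are invented.

# Minimal sketch of the vector GIS idea: features are co-ordinates plus attributes,
# and a simple topological test (point in polygon) links one layer to another.

def point_in_polygon(pt, polygon):
    """Ray-casting test: does the point fall inside the polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

if __name__ == "__main__":
    zones = {  # polygon layer with attributes attached to each feature
        "CBD":    {"geom": [(0, 0), (4, 0), (4, 4), (0, 4)], "population": 50000},
        "Suburb": {"geom": [(4, 0), (9, 0), (9, 4), (4, 4)], "population": 20000},
    }
    accidents = [(1.5, 2.0), (6.0, 1.0), (8.5, 3.5)]   # point layer (accident locations)
    for pt in accidents:
        zone = next((name for name, z in zones.items()
                     if point_in_polygon(pt, z["geom"])), None)
        print(pt, "falls in", zone)

A full GIS would, of course, handle such overlays, queries and map output at a far larger scale and with proper spatial indexing.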

DECISION SUPPORT SYSTEMS (DSS)

 

Since the transportation analysis involved in the decision-making process is multi-disciplinary, multi-objective and multi-criteria, it becomes impossible for the urban planner to reach an objective decision in the larger interest, free of human bias and prevailing market forces. This necessitates the formulation of an array of suitable queries and acceptable options so as to arrive at the desired solution; many a time, however, this solution need not be unique. Decision support systems are quite useful for the decision-making process, and the techniques discussed above can be quite useful for their development. Decision support systems (DSS) can be defined as computer-based systems that help decision makers solve ill-structured problems through direct interaction with data and analysis models.

 

There are, mainly, three levels of management, three levels of technology, and three levels of specialists involved in the functioning of a DSS. The three levels of management involved in the working of a DSS are: the top-level management concerned with decision making at the strategic level, the middle-level management concerned with decision making at the tactical level, and the lower-level management concerned with decision making at the operational level. DSS tools, DSS generators and specific DSS constitute the three levels of technology used in DSS.

 

DSS tools comprise the basic hardware and software components. The classes of related software and hardware used to build specific DSS applications quickly and easily are known as DSS generators. DSS toolsmiths, DSS builders/managers and top management comprise the three major levels of specialist groups involved in the development of a DSS. The powerful functions of Database Management Systems (DBMS) are of great importance in the development and use of a DSS. Data from internal and external sources must be accessed with ease, and the extraction and entry process must be flexible enough to allow rapid updating based on user requests. In most successful DSS, it is necessary to create a DSS database which is logically separate from the operational database. Large DBMSs generally deal with inquiry, retrieval, updating, storage and manipulation of information in personal databases, local databases and system databases, and provide flexible report preparation capabilities.

 

EMERGING INFORMATION TECHNOLOGY POTENTIAL FOR TRANSPORTATION MOBILITY:

 

 

While microcomputers made their first appearance two decades ago (in the late 1970s), it is only in the last five years that they have become "seriously usable" machines. This situation has occurred as the consequence of a series of developments which include: faster processors; large-capacity, high-performance and relatively inexpensive hard disks; high-resolution colour monitors becoming universal; CD-ROM players becoming near universal; the availability of inexpensive, high-quality colour hardcopy output devices; and the availability of inexpensive, high-quality colour scanners. These hardware technology changes have gone in parallel with better data conversion software for scanners, better software for image manipulation and storage, and improvements in database management systems.

By interfacing GIS information with the Web it is possible to develop stronger, cheaper and interactive lines of communication of mapped information with the community. GIS, supporting spatial decision making, and the Web, as a communication medium, are two key technologies which can be used to achieve the objective of sustainable development.

An effective decision environment depends upon information which is timely, comprehensive, accurate and readily accessible. The development directions offer increased capability for frequent data update, new directions for modelling and visualisation, and wider accessibility. Data capture options are being explored with emphasis on low-cost, high-accuracy digital camera systems (DCS). In rapidly changing areas, two- or three-dimensional maps and models can be updated regularly for effective modelling. The capability of systems to cover the range of decision situations which emerge in urban planning will be enhanced by newly emerging modelling paradigms and the integration of technologies such as GIS and visualisation.

Computers and information technology are increasingly being integrated into the everyday functioning of society. There are many opportunities and many problems associated with this integration. Unequal access to the technology, due to economic, knowledge and other barriers, has the potential to create an "information underclass". Community networks have the potential to provide equal access to the information revolution, thus allowing individuals and communities to become more involved in the decision-making processes that affect their lives. A traditional community network consists of a network of computers with modems that allows users to connect to a central computer which provides community information and a means for the community to communicate electronically.

GPS FOR URBAN MOBILITY: The Global Positioning System (GPS) has tremendous potential for better urban management and planning. Traffic management, emergency services, policing etc. are areas where GPS can play a significant role owing to its capability to provide precise location (latitude, longitude), altitude and other details. Traffic routing, movement of vehicles, VIP movement, taxi services etc. become easier by using GPS receivers on vehicles. Emergency response services can be provided more effectively by using GPS along with a GIS database of the city. GPS is also very useful in creating accurate spatial databases. It is interesting to note that in Paris the larger taxi companies have equipped themselves with a computer system for distributing the trips to be made and for quick location through satellite. Taxi drivers are now registered in specific areas and their position is monitored by GPS; the positions of the taxis are calculated and transmitted to the reservation office easily. GPS can be used in a big way for transportation mobility.
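As a small, hypothetical illustration of how GPS fixes can support such services, the Python sketch below computes great-circle (haversine) distances between vehicle positions and a booking request in order to dispatch the nearest taxi. The co-ordinates and vehicle identifiers are invented.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

if __name__ == "__main__":
    # Hypothetical GPS fixes (latitude, longitude) reported by three taxis.
    taxis = {"T1": (19.0760, 72.8777), "T2": (19.1136, 72.8697), "T3": (19.0330, 72.8656)}
    caller = (19.0896, 72.8656)                    # booking request location
    nearest = min(taxis, key=lambda t: haversine_km(*taxis[t], *caller))
    print("Dispatch", nearest,
          round(haversine_km(*taxis[nearest], *caller), 2), "km away")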

 

The net result of these changes is that it is now relatively easy to create, store and distribute large quantities of data, including images, audio and video. This potential of information technology should be exploited for the analysis and evaluation of transportation projects for better mobility. If information is diffused by an efficient information system, then the number of trips would automatically be reduced. Similarly, work and education trips can be curtailed if the potential of information technology is exploited so that an office-at-home or school-at-home type of environment can be created. If such trips are reduced, then the number of trips and hence the crowding level can be controlled, which ultimately improves mobility.

REFERENCES

 

 

Bersini, H;Nordivik, J;Bonarini, A(1995), " Comparing RBF and fuzzy inference systems on theoretical and practical basis", International conference on Artificial Neural Network (ICANN95), pp. 169 - 174, Paris, France 1995.

Bonissone P.P.(1997), " Soft computing : the convergence of emerging reasoning technologies", Soft Computing ,A fusion of foundations, Methodologies and Applications, Volume 1, Number 1, April 1997, pp. 6 -18.

Chan, W.T., Fwa, T.F. and Tan, C.Y. (1994), "Road Maintenance Planning Using Genetic Algorithms. I: Formulation", Journal of Transportation Engineering, Vol. 120, No. 5, ASCE, pp. 693-709.

Chakroborty, P., Kalyanmoy Deb and P.S. Subramanyam (1995), "Optimal Scheduling of Urban Transit Systems Using Genetic Algorithms", Journal of Transportation Engineering, Vol. 121, No. 6, ASCE, pp. 544-553.

Cordon, O., Herrera, F. and Lozano, M. (1995), "A Classified Review on the Combination Fuzzy Logic - Genetic Algorithms Bibliography", Technical Report 95129, Department of Computer Science and AI, University of Granada, Granada, Spain.

Driss Ouazar et al. (1996), "Object-Oriented Pumping-Test Expert System", Journal of Computing in Civil Engineering, ASCE, pp. 4-9.

Goldberg, D.E. (1989), Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, Mass.

Herrera, F., Lozano, M. and Verdegay, J.L. (1995), "Tuning Fuzzy Logic Controllers by Genetic Algorithms", International Journal of Approximate Reasoning (IJAR), 12(3/4), pp. 299-315.

Jones, S.R. (1981), "Accessibility Measures: A Literature Review", TRRL Report 967, Transport and Road Research Laboratory, U.K.

Kasturirangan, K. and Rao, D.P. (1997), "Urban Planning and Management: Space Technology Options", Key Note Papers, CUPUM-97, pp. 50-58.

Kawamura, A., Watanabe, N. and Okada, H. (1992), "A Prototype of Neuro-Fuzzy Cooperation Systems", First IEEE International Conference on Fuzzy Systems, pp. 1275-1282, San Diego, CA.

Kikuchi, S. and Parameswaran, J. (1993), "Solving a Schedule Co-ordination Problem Using a Fuzzy Control Technique", Proc., Intelligent Scheduling Systems Symp., ORSA-TIMS, San Francisco, Calif.

Kitano, H. (1990), "Empirical Studies on the Speed of Convergence of Neural Network Training Using Genetic Algorithms", National Conference on Artificial Intelligence (AAAI '90), pp. 789-796, Boston, MA.

Kalyanmoy Deb,(1995), Optimisation for Engineering Design, Algorithms and Examples, Prentice Hall of India Private Limited, pp.290 - 359.

Malmborg, C.J. (1996), "A Genetic Algorithm for Service Level Based Vehicle Scheduling", European Journal of Operational Research, 93, pp. 121-134.

Potter, T. and Bossomaier, T. (1995), "Solving Vehicle Routing Problems with Genetic Algorithms", IEEE, pp. 788-793.

Sharda, R. (1992), Neural Networks for the OR/MS Analyst: An Application Bibliography, Working Paper 92-3, July, College of Business Administration.

Simpson, P.K.(1990), Artificial Neural Systems: Foundations, Paradigms, Application and Implementations, Pergamon Press, New York.

Soni, S.K. et al. (1996), "Geographical Information System: A Technology for Transportation Engineers", Indian Highways, Vol. 24, No. 2, pp. 53-59.

Vignaux G.A. and Michalewicz Z.,(1991), "A Genetic Algorithm for the Linear Transportation Problem", IEEE, pp. 445 - 452.

Wasserman, P.D. (1989), Neural Computing: Theory and Practice, Van Nostrand Reinhold, New York.

Wei, Chien - Hung and Schonfeld, Paul M. (1993), "An Artificial Neural Network Approach for evaluating Transport Improvements", Journal of Advanced Transportation, Vol. 27, No. 2, pp. 129 - 51.

Xiong, Y. and Schneider, J.B. (1992), "Transportation Network Design Using a Cumulative Genetic Algorithm and a Neural Network", presented at the 71st Annual Meeting of the Transportation Research Board, Washington, D.C.

*** Prabhat Shrivastava is a Research Scholar in the Civil Engineering Department, IIT Bombay.