-
Intelligent Assessment and Prediction of Software Characteristics at the Design Stage
Oksana Pomorova,
Tetyana Hovorushchenko
Issue:
Volume 2, Issue 2, April 2013
Pages:
25-31
Abstract: This article presents an intelligent method and system for evaluating software design results and predicting software characteristics based on the processing of software metrics sets.
-
An Approach to Modeling Domain-Wide Information, based on Limited Points’ Data – Part I
John Charlery,
Chris D. Smith
Issue:
Volume 2, Issue 2, April 2013
Pages:
32-39
Abstract: Predicting values at data points in a specified region when only a few values are known is a perennial problem, and many approaches have been developed in response. Interpolation schemes provide some success and are the most widely used among these approaches; however, none of them incorporates historical aspects in its formulae. This study presents an approach to interpolation which utilizes the historical relationships existing between the data points in a region of interest. By combining the historical relationships with the interpolation equations, an algorithm for making predictions over an entire domain area, where data is known only for some random parts of that area, is presented. A performance analysis of the algorithm indicates that, even when provided with less than ten percent of the domain’s data, the algorithm outperforms other popular interpolation algorithms that are given more than fifty percent of the domain’s data.
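The abstract does not give the algorithm’s formulas, so the following Python sketch is only a rough illustration of the idea of folding historical relationships into an interpolation scheme: it scales a standard inverse-distance weight by a hypothetical per-point historical-correlation factor. The function name, the hist_corr input, and the combination rule are assumptions for illustration, not the authors’ method.

```python
import numpy as np

def historical_idw(known_xy, known_vals, hist_corr, query_xy, power=2.0):
    """Illustrative sketch: inverse-distance weighting (IDW) in which each
    known point's weight is additionally scaled by a historical-correlation
    factor between that point and the query location. `hist_corr` (values
    in [0, 1]) is assumed to come from past co-observations; the paper's
    actual weighting scheme may differ."""
    known_xy = np.asarray(known_xy, dtype=float)
    known_vals = np.asarray(known_vals, dtype=float)
    hist_corr = np.asarray(hist_corr, dtype=float)
    query_xy = np.asarray(query_xy, dtype=float)

    dists = np.linalg.norm(known_xy - query_xy, axis=1)
    if np.any(dists == 0.0):                  # query coincides with a known point
        return float(known_vals[np.argmin(dists)])

    weights = hist_corr / dists ** power      # distance decay modulated by history
    return float(np.sum(weights * known_vals) / np.sum(weights))

# Example: three known stations, one query location (all values hypothetical).
xy = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 14.0, 12.0]
corr = [0.9, 0.6, 0.8]                        # hypothetical historical correlations
print(historical_idw(xy, vals, corr, (0.5, 0.5)))
```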
-
An Approach to Modeling Domain-Wide Information, based on Limited Points’ Data – Part II
John Charlery,
Chris D. Smith
Issue:
Volume 2, Issue 2, April 2013
Pages:
40-48
Abstract: Predicting values at data points in a specified region when only a few values are known is a perennial problem, and many approaches have been developed in response. Interpolation schemes provide some success and are the most widely used among these approaches; however, none of them incorporates historical aspects in its formulae. This study presents an approach to interpolation which utilizes the historical relationships existing between the data points in a region of interest. By combining the historical relationships with the interpolation equations, an algorithm for making predictions over an entire domain area, where data is known only for some random parts of that area, is presented. A performance analysis of the algorithm indicates that, even when provided with less than ten percent of the domain’s data, the algorithm outperforms other popular interpolation algorithms that are given more than fifty percent of the domain’s data.
-
Analogy-Based Software Quality Prediction with Project Feature Weights
Ekbal Rashid,
Srikanta Patnaik,
Vandana Bhattacharya
Issue:
Volume 2, Issue 2, April 2013
Pages:
49-53
Abstract: This paper presents analogy-based software quality estimation with project feature weights. The objective of this research is to predict project quality accurately and to use the results in future predictions. The focus includes identifying the parameters on which software quality depends; estimation of the rate of improvement of software quality chiefly depends on development time. Assigning weights to these parameters to improve the results is also of interest. In this paper, two different similarity measures, Euclidean and Manhattan, were used to retrieve matching cases from the knowledge base in order to increase estimation accuracy and reliability. Expert judgment, weights and rating levels were used to assign weights and quality rating levels. The results show that assigning weights to software metrics increases the prediction performance considerably. In order to obtain the results, we have used indigenous tools.
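As a hedged illustration of the retrieval step described in the abstract (not the authors’ implementation), the Python sketch below computes weighted Euclidean and Manhattan distances between a new project’s metric vector and past cases in a knowledge base and returns the closest matches. The feature vectors, weight values, and function names are hypothetical.

```python
import numpy as np

def weighted_distance(case_a, case_b, weights, metric="euclidean"):
    """Weighted Euclidean or Manhattan distance between two feature vectors."""
    diff = np.abs(np.asarray(case_a, dtype=float) - np.asarray(case_b, dtype=float))
    w = np.asarray(weights, dtype=float)
    if metric == "euclidean":
        return float(np.sqrt(np.sum(w * diff ** 2)))
    if metric == "manhattan":
        return float(np.sum(w * diff))
    raise ValueError(f"unknown metric: {metric}")

def retrieve_nearest(new_project, knowledge_base, weights, metric="euclidean", k=1):
    """Return the k most similar past projects as (index, distance) pairs."""
    scored = [(i, weighted_distance(new_project, past, weights, metric))
              for i, past in enumerate(knowledge_base)]
    return sorted(scored, key=lambda t: t[1])[:k]

# Example: feature vectors might hold project metrics (e.g. size, complexity,
# defect density); the expert-assigned weights below are purely hypothetical.
kb = [[12.0, 3.5, 0.8], [20.0, 5.0, 0.6], [11.0, 3.0, 0.9]]
print(retrieve_nearest([12.5, 3.4, 0.85], kb,
                       weights=[0.5, 0.3, 0.2], metric="manhattan", k=2))
```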
-
The Cognitive Programming Paradigm: The Next Programming Structure
Issue:
Volume 2, Issue 2, April 2013
Pages:
54-67
Abstract: The development of computer programming started with the development of switching logic, because computer hardware is made up of millions of digital switches. These switches are activated and deactivated through codified instructions (programs) that trigger them to function. Computer programming languages have gone through a revolution, from machine code, through assembly mnemonics, to high-level programming languages such as FORTRAN, ALGOL, COBOL, LISP, BASIC, ADA and C/C++. These programming languages are not the exact codes that microprocessors understand and work with: compilers and interpreters convert the high-level languages, which are easily understood by people, into machine code so that the microprocessor can carry out the work human knowledge has instructed it to do. The variety of programming languages stems from the difficulty of using a single language to solve different kinds of problems on the computer: for mathematical and trigonometric problems FORTRAN is best suited, for business problems COBOL is the right language, while for computer games and design BASIC is the solution. The practice of using individual programming languages to solve specific problems on single-processor computers has changed drastically with the move from single-core processors to present-day dual- and multi-core processors. The main target of engineers and scientists is to reach a stage where the computer can think like the human brain. The human brain contains many cognitive (thinking) modules that work in parallel to produce a unique result. With the presence of multi-core processors, why should computers continue to draw summaries from stored databases and leave us to sit for hours analysing these results to find solutions to problems? The subject of ‘Cognitive Programming Paradigm’ analyses the various programming structures and concludes that these structures perform the similar task of processing stored databases and producing summarized information. This summarized information is not final: business managers and executives still have to sit for hours deliberating on what strategic decisions to take. Moreover, present-day computers cannot solve problems holistically, as such problems normally appear to human beings. Hence, there is a need for these programming structures to be grouped together to solve human problems holistically, just as the human brain processes complex problems holistically. With the presence of multi-core processors, it is possible to structure programming such that these structures run in parallel to solve a specific problem completely, i.e. to analyse which programming structure is suitable for a particular problem, or to store a first solution and compare it with new solutions of a problem to arrive at a strategic decision, going beyond what is done at present. This approach could lift the burden on managers and executives of deliberating further on the results of a processed business problem.
-
A Metric Based Approach for Analysis of Software Development Processes in Open Source Environment
Parminder Kaur,
Hardeep Singh
Issue:
Volume 2, Issue 2, April 2013
Pages:
68-79
Abstract: Open source software (OSS) is a software program whose source code is available to anyone under a license that gives them the freedom to run the program and to study, modify and redistribute copies of the original or modified program. Its objective is to encourage involvement in the form of improvement, modification and distribution of the licensed work. OSS has proved itself highly suited both as a software product and as a development methodology. The main challenge in open source software development (OSSD) is to collect and extract data. This paper presents various aspects of the open source software community and the roles of different types of users and developers. A metric-based approach for the analysis of software development processes in an open source environment is suggested and validated through a case study of the development processes undertaken by developers of about fifty different open source software projects.
-
Using the Semantic Web Services to Build a Virtual Medical Analysis Laboratory
Houda El Bouhissi,
Mimoun Malki,
Djamila Berramdane,
Rafa E. Al-Qutaish
Issue:
Volume 2, Issue 2, April 2013
Pages:
80-85
Received:
6 May 2013
Published:
30 May 2013
Abstract: In the medical analysis field, patients often must visit a multitude of laboratory-related web sites in order to check availability, book appointments, compare prices and result turnaround times, and find the nearest laboratory. These varied reasons for visiting the web sites limit their usability. To overcome these limitations, this paper proposes a Virtual Medical Analysis Laboratory (VMAL) prototype system based on applying Semantic Web Services (SWSs) to the scheduling of outpatient tests in order to discover the suitable laboratory. Furthermore, the proposed prototype is also based on the Web Service Modeling Ontology (WSMO).
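WSMO-based discovery is defined over formal ontologies, goals and mediators, which the abstract does not detail. As a simplified, purely illustrative stand-in for the discovery step, the Python sketch below filters laboratory service descriptions against a patient goal and ranks matches by distance; every field name and the matching rule are assumptions, not the WSMO machinery or the authors’ prototype.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class LabService:
    name: str
    tests: Set[str]      # tests the laboratory offers
    price: float         # price of the requested test
    result_hours: int    # turnaround time for results
    distance_km: float   # distance from the patient

@dataclass
class PatientGoal:
    test: str
    max_price: float
    max_result_hours: int

def discover(goal: PatientGoal, services: List[LabService]) -> List[LabService]:
    """Keep services whose capabilities satisfy the goal, nearest first."""
    matches = [s for s in services
               if goal.test in s.tests
               and s.price <= goal.max_price
               and s.result_hours <= goal.max_result_hours]
    return sorted(matches, key=lambda s: s.distance_km)

# Example with hypothetical laboratories and a hypothetical patient goal.
labs = [
    LabService("LabA", {"glucose", "cbc"}, 15.0, 24, 3.2),
    LabService("LabB", {"glucose"}, 12.0, 48, 1.1),
]
print([s.name for s in discover(PatientGoal("glucose", 20.0, 24), labs)])
```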