Mapping computer state changes as a space.

AI Thread Summary
The discussion centers on mapping the space of computable programs by analyzing simpler problems to identify patterns in complexity. The idea is to create a distribution of these problems, potentially visualized through 2D field lines, while grappling with the challenge of measuring program state. The use of McCabe's complexity metric raises concerns about the understandability and testability of larger programs, which often exhibit high complexity due to numerous parameters and functions.

A key point of contention is the concept of measurability in program states. The discussion highlights the vastness of the potential state space for simple operations, such as adding double precision variables, which leads to complications with precision limits, undefined behaviors, and overflow issues. The conversation also touches on the feasibility of mapping small functions and emphasizes a path-independent approach, where the focus is on start and end points rather than the intermediate space. An example is provided with a simple state machine controlling a traffic signal, illustrating how state management can be effectively represented with minimal complexity.
lostminty
I want to see if we can map the space that is computable programs.

You wouldn't consider complex problems at first. Once you have mapped the simplest problems into a space of points, you get a distribution, and you may see that these points clump. Since several of the variables are proportional to the history of the variables, and one is the time from then on, you could develop 2D field lines. This could then be turned into ...
 
How do you measure program state? I'm missing something here.

If you use some arbitrary tool, like McCabe's cyclomatic complexity, then larger programs -- ones that have a large number of parameters and several functions, for example -- get complexity values that pretty much suggest the program is neither completely understandable nor completely testable. And, I assume, not measurable, at least not easily.

I get that you are restricting complexity, but I do not see what you think is 'measurable'. The content of variables? Using that approach gets us in trouble fast. Here is why: if you have a simple addition of two double precision variables, then the upper limit of the possible program state "space" is the cartesian product of all values from DBL_MIN to DBL_MAX, in increments of DBL_EPSILON, for both variables. This is an enormous number. Really, really big. It also runs afoul of the limits of precision (DBL_DIG) -- e.g. adding 10^50 + (1/10^50) results in 10^50 -- and of undefined behavior and overflow, where you end up with a NaN or INF result.
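
A minimal C sketch of the precision and overflow issues above (the specific values are only for illustration):

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* Precision loss: 1e-50 is far below the spacing of doubles near 1e50,
       so it vanishes entirely from the sum. */
    double big  = 1e50;
    double tiny = 1e-50;
    printf("1e50 + 1e-50 == 1e50 ? %s\n", (big + tiny == big) ? "yes" : "no");

    /* Overflow: exceeding DBL_MAX gives infinity, and inf - inf gives NaN. */
    double inf_result = DBL_MAX * 2.0;
    double nan_result = inf_result - inf_result;
    printf("DBL_MAX * 2.0 = %g\n", inf_result);
    printf("inf - inf     = %g (isnan: %d)\n", nan_result, isnan(nan_result) ? 1 : 0);

    /* The constants that bound the value range and spacing discussed above. */
    printf("DBL_EPSILON = %g, DBL_MIN = %g, DBL_MAX = %g\n",
           DBL_EPSILON, DBL_MIN, DBL_MAX);
    return 0;
}
```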

So. I'm confused.
 
I excluded the abstraction process. For very small functions you can map its space. The process described is effectively path independent: if you know the start and end points, and can parameterise it as a function of the localised change, you can ignore the space in between and treat them as sequences with a determined path.
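
As a sketch of what exhaustively mapping a very small function might look like (the 8-bit wrapping add here is an assumed toy function, not one from the thread), every (start, end) pair can simply be enumerated:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy function whose state space we map: an 8-bit wrapping add. */
static uint8_t tiny_fn(uint8_t a, uint8_t b)
{
    return (uint8_t)(a + b);   /* wraps modulo 256 */
}

int main(void)
{
    /* Enumerate every start state (a, b) and record the end state.
       For an 8-bit function this is only 256 * 256 = 65536 points,
       so the whole "space" can be tabulated directly. */
    unsigned hits[256] = {0};
    for (unsigned a = 0; a < 256; ++a)
        for (unsigned b = 0; b < 256; ++b)
            ++hits[tiny_fn((uint8_t)a, (uint8_t)b)];

    /* Print part of the histogram of end points; for a wrapping add each
       end state is reached exactly 256 times, i.e. a flat distribution. */
    for (unsigned v = 0; v < 256; v += 64)
        printf("end state %3u reached %u times\n", v, hits[v]);
    return 0;
}
```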
 
One of the simplest examples of a state machine would be a ROM-only program for a microprocessor that controls a dumb traffic signal. It consists of a series of delay loops, one for each of the 6 states in this example: {all signals red, north-south green, north-south yellow, all signals red, east-west green, east-west yellow}. Registers would be used for the loop count of the delay in each state.
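
A minimal sketch of that ROM-only controller, written in C for readability (the delay counts and the output-port write are placeholders; a real version would be assembly for the target microprocessor):

```c
#include <stdint.h>

/* The six states from the example, in order. */
enum state {
    ALL_RED_1,      /* all signals red       */
    NS_GREEN,       /* north-south green     */
    NS_YELLOW,      /* north-south yellow    */
    ALL_RED_2,      /* all signals red again */
    EW_GREEN,       /* east-west green       */
    EW_YELLOW,      /* east-west yellow      */
    NUM_STATES
};

/* Delay loop count per state; in the ROM version these would sit in
   registers or a lookup table (values here are placeholders). */
static const uint32_t delay_counts[NUM_STATES] = {
    200000, 1000000, 300000, 200000, 1000000, 300000
};

/* Placeholder for writing the signal pattern to the output port. */
static void set_signals(enum state s)
{
    (void)s;  /* e.g. write a bit pattern to a memory-mapped I/O port */
}

int main(void)
{
    enum state s = ALL_RED_1;

    for (;;) {                                   /* run forever */
        set_signals(s);
        for (volatile uint32_t i = 0; i < delay_counts[s]; ++i)
            ;                                    /* busy-wait delay loop */
        s = (enum state)((s + 1) % NUM_STATES);  /* advance to next state */
    }
}
```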
 