Main - Computing - Looking for a little advice on working with headers in C++


MathOnNapkins
Posted on 03-27-07 11:35 AM Link | Quote | ID: 19904


Super Koopa
Level: 62

Posts: 56/842
EXP: 1935918
Next: 48768

Since: 02-19-07
From: durff

Last post: 4491 days
Last view: 4014 days
I've been using C and C++ for years, and time and time again, when I want to make a major modification to my code, I wind up with massive linker errors. The stock response, "the #include directive copies and pastes the contents of the header file into the other file," seems like a giant lie to me; i.e., it's probably an oversimplification.

The chief reason I use header files and multiple source files is, of course, to make code easier to read. Scanning a 10,000-line file for one entry is a pain and doesn't make sense from an organizational standpoint.

Inevitably I end up with a hierarchical arrangement when I use headers; i.e., I paint myself into corners where, say, if I have a function in file3.cpp, I can call it in file1.cpp since it is declared in a low-level header and I #include file3.h in file1.cpp. But it doesn't work the other way around: if there's a function in file1.cpp, I can never seem to find a way to call it in file3.cpp. So is there any way to break this hierarchy?

I would prefer to program in such a way that any .cpp file can access any function declared in any .h file in my project, and likewise access any variable declared in any .h file (and defined in any .cpp file).

I would call that a flat arrangement: basically a partitioning of one source file into several source files that, while physically separate, are logically one. I would just like some advice on how to achieve that.

Example of my typical situation in C++:

file1.cpp (#includes file1.h)
----file1.h (#includes file2.h)
----file2.cpp (#includes file2.h)
--------file2.h (#includes file3.h)
--------file3.cpp (#includes file3.h)
------------file3.h
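
And here's roughly the arrangement I'm after instead (made-up function names, purely to illustrate the goal):

// file1.h - declarations only
void func1();

// file3.h - declarations only
void func3();

// file1.cpp
#include "file1.h"
#include "file3.h"
void func1() { func3(); }       // file1.cpp calling into file3.cpp, which already works for me

// file3.cpp
#include "file1.h"
#include "file3.h"
void func3() { /* ... */ }
void helper() { func1(); }      // the direction I can never seem to get working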

Any advice would be greatly appreciated, as I usually end up with circular references of one sort or another in attempting a flat arrangement of files.

____________________
Zelda Hacking Forum
hobbies: delectatio morosa

Cellar Dweller
Posted on 03-27-07 09:57 PM (rev. 2 of 03-28-07 12:06 AM) Link | Quote | ID: 20004


Snifit
Level: 39

Posts: 16/287
EXP: 385268
Next: 19503

Since: 02-19-07
From: Arkansas

Last post: 4054 days
Last view: 3222 days
The standard way of dealing with this problem is to put an include guard (#ifndef/#define) around the contents of each header file. For example:

astuff.h
#ifndef _ASTUFF_H_
#define _ASTUFF_H_

#include "bstuff.h"

class bclass;           // forward declaration, in case bstuff.h hasn't been fully processed yet

class aclass {
    bclass *b;
    void some_method();
};

#endif /* _ASTUFF_H_ */


bstuff.h
#ifndef _BSTUFF_H_
#define _BSTUFF_H_

#include "astuff.h"

class aclass;           // forward declaration, same idea as in astuff.h

class bclass {
    aclass *a;
    void some_other_method();
};

#endif /* _BSTUFF_H_ */



The #ifndef/#define guards prevent infinite inclusion recursion, and the forward declarations make sure each class is at least declared before the other refers to it by pointer, so all of the needed declarations are provided no matter which header gets included first.
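
For instance, a hypothetical main.cpp can then pull in both headers, in either order, without the mutual #includes recursing:

// main.cpp - hypothetical usage of the two headers above
#include "astuff.h"
#include "bstuff.h"

int main()
{
    aclass a;   // both classes are fully declared by this point
    bclass b;
    return 0;
}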

MathOnNapkins
Posted on 03-28-07 12:04 AM Link | Quote | ID: 20060


Super Koopa
Level: 62

Posts: 60/842
EXP: 1935918
Next: 48768

Since: 02-19-07
From: durff

Last post: 4491 days
Last view: 4014 days
I already had those; the real problem turned out to be something to do with declarations versus definitions. I was getting errors like "thisVar is already defined in file1.obj" and other similar stuff. I think I finally understand how this works. For so long it seemed like magic, but I spent a long time messing around with it last night and it finally clicked. Thanks for your consideration though.
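
For anyone else who hits that "already defined" error, the pattern that finally made sense to me is roughly this (hypothetical names): declare shared variables as extern in the header, and define each one in exactly one .cpp file.

// globals.h - declarations only, so it's safe to #include from any .cpp
#ifndef GLOBALS_H
#define GLOBALS_H

extern int thisVar;     // declaration: "this variable exists somewhere"
void DoStuff();         // function prototypes are also just declarations

#endif

// globals.cpp - the one and only definition of thisVar
#include "globals.h"
int thisVar = 0;
void DoStuff() { thisVar++; }

// file1.cpp - can use thisVar freely; no "already defined" linker error
#include "globals.h"
void Foo() { DoStuff(); thisVar = 5; }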

But Cellar Dweller, there's another question that has been bothering me for quite some time. It stems from a discussion you had with Guy Perfect over files and memory overlays on the last board incarnation. This concerns me a bit b/c I plan on some day porting (or letting someone else port) my code to another platform. So if I do something like

char* dataFile;

// ...
// Do some stuff that loads a file and copies its contents into a properly
// allocated dataFile buffer
// ...

typedef struct
{
    int offset1;
    int offset2;
    int offset3;
    int offset4;
} fileStruct;

// Let's say that 0x30 bytes into the file, we expect to see a fileStruct
// embedded in it.
fileStruct fs = *(fileStruct*) &(dataFile[0x30]);

Now that thread got me worried that I shouldn't do stuff like this, at least if I expect my files to be used on different platforms; i.e., say someone uses a version of my program on a Mac and someone else uses it on a Linux machine. The data file that gets created and stored is likely to be incompatible across platforms, yes?

You mentioned stdint.h types (which don't even ship with Visual Studio, if you use that; at least VS 6 and 7 didn't...). How does that, along with shifting bits into and out of files, solve the endianness problem? Is there any easy way to define custom file formats without sacrificing cross-platform potential?



____________________
Zelda Hacking Forum
hobbies: delectatio morosa

HyperHacker
Posted on 03-28-07 12:35 AM (rev. 2 of 03-28-07 12:35 AM) Link | Quote | ID: 20082

...
Level: 73

Posts: 110/1220
EXP: 3367500
Next: 118368

Since: 03-25-07
From: no

Last post: 6094 days
Last view: 6077 days
I do it like this (order is important) and get no such problems:

-Include all headers into main.h
-Define all prototypes in main.h
-Include all source files in main.h
-Include main.h from main.c[pp]
-Compile main.c[pp] using a batch script

However it's been suggested I do it like this, which I'll try sometime:
-Define all prototypes in the header files that go with the files containing the functions
-Include all headers in main.h
-Compile every .c[pp] file separately into a .o file (never include a .c[pp] file in another .c[pp] file or in a .h file) and combine those into a program using a Unix makefile

Supposedly compiling to separate .o (object) files speeds up compilation, since a source file won't be recompiled if it was last modified before its object file was, which would help on big projects or slow computers. Obviously makefiles are more portable/standard than batch scripts; I just haven't bothered to learn them yet.
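
From what I've seen, a bare-bones makefile for that second setup looks something like this (made-up file and program names; GNU make syntax, and the indented command lines have to use tabs):

CXX      = g++
CXXFLAGS = -Wall -O2

OBJS = main.o file1.o file2.o file3.o

# link step: runs only if one of the .o files is newer than the program
myprog: $(OBJS)
	$(CXX) -o myprog $(OBJS)

# compile step: each .cpp is rebuilt only when it's newer than its .o
%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

clean:
	rm -f $(OBJS) myprog

(A real makefile would also list header dependencies so that editing a .h triggers rebuilds, but this is the basic shape.)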

Jagori
Posted on 03-28-07 02:45 AM Link | Quote | ID: 20144


Red Goomba
Level: 16

Posts: 26/35
EXP: 16405
Next: 3851

Since: 02-20-07

Last post: 6072 days
Last view: 6071 days
Re: Byte ordering... I'm not familiar with stdint.h myself, but the first thing that comes to mind is functions like htonl() and ntohl() (or the equivalent set of network <-> host byte order functions for your platform). Storing fields in network byte order should help with the endianness issue, since it's standardized. At a glance, stdint.h looks like it mostly just gives you fixed-width integer types rather than doing byte ordering itself, but like I said, I'm not familiar with it.
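
Something along these lines is what I mean (sketch only; assumes a POSIX-ish setup where the functions live in <arpa/inet.h> and a compiler that provides stdint.h - on Windows they come from <winsock2.h> instead):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl() / ntohl() */

/* Write a 32-bit offset in network (big-endian) byte order. */
void write_offset(FILE *f, uint32_t offset)
{
    uint32_t be = htonl(offset);   /* host order -> network order */
    fwrite(&be, sizeof be, 1, f);
}

/* Read it back and convert to whatever the host's byte order is. */
uint32_t read_offset(FILE *f)
{
    uint32_t be = 0;
    fread(&be, sizeof be, 1, f);
    return ntohl(be);              /* network order -> host order */
}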

Koitenshin -∞
Posted on 03-28-07 04:38 AM Link | Quote | ID: 20176

Happy Hour!
Level: 81

Posts: 80/1556
EXP: 4850341
Next: 142508

Since: 03-25-07

Last post: 2678 days
Last view: 2676 days
I'm not adding anything helpful, but I had to ask... is this in any way related to the editor you are working on for LTTP?

____________________
Quiz Result Provided By: theOtaku.com.
What FF7: Advent Children Character Are You?

MathOnNapkins
Posted on 03-28-07 05:06 AM (rev. 2 of 03-28-07 05:10 AM) Link | Quote | ID: 20189


Super Koopa
Level: 62

Posts: 61/842
EXP: 1935918
Next: 48768

Since: 02-19-07
From: durff

Last post: 4491 days
Last view: 4014 days
The second question is, yes; the first question was a more generalized one. So far I've made a system that puts all the dungeon data into a .dng file so that other users of the editor can import that data into their own ROMs. But if it's on a PowerPC Macintosh, the endianness is reversed (afaik), so you could have some potential problems importing a .dng data file created on a Windows machine into the ported program running on Mac OS.

00 03 00 00 <-- say this is the offset in the file to tell you where a certain thing is.

that's 0x300 for Windows users (little endian)

a big-endian system (a PowerPC Mac, say) would read it as 0x30000 (big difference)

I guess I shouldn't really be that worried about it. I've already written functions that kind of deal with the endianness problem:

// Reads the four bytes at 'offset' as a little-endian dword, regardless of
// the host CPU's endianness.
unsigned int GetBufferDWord(bufPtr source, unsigned int offset)
{
    // error checking

    unsigned int result = (unsigned int) ( ( (source->contents) + offset)[0] & 0xFF );
    result |= (unsigned int) ( ( (source->contents) + offset)[1] << 8  & 0x00FF00);
    result |= (unsigned int) ( ( (source->contents) + offset)[2] << 16 & 0xFF0000);
    result |= (unsigned int) ( ( (source->contents) + offset)[3] << 24 & 0xFF000000);

    return result;
}

Whichever endianness the machine running the program has, the result in memory will be the same with this function, because the bytes in the file are always treated as little endian.
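
The write direction would be the mirror image, something like this (sketch only; the function name and error checking are placeholders, same bufPtr type as above):

// Stores 'value' into the buffer at 'offset' one byte at a time, always in
// little-endian order, so the file layout doesn't depend on the host CPU.
void SetBufferDWord(bufPtr dest, unsigned int offset, unsigned int value)
{
    // error checking, as above

    (dest->contents)[offset + 0] = (char) ( value        & 0xFF);
    (dest->contents)[offset + 1] = (char) ((value >> 8)  & 0xFF);
    (dest->contents)[offset + 2] = (char) ((value >> 16) & 0xFF);
    (dest->contents)[offset + 3] = (char) ((value >> 24) & 0xFF);
}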

____________________
Zelda Hacking Forum
hobbies: delectatio morosa
