Maker Pro

Microchip PIC programming question (dsPIC actually)


Mook Johnson

The data memory is separated into two sections (X and Y), and these sections
are used for simultaneous access during a MAC operation for single-cycle
execution.

There are two sections I've seen within the X and Y data sections.

One is simply called data and the other is called .bss.

From what I've read, .bss is uninitialized memory while the data memory
is initialized to zero.

What is the purpose of the .bss section and why use it? How can I allocate
all X memory to Xdata and have it all initialized?

I've never programmed a PIC processor before (I have done the 8051, though),
so all of this is new.

thanks
 

Leon Heller

Mook Johnson said:
The data memory is separated into two sections (X and Y), and these
sections are used for simultaneous access during a MAC operation for
single-cycle execution.

There are two sections I've seen within the X and Y data sections.

One is simply called data and the other is called .bss.

From what I've read, .bss is uninitialized memory while the data memory
is initialized to zero.

What is the purpose of the .bss section and why use it? How can I
allocate all X memory to Xdata and have it all initialized?

I've never programmed a PIC processor before (I have done the 8051, though),
so all of this is new.

You will probably be better off asking this question on the dsPIC discussion
forum on Microchip's web site.

Leon
 

Jonathan Kirwan

The data memory is separated into two sections (X and Y), and these sections
are used for simultaneous access during a MAC operation for single-cycle
execution.

This is a very common feature of DSPs, being able to read from two
RAMs during the same cycle -- exactly because of things like FIR
filters and FFTs which can use the feature to advantage.
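
To make that concrete, here's a rough C sketch of a FIR inner loop -- nothing
dsPIC-specific, and the names are made up. The coefficient array and the
sample delay line are the two operand streams a dual-bus DSP can fetch in
parallel; on a part like the dsPIC they would typically be placed in X and Y
data memory so the MAC gets both reads in a single cycle.

#include <stdint.h>

#define NTAPS 16

static const int16_t coeff[NTAPS] = { 1, 2, 3, 4, 5, 6, 7, 8,
                                      8, 7, 6, 5, 4, 3, 2, 1 };
static int16_t delay[NTAPS];   /* the most recent NTAPS input samples */

int32_t fir_step(int16_t new_sample)
{
    int32_t acc = 0;

    /* Shift the delay line and insert the newest sample. */
    for (int i = NTAPS - 1; i > 0; i--)
        delay[i] = delay[i - 1];
    delay[0] = new_sample;

    /* Each tap needs one coefficient read and one sample read -- exactly
       the two simultaneous accesses that dual data memories provide. */
    for (int i = 0; i < NTAPS; i++)
        acc += (int32_t)coeff[i] * delay[i];

    return acc;
}
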
There are two sections I've seen within the X and Y data sections.

One is simply called data and the other is called .bss.

I'm not at all familiar with the dsPIC, but this seems a reasonable
convention for the linker to support. The two physical memories can
each be divided into initialized and uninitialized regions. Clearly,
the uninitialized parts will be written eventually during runtime.
From what I've read, .bss is uninitialized memory while the data memory
is initialized to zero.

Initialized data areas can either be initialized to some arbitrary
constant, like zero, or else to any particular collection of values.
In C programming, as I believe I recall for example, the initialized
data is set to the values specified in the code and the uninitialized
data is set to semantic zero values. In other words, in C, all of the
static data areas are initialized, one way or another. In other
languages, though, there may very well be truly uninitialized data
(which must be assumed to be exactly the values most likely to cause
problems if you don't initialize them at runtime before using them.)
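
A small C illustration of that guarantee -- the names here are made up:

static int hits;          /* static storage: guaranteed to start at 0       */
static int limit = 100;   /* static storage with an explicit initializer    */

void count(void)
{
    int scratch;          /* automatic storage: indeterminate until assigned */
    scratch = hits + 1;   /* so it must be written before it is read         */
    if (scratch <= limit)
        hits = scratch;
}
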
What is the purpose of the .bss section and why use it?

In general, a .bss section saves either on ROM/FLASH needed to store
the initializers or else on the time needed to initialize data which
doesn't need to be initialized at startup.
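
For instance -- just a generic C sketch, not dsPIC-specific -- a large
zero-filled buffer costs nothing in flash if it lands in .bss, whereas giving
it explicit initializers forces a same-sized image to be kept in flash and
copied out at startup:

#include <stdint.h>

/* Lands in .bss: the executable records only its size, and the startup
   code zero-fills it, so the 1 KB of RAM costs no flash at all.        */
static int16_t capture_buf[512];

/* Lands in the initialized data section: these values must also be kept
   in flash and copied into RAM before main() begins.                    */
static int16_t window[8] = { 3, 7, 12, 18, 25, 31, 36, 40 };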

In the case of the dsPIC situation you face, I can't say since I don't
use the CPU nor the toolset.
How can I allocate all X memory to Xdata and have it all initialized?

That is a question for dsPIC programmers. Probably better to ask this
in comp.arch.embedded, I think. Unless you already have...
I've never programmed a PIC processor before (I have done the 8051, though),
so all of this is new.

It'll be fun.

I've written up some general information elsewhere on the overall
subject (not the dsPIC itself) and I'll include that here. Just in
case any of it makes sense to you and may help in ferreting out what
you need in the dsPIC tool docs. Keep in mind it's just some informal
work I added to a readme in a project, so it's not well-hewn gospel.
But you will see how I applied BSS in the writing that follows:

---

A program is often the combined product of several translation units
or modules, linked together in useful fashion. Whether a program is
written in assembly language, C, Pascal, other languages, or some
combination of them, the memory layout for the program usually follows
a standardized template. All of the code is collected together,
constants are collected together, static data which needs to be
initialized is collected together, and static data which does not need
any initialization is also collected.

Automatic data is placed on the 'stack' and heap space grows from the
end of the static data (initialized, uninitialized, and constant) up
towards the stack while the stack grows downwards.

Although linkers support many complex details for building correct
programs from separately compiled modules, the final result tends to
break down into six distinct sections which satisfy the needs of a
general program/task model:

Section Description  Access      NV?  Size
---------------------------------------------------------------
Code                 Execute     Yes  Fixed/static
Constants            Read        Yes  Fixed/static
Initialized Data     Read/Write  Yes  Fixed/static
Uninitialized Data   Read/Write  No   Fixed/static
Heap                 Read/Write  No   Variable, up
Stack                Read/Write  No   Variable, down

The code section is much like the constant data section described
below, except that it is for program instructions. For C programs,
the required size of the code space is set at link time and does not
need to change during execution.

(Of course, none of this precludes things like dynamic link libraries,
downloadable extensions, generating code at run time, etc. But the
general model for C has the size of the code determined at link time.
Naturally, self-modifying code also may break this model.)

The constant data section, as I use the term above, is meant for data
which cannot be modified during execution. This region is different
from initialized data because constants do not ever require
write-access during run-time. These may be the constants used to
define a particular FIR filter, for example. If a C program defines
an instance of data, as in:

const int p[]= { 2, 3, 5, 7, 11 };

and does not forge a pointer to these constants during execution
through which data can be written, then the array p[] can be located
in a constant section. For some systems, these constants might be
co-located with code, if the code space permits reading-as-data.

(A C compiler/linker tool may consider the above definition as
sufficient by itself for it to decide to place the array in read-only
memory and to accept the fact that a write-access through a forged
pointer will fail.)
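
Concretely, a "forged" access might look like the fragment below, reusing
the p[] array from above; whether the write traps or silently succeeds
depends entirely on where the toolchain placed p[]:

void clobber(void)
{
    int *q = (int *)p;   /* cast away const: a forged pointer            */
    q[0] = 13;           /* undefined behavior; fails if p[] sits in ROM */
}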

The initialized data section is for a region of read/write memory,
where the initial values must be preset to specific values before the
C program is allowed to start running. However, the initialized data
area requires read/write access at run time. For example, this
instance definition will be placed in the initialized data section:

int f[]= { 1, 1, 2, 3, 5, 8, 13 };

Array f[] is assumed to have these values present at the time a C
program starts running, but the declaration for this definition also
permits write access, so the memory it occupies needs read/write
access.

The uninitialized data section is for data which does not require
initial values. This means that the data could start out randomized
and that this would not impact the correct behavior of the C program.
Technically, C programs are guaranteed that all static variables start
out with semantic "zero" values, even when initializations aren't
provided in the source code. But it still may be useful for a C
compiler to place definitions of instances like this:

int z[20];

into the uninitialized data section, if at some point before the C
program is allowed to begin it's possible to guarantee that these
arrays are set up to their correct, semantic-zero values. This might
be done by a short routine designed to clear the entire section and
which executes before the first line of the C program does. Whether
or not this works out depends on the details of the representations
for integers, floating point, etc.
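
Such a clearing routine can be tiny. Here's a sketch; the section-boundary
symbols are hypothetical, since every toolchain names them differently:

/* Hypothetical symbols the linker provides for the section's bounds. */
extern unsigned char __bss_start[];
extern unsigned char __bss_end[];

/* Called from the reset/startup code, before main() runs. */
void clear_bss(void)
{
    for (unsigned char *p = __bss_start; p < __bss_end; p++)
        *p = 0;
}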

It's quite possible that a C compiler simply does not use this section
and depends entirely on all static data being "initialized data."

The heap section isn't static at run time. Normally, this section is
set up having zero size to start and then grows and shrinks during
execution. This is the area used by routines like malloc(), for
example. A simple design for the heap has it growing upwards and away
from the last memory location required by all the static data areas
and towards the stack.
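
A toy bump allocator shows the "grow upward, away from the static data" idea;
everything here is illustrative, including the stand-in array for the heap
region and the lack of any free():

#include <stddef.h>

#define HEAP_SIZE 2048

static unsigned char heap[HEAP_SIZE];  /* stand-in for the heap region     */
static size_t heap_top;                /* grows upward, toward the stack   */

void *toy_alloc(size_t nbytes)
{
    if (nbytes > HEAP_SIZE - heap_top)
        return NULL;                   /* would run into the stack's space */
    void *block = &heap[heap_top];
    heap_top += nbytes;
    return block;
}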

The stack section also isn't static at run time. Normally, this
section also starts out with zero size and grows and shrinks during
execution. This is usually where function parameters and local,
"auto" variables reside. It's also used by the C compiler for
temporary storage, spilling registers, etc. And it usually grows
downward and away from the last possible memory location for the
program and towards the growing end of the heap section. In this
fashion, there is a single, invisible area of read-write memory, a "no
man's land" so to speak, between the heap and the stack, which each
section grows "into" like a candle burning at both ends. If, during
execution, the heap grows into the stack (or vice versa) then the
program will probably fail to operate correctly.

Some systems have only read/write, dynamic RAM available directly to
their CPU, so the non-volatile portions are often kept "on disk" as a
file and loaded by some other program before running them. DOS,
Windows, etc., are examples for PCs. But even PCs have some
non-volatile storage for their BIOS residing in memory directly
accessible to their CPU, in order to get everything up and going
correctly.

The above models fit easily into embedded microcontrollers with a von
Neumann architecture, where code and data occupy a shared address
space. The sections mentioned above are often simply placed into
memory in the order I listed them, with the stack started at the
highest possible RAM address. The code section, the constant section,
and the data required for the initialized data section are then placed
into non-volatile memory such as EPROM or flash, so they are present
when the system powers up. The startup code for the C program will
then copy the initialized data from the non-volatile memory into RAM,
where it can be modified as needed. (It's also possible that it may
similarly copy out the constant section.)
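
That startup copy is usually just a loop like the following sketch, again
with hypothetical linker-provided symbols for the flash image and the RAM
destination:

/* Hypothetical symbols: the flash copy of the initialized data, and the
   writable RAM region it must be copied into before main() starts.     */
extern const unsigned char __data_load_start[];
extern unsigned char __data_start[];
extern unsigned char __data_end[];

void copy_initialized_data(void)
{
    const unsigned char *src = __data_load_start;
    for (unsigned char *dst = __data_start; dst < __data_end; dst++)
        *dst = *src++;
}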

For embedded microcontrollers with a Harvard architecture, where code
does not occupy a shared address space with data, the details get a
little more complex. When the system starts up, the code, constants,
and initialized data must be available. One way to do this is to
provide non-volatile memory for both code and data. Then the startup
code can copy out the values required for the initialized data section
from the non-volatile data memory into writable RAM at startup.
However, this isn't a common choice for such controllers -- usually,
the non-volatile memory is used only for the code space. If such a
system does NOT support a special method for accessing the code space
*as* data, then the compiler tool will need to provide additional code
to initialize the constant section and the initialized data section
before the C program begins. The instruction stream must then supply
these needed constants (and it will take a lot of space, often.) This
is why most Harvard-type microcontrollers include a special
instruction to allow access to the code space, reading it as data
values, and do not provide non-volatile data memory at all.

A generalized, illustrating example for the above discussion is:

Segment Class     Segment Name     Segment Description
-----------------------------------------------------------------
flash ROM         CODE             Code section
flash ROM         CONST_copy       Data for constant section
flash ROM         INIT_copy        Data for init'd data section
volatile RAM      CONST            Constant data section
volatile RAM      INIT             Initialized data section
volatile RAM      BSS              Uninitialized data section
volatile RAM      HEAP             Heap section
volatile RAM      STACK            Stack section

In von Neumann architectures, the size of CONST_copy and INIT_copy
matches the size of CONST and INIT, respectively. The startup code
copies these two sections to their respective RAM counterparts before
starting the C program. In von Neumann architectures, or where the
Harvard architecture supports special instructions to access the code
space as data, the need for a separate CONST_copy and CONST segment
can be eliminated, so that the structure looks something like:

Segment Class     Segment Name     Segment Description
-----------------------------------------------------------------
flash ROM         CODE             Code section
flash ROM         CONST            Constant data section
flash ROM         INIT_copy        Data for init'd data section
volatile RAM      INIT             Initialized data section
volatile RAM      BSS              Uninitialized data section
volatile RAM      HEAP             Heap section
volatile RAM      STACK            Stack section

This reduces the need for volatile RAM by removing the need for a
separate instance in RAM for constants, if the compiler supports it.

For the case of supporting threads, we need to provide a separate
stack for each thread. It then looks something like:

Segment Class     Segment Name     Segment Description
-----------------------------------------------------------------
flash ROM         CODE             Code section
flash ROM         CONST_copy       Data for constant section
flash ROM         INIT_copy        Data for init'd data section
volatile RAM      CONST            Constant data section
volatile RAM      INIT             Initialized data section
volatile RAM      BSS              Uninitialized data section
volatile RAM      HEAP             Heap section
volatile RAM      STACK_P1         P1 stack section
volatile RAM      STACK_P2         P2 stack section
volatile RAM      STACK_P...       P... stack section
volatile RAM      STACK_Pn         Pn stack section
volatile RAM      STACK            Startup stack section

We can carve out the additional stack sections from the HEAP section
or from the starting STACK section provided by the compiler. Either
way we choose to do it, the required space for their stacks comes out
of that no man's land between the stack and heap. Each thread does
require its own stack to maintain its context. (For full support for
processes, we'd need to create separate CONST, INIT, BSS, and HEAP
sections as well.)
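
One way to carve a thread's stack out of the heap, sketched with a made-up
start_thread() call so the idea is concrete:

#include <stdint.h>
#include <stdlib.h>

#define THREAD_STACK_SIZE 512

/* Hypothetical: however the scheduler launches a thread, given an entry
   point and that thread's initial stack pointer.                        */
extern void start_thread(void (*entry)(void), uint8_t *initial_sp);

int spawn(void (*entry)(void))
{
    uint8_t *stack = malloc(THREAD_STACK_SIZE);   /* comes out of the heap */
    if (stack == NULL)
        return -1;
    /* Stacks grow downward here, so the initial SP is the top of the block. */
    start_thread(entry, stack + THREAD_STACK_SIZE);
    return 0;
}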
 

Tim Wescott

Mook said:
The data memory is separated into two sections (X and Y), and these sections
are used for simultaneous access during a MAC operation for single-cycle
execution.

There are two sections I've seen within the X and Y data sections.

One is simply called data and the other is called .bss.

From what I've read, .bss is uninitialized memory while the data memory
is initialized to zero.

What is the purpose of the .bss section and why use it? How can I allocate
all X memory to Xdata and have it all initialized?

I've never programmed a PIC processor before (I have done the 8051, though),
so all of this is new.

thanks
Are you sure it's not .data?

The common convention is to store initialized data in a .data section
(or segment, depending on whose terminology you use), uninitialized (but
set to zero) data in a .bss section, constant data in a .const section,
and code in .text.

So if you're writing in C you'll have something like this:

int bob = 23; // goes into .data
int ralph; // goes into .bss, set to 0
const int sue = 48; // goes into .const
 

Leon Heller

Mook Johnson said:
The data memory is separated into two sections (X and Y), and these
sections are used for simultaneous access during a MAC operation for
single-cycle execution.

There are two sections I've seen within the X and Y data sections.

One is simply called data and the other is called .bss.

From what I've read, .bss is uninitialized memory while the data memory
is initialized to zero.

What is the purpose of the .bss section and why use it? How can I
allocate all X memory to Xdata and have it all initialized?

I've never programmed a PIC processor before (I have done the 8051, though),
so all of this is new.

The dsPIC is different from the PICs; it has a new architecture. Your
questions are addressed in the MPLAB ASM30/LINK30 and Utilities User's
Guide, available on the Microchip web site.

Leon
 