big-endian support working on simulation only #20
Fixed! The problem was BITS_BIG_ENDIAN being set to 1 in gcc. When comparing the generated code for putchar(), for example, there is a variable & 1 test: an "addi" is generated by the little-endian compiler, but an "slli" by the big-endian compiler. By setting BITS_BIG_ENDIAN to 0, the generated code makes sense again and works both on the FPGA and in simulation. This fix confirms that the DarkRISCV is accidentally "bi-endian", i.e. the design happens to provide a way for the hardware and software to work with both little and big-endian. The affected file in gcc:
Of course, the problem is never so easy to solve...
After some effort to make gcc generate big-endian output for RISCV, I found a mixed result: the .data* segment was fully generated as big-endian (there are some extra changes needed in binutils in order to make it work), but the .text* segment is not fully generated. Anyway, after some extra research I found that it is possible to implement a more intelligent way to handle both big- and little-endian memories in the same core. This means it is possible to put the compiled .data and .text in little-endian memory areas and put network frames in big-endian memory areas, in a way that the endian handling between those areas can be optimized (optimized means optimized, not transparent).
Hi. Just a quick heads-up: GCC 11 (and binutils 2.36) fully support big-endian RISC-V targets.
Wow! This is very good news! I will take a look as soon as possible! :)
The big-endian support works only in simulation. The problem is probably related to gcc: even in simulation, the -Os optimization does not appear to match the equivalent optimization in the little-endian version of gcc.