…tion into dev-private

2 parents fe58efe + 6337105, commit f025d73
mkhairy committed Jun 18, 2019
Showing 5 changed files with 37 additions and 7 deletions.
5 changes: 5 additions & 0 deletions .travis.yml
@@ -18,5 +18,10 @@ matrix:
- CONFIG=TITANV
- CUDA_INSTALL_PATH=/usr/local/cuda-9.1/
- PTXAS_CUDA_INSTALL_PATH=/usr/local/cuda-9.1/
- services: docker
env:
- CONFIG=TITANV-LOCALXBAR
- CUDA_INSTALL_PATH=/usr/local/cuda-9.1/
- PTXAS_CUDA_INSTALL_PATH=/usr/local/cuda-9.1/

script: docker run -v `pwd`:/home/runner/gpgpu-sim_distribution:rw tgrogers/gpgpu-sim_regress:volta_update /bin/bash -c "./start_torque.sh; chown -R runner /home/runner/gpgpu-sim_distribution; su - runner -c 'export CUDA_INSTALL_PATH=$CUDA_INSTALL_PATH && export PTXAS_CUDA_INSTALL_PATH=$PTXAS_CUDA_INSTALL_PATH && source /home/runner/gpgpu-sim_distribution/setup_environment && make -j -C /home/runner/gpgpu-sim_distribution && cd /home/runner/gpgpu-sim_simulations/ && git pull && /home/runner/gpgpu-sim_simulations/util/job_launching/run_simulations.py -C $CONFIG -B rodinia_2.0-ft -N regress && /home/runner/gpgpu-sim_simulations/util/job_launching/monitor_func_test.py -v -N regress'"
1 change: 1 addition & 0 deletions Makefile
@@ -168,6 +168,7 @@ $(SIM_LIB_DIR)/libcudart.so: makedirs $(LIBS) cudalib
if [ ! -f $(SIM_LIB_DIR)/libcudart.so.10.0 ]; then ln -s libcudart.so $(SIM_LIB_DIR)/libcudart.so.10.0; fi
if [ ! -f $(SIM_LIB_DIR)/libcudart.so.10.1 ]; then ln -s libcudart.so $(SIM_LIB_DIR)/libcudart.so.10.1; fi


$(SIM_LIB_DIR)/libcudart.dylib: makedirs $(LIBS) cudalib
g++ -dynamiclib -Wl,-headerpad_max_install_names,-undefined,dynamic_lookup,-compatibility_version,1.1,-current_version,1.1\
$(SIM_OBJ_FILES_DIR)/libcuda/*.o \
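The Makefile hunk above guards each version-suffixed alias of the simulator's libcudart shim with an existence check before creating the symlink. A minimal sketch of the same guard in Python, using a scratch directory rather than $(SIM_LIB_DIR):

```python
import os, tempfile

# Sketch of the Makefile guard: create version-suffixed aliases for the
# simulator's libcudart shim only if they do not already exist.
libdir = tempfile.mkdtemp()
open(os.path.join(libdir, "libcudart.so"), "w").close()

for ver in ("10.0", "10.1"):
    alias = os.path.join(libdir, "libcudart.so." + ver)
    if not os.path.exists(alias):          # mirrors `if [ ! -f ... ]`
        os.symlink("libcudart.so", alias)  # mirrors `ln -s`

print(sorted(os.listdir(libdir)))
```

Running the loop twice is harmless: the existence check makes it idempotent, which is why the Makefile rule can be re-run on every build.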
26 changes: 25 additions & 1 deletion README
@@ -177,7 +177,7 @@ GPGPU-Sim dependencies:
* flex
* zlib
* CUDA Toolkit

GPGPU-Sim documentation dependencies:
* doxygen
* graphviz
@@ -208,6 +208,9 @@ python-matplotlib"
CUDA SDK dependencies:
"sudo apt-get install libxi-dev libxmu-dev libglut3-dev"

If you are running applications that use NVIDIA libraries such as cuDNN and
cuBLAS, install those libraries as well.

Finally, ensure CUDA_INSTALL_PATH is set to the location where you installed
the CUDA Toolkit (e.g., /usr/local/cuda) and that $CUDA_INSTALL_PATH/bin is in
your PATH. You probably want to modify your .bashrc file to include the
@@ -216,6 +219,10 @@ following (this assumes the CUDA Toolkit was installed in /usr/local/cuda):
export CUDA_INSTALL_PATH=/usr/local/cuda
export PATH=$CUDA_INSTALL_PATH/bin:$PATH

If running applications which use cuDNN or cuBLAS:
export CUDNN_PATH=<Path To cuDNN Directory>
export LD_LIBRARY_PATH=$CUDA_INSTALL_PATH/lib64:$CUDA_INSTALL_PATH/lib:$CUDNN_PATH/lib64


Step 2: Build
=============
@@ -261,6 +268,23 @@ ldd <your_application_name>

You should see that your application is using the libcudart.so file in the GPGPU-Sim directory.

If running applications which use cuDNN or cuBLAS:

* Modify the Makefile or the compilation command of the application to change
all the dynamic links to static ones, for example:
* -L$(CUDA_PATH)/lib64 -lcublas to
-L$(CUDA_PATH)/lib64 -lcublas_static

* -L$(CUDNN_PATH)/lib64 -lcudnn to
-L$(CUDNN_PATH)/lib64 -lcudnn_static

* Modify the Makefile or the compilation command such that the following
flags are used by the nvcc compiler:
-gencode arch=compute_61,code=compute_61

(the number 61 refers to the SM version; set it to match the
"-gpgpu_ptx_force_max_capability" value in the GPGPU-Sim config you use)
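The capability note above can be sketched as a tiny helper; `gencode_flag` is a hypothetical name for illustration, not part of GPGPU-Sim or the CUDA toolchain:

```python
# Hypothetical helper: build the nvcc -gencode flag from the capability
# value used in gpgpusim.config (-gpgpu_ptx_force_max_capability).
def gencode_flag(capability):
    return "-gencode arch=compute_{0},code=compute_{0}".format(capability)

print(gencode_flag(61))  # for a config that forces max capability 61
```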

Copy the contents of configs/QuadroFX5800/ or configs/GTX480/ to your
application's working directory. These files configure the microarchitecture
models to resemble the respective GPGPU architectures.
6 changes: 3 additions & 3 deletions configs/tested-cfgs/SM7_QV100/gpgpusim.config
@@ -11,7 +11,7 @@
# functional simulator specification
-gpgpu_ptx_instruction_classification 0
-gpgpu_ptx_sim_mode 0
- -gpgpu_ptx_force_max_capability 70
+ -gpgpu_ptx_force_max_capability 60


# Device Limits
@@ -21,7 +21,7 @@
-gpgpu_runtime_pending_launch_count_limit 2048

# Compute Capability
- -gpgpu_compute_capability_major 7
+ -gpgpu_compute_capability_major 6
-gpgpu_compute_capability_minor 0

# SASS execution (only supported with CUDA >= 4.0)
@@ -44,7 +44,7 @@

# shader core pipeline config
-gpgpu_shader_registers 65536
- -gpgpu_occupancy_sm_number 70
+ -gpgpu_occupancy_sm_number 60

# This implies a maximum of 64 warps/SM
-gpgpu_shader_core_pipeline 2048:32
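The three capability-related options changed in this config should stay in agreement with each other. A small consistency check over a config fragment (a sketch, assuming the plain `-option value` line format used above):

```python
import re

# Fragment mirroring the options changed in SM7_QV100/gpgpusim.config.
config = """
-gpgpu_ptx_force_max_capability 60
-gpgpu_compute_capability_major 6
-gpgpu_compute_capability_minor 0
-gpgpu_occupancy_sm_number 60
"""

opts = dict(re.findall(r"-(\S+)\s+(\S+)", config))
cap = opts["gpgpu_ptx_force_max_capability"]

# The occupancy SM number and the major.minor capability must match it.
assert cap == opts["gpgpu_occupancy_sm_number"]
assert cap == opts["gpgpu_compute_capability_major"] + \
              opts["gpgpu_compute_capability_minor"]
print("config consistent: SM", cap)
```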
6 changes: 3 additions & 3 deletions src/gpgpu-sim/gpu-cache.h
@@ -681,15 +681,15 @@ class cache_config {
// tag + index is required to check for hit/miss. Tag is now identical to the block address.

//return addr >> (m_line_sz_log2+m_nset_log2);
- return addr & ~(m_line_sz-1);
+ return addr & ~(new_addr_type)(m_line_sz-1);
}
new_addr_type block_addr( new_addr_type addr ) const
{
- return addr & ~(m_line_sz-1);
+ return addr & ~(new_addr_type)(m_line_sz-1);
}
new_addr_type mshr_addr( new_addr_type addr ) const
{
- return addr & ~(m_atom_sz-1);
+ return addr & ~(new_addr_type)(m_atom_sz-1);
}
enum mshr_config_t get_mshr_type() const
{
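The gpu-cache.h change above widens the mask before the AND. With a 32-bit `m_line_sz`, `~(m_line_sz-1)` is also 32 bits wide, so AND-ing it against a 64-bit `new_addr_type` address silently clears the upper 32 address bits; casting the mask to `new_addr_type` first avoids that. The effect, simulated in Python with explicit widths:

```python
# Simulate the integer widths involved in gpu-cache.h's block_addr().
M32 = 0xFFFFFFFF
M64 = 0xFFFFFFFFFFFFFFFF

line_sz = 128         # unsigned (32-bit) member, like m_line_sz
addr = 0x100000040    # 64-bit new_addr_type address above 4 GB

mask_narrow = ~(line_sz - 1) & M32  # old code: mask stays 32 bits wide
mask_wide   = ~(line_sz - 1) & M64  # new code: mask cast to new_addr_type

bad  = addr & mask_narrow   # upper 32 bits of the address are lost
good = addr & mask_wide     # only the in-line offset bits are cleared

print(hex(bad), hex(good))
```

The same reasoning applies to `mshr_addr()`, where the mask is built from `m_atom_sz` instead of `m_line_sz`.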
