
Commit de1ffbd

update links to JuliaAstro

1 parent cc6ad56

2 files changed: 10 additions & 10 deletions

README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # AAS247Julia

-[![Rendered Pluto.jl notebooks](https://img.shields.io/badge/Rendered_Pluto.jl_notebooks-blue)](https://denfc.github.io/AAS247Julia/notebooks)
+[![Rendered Pluto.jl notebooks](https://img.shields.io/badge/Rendered_Pluto.jl_notebooks-blue)](https://juliaastro.org/AAS247Julia/notebooks)

 SSID: AAS 247 Winter \
 Password: AAS247Winter

notebooks/2-4: Parallel-Computing.jl

Lines changed: 9 additions & 9 deletions
@@ -24,7 +24,7 @@ Julia has a strong parallel computing infrastructure that enable high performanc

 # Vectorization

-All modern CPUs provide vectorization or Single-Instruction-Multiple-Data (**SIMD**) execution. SIMD is when the computer can apply a single instruction to multiple data in a single CPU cycle. For example, consider adding two vectors `A` and `B`.
+All modern CPUs provide vectorization or Single-Instruction-Multiple-Data (**SIMD**) execution. SIMD is when the computer can apply a single instruction to multiple data in a single CPU cycle. For example, consider adding two vectors `A` and `B`.

 The serial computation loops through each value in the two arrays and applies the
 addition operation during each CPU cycle (left figure). Whereas, the vectorized computation loops through groups of values in the two arrays and applies the addition operation during each CPU cycle (right figure), resulting in 2x, 4x, or greater performance improvement depending on the CPU architecture (i.e., AVX, AVX2, AVX512).
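The hunk above lands in the notebook's SIMD discussion (serial one-element-per-cycle addition vs. vectorized group-wise addition). A minimal sketch of that contrast using Julia's built-in `@simd` hint; the function names are illustrative, not the notebook's own code:

```julia
# Serial-style addition: one element per loop iteration.
function add_serial!(out, A, B)
    for i in eachindex(out, A, B)
        @inbounds out[i] = A[i] + B[i]
    end
    return out
end

# `@simd` asserts the iterations are independent, so the compiler
# may pack several additions into one vector (SIMD) instruction.
function add_simd!(out, A, B)
    @simd for i in eachindex(out, A, B)
        @inbounds out[i] = A[i] + B[i]
    end
    return out
end

A, B = rand(1_000), rand(1_000)
add_simd!(similar(A), A, B) == A .+ B    # true: same result, fewer cycles
```

Both functions compute identical results; the payoff from `@simd` depends on the CPU's vector width (AVX, AVX2, AVX512), matching the 2x/4x/greater figures quoted above.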
@@ -37,7 +37,7 @@ Julia will vectorize array compuations whenever possible and as discussed in ses

 # ╔═╡ 4473cae3-9350-4741-8457-6bacb1def61b
 html"""
-<img src="https://github.com/denfc/AAS247Julia/blob/main/data/vectorization.jpeg?raw=true"/>
+<img src="https://github.com/JuliaAstro/AAS247Julia/blob/main/data/vectorization.jpeg?raw=true"/>
 """

 # ╔═╡ 54d083d4-3bf8-4ed7-95b5-203e13cc3249
@@ -525,7 +525,7 @@ This means that for each addition clock, we are simultaneously adding four eleme
 md"""
 ## 2: Vectorization Using Packages
 !!! warning ""
-*
+*
 """

 # ╔═╡ 3de353d3-ef0c-4e25-b52c-189061adac12
@@ -610,7 +610,7 @@ md"""
 md"""
 ## 3: Threads.@threads
 !!! warning ""
-*
+*
 """

 # ╔═╡ e468d9fd-ead0-4ce4-92b1-cb96132f6921
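The `## 3: Threads.@threads` cell touched above covers loop-level multithreading. A hedged sketch of the basic pattern (not the notebook's own benchmark; start Julia with `julia -t N` to get N threads):

```julia
using Base.Threads

# Safe to parallelize: each iteration writes a distinct index,
# so no two threads ever touch the same memory location.
function axpy!(y, a, x)
    @threads for i in eachindex(y, x)
        @inbounds y[i] += a * x[i]
    end
    return y
end

y = zeros(10^6)
axpy!(y, 2.0, ones(10^6))   # every element of y becomes 2.0
nthreads()                  # how many threads this session was started with
```

As the next hunk's context notes, whether `@threads` actually helps should be settled by benchmarking: for cheap loop bodies the scheduling overhead can swamp the speedup.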
@@ -694,7 +694,7 @@ To determine whether threading is useful, a user should benchmark the code. Addi
 md"""
 ## 4: Thread Issues
 !!! warning ""
-*
+*
 """

 # ╔═╡ bd78505c-904c-4e65-9160-6b3ebf02c21e
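The `## 4: Thread Issues` cell edited above concerns data races, and a later hunk's context compares an atomic fix against a manual one. An illustrative sketch of the classic racy accumulation and its `Atomic` repair (my names, not the notebook's code):

```julia
using Base.Threads

# BROKEN under threads: `total[] += xs[i]` is an unsynchronized
# read-modify-write, so concurrent updates can be lost (a data race).
function sum_racy(xs)
    total = Ref(0.0)
    @threads for i in eachindex(xs)
        total[] += xs[i]
    end
    return total[]
end

# Correct: every update is an atomic read-modify-write. Per the
# notebook, this is substantially slower than the manual approach
# of accumulating per-thread partial sums and combining them.
function sum_atomic(xs)
    total = Atomic{Float64}(0.0)
    @threads for i in eachindex(xs)
        atomic_add!(total, xs[i])
    end
    return total[]
end

sum_atomic(ones(100_000))    # 100000.0, regardless of thread count
```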
@@ -826,7 +826,7 @@ The atomic solution is substantially slower than the manual solution. In fact, a
 md"""
 ## 5: High-Level Threads
 !!! warning ""
-*
+*
 """

 # ╔═╡ 44ddfdd9-7898-4561-b46a-045bcc1ae467
@@ -861,7 +861,7 @@ is almost as fast as our hand-written example, but requires less understanding o
 md"""
 ## 6: GPUs
 !!! warning ""
-*
+*
 """

 # ╔═╡ 799de936-6c6d-402f-93db-771e7ec1ef51
@@ -952,9 +952,9 @@ outlarge_gpu .= xlarge_gpu .+ sin.(ylarge_gpu)

 # ╔═╡ cfeda4c2-5881-4ae3-a220-ae8f7511d79f
 md"""
-## N:
+## N:
 !!! warning ""
-*
+*
 """

 # ╔═╡ 00000000-0000-0000-0000-000000000001
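The last hunk's context line, `outlarge_gpu .= xlarge_gpu .+ sin.(ylarge_gpu)`, is Julia's fused dot-broadcasting. The same syntax works on ordinary CPU arrays; a sketch with plain `Array`s (the notebook's `_gpu` variables are GPU arrays, which need CUDA-capable hardware but take the identical line):

```julia
x, y = rand(10^4), rand(10^4)
out = similar(x)

# All dotted operations in one expression fuse into a single loop:
# one pass over the data, no temporary arrays allocated. On a GPU
# array the same fused expression compiles to a single GPU kernel.
out .= x .+ sin.(y)

out ≈ x .+ sin.(y)    # true
```

The in-place `.=` is what makes the GPU version efficient: the result lands directly in preallocated device memory instead of allocating a fresh array per expression.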
