docs/getting-started/ubuntu-instructions.md (+1, -1)
@@ -7,7 +7,7 @@ These instructions will show you how to run a .NET for Apache Spark app using .N
- Download and install the following: **[.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/2.1)** | **[OpenJDK 8](https://openjdk.java.net/install/)** | **[Apache Spark 2.4.1](https://archive.apache.org/dist/spark/spark-2.4.1/spark-2.4.1-bin-hadoop2.7.tgz)**
- Download and install **[Microsoft.Spark.Worker](https://github.com/dotnet/spark/releases)** release:
- Select a **[Microsoft.Spark.Worker](https://github.com/dotnet/spark/releases)** release from .NET for Apache Spark GitHub Releases page and download into your local machine (e.g., `~/bin/Microsoft.Spark.Worker`).
-**IMPORTANT** Create a [new environment variable](https://help.ubuntu.com/community/EnvironmentVariables)`DotnetWorkerPath` and set it to the directory where you downloaded and extracted the Microsoft.Spark.Worker (e.g., `~/bin/Microsoft.Spark.Worker`).
+**IMPORTANT** Create a [new environment variable](https://help.ubuntu.com/community/EnvironmentVariables)`DOTNET_WORKER_DIR` and set it to the directory where you downloaded and extracted the Microsoft.Spark.Worker (e.g., `~/bin/Microsoft.Spark.Worker`).
For detailed instructions, you can see [Building .NET for Apache Spark from Source on Ubuntu](../building/ubuntu-instructions.md).
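
The renamed variable from the Ubuntu diff above can be set with a plain `export`. A minimal sketch, assuming the worker was extracted to the example path used in these instructions:

```bash
# Point .NET for Apache Spark at the extracted Microsoft.Spark.Worker directory.
# The path below is the example location from the docs; adjust it to your setup.
export DOTNET_WORKER_DIR=~/bin/Microsoft.Spark.Worker

# Optionally persist the variable for future shells.
echo 'export DOTNET_WORKER_DIR=~/bin/Microsoft.Spark.Worker' >> ~/.bashrc
```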

docs/getting-started/windows-instructions.md (+1, -1)
@@ -7,7 +7,7 @@ These instructions will show you how to run a .NET for Apache Spark app using .N
- Download and install the following: **[.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/2.1)** | **[Visual Studio 2019](https://www.visualstudio.com/downloads/)** | **[Java 1.8](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)** | **[Apache Spark 2.4.1](https://archive.apache.org/dist/spark/spark-2.4.1/spark-2.4.1-bin-hadoop2.7.tgz)**
- Download and install **[Microsoft.Spark.Worker](https://github.com/dotnet/spark/releases)** release:
- Select a **[Microsoft.Spark.Worker](https://github.com/dotnet/spark/releases)** release from .NET for Apache Spark GitHub Releases page and download into your local machine (e.g., `c:\bin\Microsoft.Spark.Worker\`).
-**IMPORTANT** Create a [new environment variable](https://www.java.com/en/download/help/path.xml)`DotnetWorkerPath` and set it to the directory where you downloaded and extracted the Microsoft.Spark.Worker (e.g., `c:\bin\Microsoft.Spark.Worker`).
+**IMPORTANT** Create a [new environment variable](https://www.java.com/en/download/help/path.xml)`DOTNET_WORKER_DIR` and set it to the directory where you downloaded and extracted the Microsoft.Spark.Worker (e.g., `c:\bin\Microsoft.Spark.Worker`).
For detailed instructions, you can see [Building .NET for Apache Spark from Source on Windows](../building/windows-instructions.md).
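
On Windows, the same renamed variable can be created from a Command Prompt. A minimal sketch, assuming the worker was extracted to the example path used in these instructions:

```cmd
:: Point .NET for Apache Spark at the extracted Microsoft.Spark.Worker directory.
:: The path below is the example location from the docs; adjust it to your setup.
setx DOTNET_WORKER_DIR "c:\bin\Microsoft.Spark.Worker"
```

`setx` only affects new sessions, so open a fresh Command Prompt (or also run `set DOTNET_WORKER_DIR=c:\bin\Microsoft.Spark.Worker` in the current one) before running your app.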
Below are some of the highlights from this release.

* [Apache Spark 2.4.3](https://spark.apache.org/news/spark-2-4-3-released.html) support ([#118](https://github.com/dotnet/spark/pull/108))
* dotnet/spark is now using [dotnet/arcade](https://github.com/dotnet/arcade) as the build infrastructure ([#113](https://github.com/dotnet/spark/pull/113))
* [Source Link](https://github.com/dotnet/sourcelink) is now supported for the NuGet package ([#40](https://github.com/dotnet/spark/issues/40)).
* Fixed the issue where Microsoft.Spark.dll is not signed ([#119](https://github.com/dotnet/spark/issues/119)).
* Pickling performance is improved ([#111](https://github.com/dotnet/spark/pull/111)).
* Performance improvement PRs in the Pickling Library: [irmen/Pyrolite#64](https://github.com/irmen/Pyrolite/pull/64), [irmen/Pyrolite#67](https://github.com/irmen/Pyrolite/pull/67)
* ArrayType and MapType are supported as UDF return types ([#112](https://github.com/dotnet/spark/issues/112#issuecomment-493297068), [#114](https://github.com/dotnet/spark/pull/114))
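The ArrayType/MapType item above is easiest to see in code. A minimal C# sketch (not taken from this PR; the app name, SQL literal, and column names are illustrative) of a UDF whose `string[]` return value maps to Spark's `ArrayType`; a `Dictionary<,>` return type would correspond to `MapType` in the same way:

```csharp
// Minimal sketch of an ArrayType-returning UDF with Microsoft.Spark.
// Assumes a working Spark + Microsoft.Spark.Worker setup as in the getting-started docs;
// the app name, SQL literal, and column names are illustrative only.
using System;
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

class ArrayUdfExample
{
    static void Main()
    {
        SparkSession spark = SparkSession
            .Builder()
            .AppName("array-udf-example")
            .GetOrCreate();

        // A tiny one-row DataFrame with a single string column.
        DataFrame df = spark.Sql("SELECT 'hello dotnet spark' AS sentence");

        // A string[] return type is what maps to Spark's ArrayType.
        Func<Column, Column> splitWords = Udf<string, string[]>(s => s.Split(' '));

        df.Select(splitWords(df["sentence"]).Alias("words")).Show();

        spark.Stop();
    }
}
```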
### Supported Spark Versions
The following table outlines the supported Spark versions along with the microsoft-spark JAR to use with each: