【visionOS】Create the first visionOS program from scratch

Foreword: I was originally reading up on BonjourWeb, but I found myself drawn to Apple's visionOS, because this concept product is genuinely cutting-edge and novel.
Maybe I will give it a try when the time comes. Let's learn the basics first.

Create a new world of apps and games for Apple Vision Pro.

Introducing visionOS

visionOS is the operating system of Apple Vision Pro. Use visionOS with familiar tools and technologies to build immersive apps and games for spatial computing.

To develop software for visionOS, you need a Mac with Apple silicon. With it, you can create new apps using SwiftUI and take full advantage of the immersion that visionOS provides.

Plus, if you have an existing iPad or iPhone app, adding visionOS support gives it a better, more realistic look and feel, and lets you add platform-specific features to create compelling experiences.

Expand your app into immersive spaces

Start with a familiar window-based experience to introduce people to your content. From there, add visionOS-specific SwiftUI scene types such as volumes and spaces. These scene types let you incorporate depth, 3D objects and an immersive experience.

Use RealityKit and Reality Composer Pro to build your app's 3D content, and use RealityView to display it. Use ARKit to integrate your content with people's surroundings in an immersive experience.


Explore new ways to interact

People can select an element by looking at it and tapping their fingers together. They can also use specific hand gestures to zoom, drag, and rotate objects. SwiftUI provides built-in support for these standard gestures, so most of your app's input can rely on them. When you want to go beyond the standard gestures, use ARKit to create custom ones.


Dive into the featured sample apps

Explore the core concepts of every visionOS app with Hello World. Learn how to detect custom gestures using ARKit with Happy Beam. Discover streaming 2D and stereoscopic media with Destination Video. And learn how to build 3D scenes with RealityKit and Reality Composer Pro in Diorama and Swift Splash.

Create your first visionOS application

If you are new to visionOS, start with a new Xcode project to learn the platform's features and become familiar with visionOS content and technologies. SwiftUI is a great choice when you're building apps for visionOS because it gives you full access to visionOS features. While you can also use UIKit to build parts of your app, you'll need SwiftUI to implement many features unique to the platform.

Developing software for visionOS requires a Mac with an Apple chip.

In any SwiftUI app, you can use scenes to put content on the screen. A scene contains views and controls to be displayed on the screen. Scenes also define how these views and controls look when they appear on the screen. In visionOS, you can include 2D and 3D views in the same scene, and these views can be rendered in a window or as part of the person's surroundings.

Figure 1: A scene with a window
Figure 2: A scene with a window and a 3D object

Start with a new Xcode project and add some features to become familiar with visionOS content and technology. Run your app in the simulator to verify that your content looks like you expect, and run it on a device to see your 3D content come to life.

Organize content around one or more scenes, which govern your application's interface. Each scene contains views and controls to be displayed, and the scene type determines whether the content takes on a 2D or 3D appearance. SwiftUI adds 3D scene types to visionOS, as well as 3D elements and layout options for all scene types.

Create your Xcode project

Choose File > New > Project in Xcode. Navigate to the visionOS section of the template chooser and select the App template. When prompted, give the project a name and set the other options.

When creating a new visionOS app, you can configure the app's initial scene type from the configuration dialog. To display primarily 2D content in the initial scene, select Window as the initial scene type. For primarily 3D content, choose Volume. You can also add an immersive scene to place your content in the person's surroundings.

Include a Reality Composer Pro project file when you want to create 3D assets or scenes to display from within your app. Use this project file to build content from primitive shapes and existing USDZ assets. You can also use it to build and test custom RealityKit animations and behaviors for your content.

Modify the existing window

Build the initial interface using standard SwiftUI views. Views provide the basic content for your interface, and you can customize the appearance and behavior of views using SwiftUI modifiers. For example, the .background modifier adds a partially transparent tint behind your content:

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .background(.black.opacity(0.8))
        }

        ImmersiveSpace(id: "Immersive") {
            ImmersiveView()
        }
    }
}

Handle events in your views

Many SwiftUI views handle interactions automatically—all you have to do is provide code that runs when interactions occur. You can also add SwiftUI gesture recognizers to your view to handle tap, long press, drag, rotate, and zoom gestures. The system automatically maps the following types of input to your SwiftUI event handling code:

This photo shows controls in the corner of a window and a top-down overlay view of a person sitting in a chair with hands on knees.

Indirect input. A person's eyes indicate the target of the interaction. To start an interaction, the person touches thumb and index finger together on one or both hands. Additional finger and hand movements define the gesture type.

The picture shows the virtual 3D keyboard. This person's right hand is hitting the J key.

Direct input. An interaction is reported when a person's finger occupies the same space as an onscreen item. Additional finger and hand movements define the gesture type.

This photo shows a person's hands typing on a physical keyboard on a table. A virtual suggestion bar appears above the physical keyboard.

Keyboard input. People can use a connected mouse, trackpad, or keyboard to interact with items, trigger menu commands, and perform gestures.
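As a minimal sketch, a view can respond to all of these input types with the same SwiftUI gesture APIs used on other platforms; the view and state names below are illustrative:

```swift
import SwiftUI

struct GestureDemoView: View {
    @State private var offset = CGSize.zero

    var body: some View {
        Circle()
            .frame(width: 100, height: 100)
            .offset(offset)
            // A tap arrives the same way for indirect (look-and-pinch),
            // direct (touch), or pointer input.
            .onTapGesture {
                print("Tapped")
            }
            // A drag also works regardless of the input method.
            .gesture(
                DragGesture()
                    .onChanged { value in offset = value.translation }
                    .onEnded { _ in offset = .zero }
            )
    }
}
```

Because the system maps each input type to the same gesture events, this one handler covers eyes-and-pinch, direct touch, and trackpad input without any platform-specific branching.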

Build and run your app

Build and run your app in the simulator to see how it looks. The visionOS simulator has a virtual background that acts as a backdrop for your app's content. Use your keyboard, mouse, or trackpad to navigate the environment and interact with your app.

Click and drag the window bar below the app's content to reposition the window within the environment. Move the pointer over the circle next to the window bar to reveal the window's close button. Move the pointer to a corner of the window to turn the window bar into a resize control.

Tip: Your app cannot control the position of its windows in space. The system places each window at an initial position and updates that position in response to the person's further interactions with the app.

Add 3D content to your application

Add depth and dimension to your visionOS apps and discover how your app content blends into people's surroundings.

Devices with stereoscopic displays allow people to experience 3D content in a way that feels more realistic. There seems to be real depth to the content and people can view it from different angles, making it appear to be right in front of them.

When building an app for visionOS, consider how you can add depth to your app's interface. The system offers several ways to display 3D content, including in an existing window, in a volume, and in an immersive space. Choose the option that best suits your app and the content you offer.

Add depth to traditional 2D windows

Windows are an important part of the application interface. With visionOS, apps automatically get materials with the visionOS look and feel, fully resizable windows, spacing adjustments for eye and hand input, and highlight adjustments for your custom controls.

Incorporate depth effects into custom views as needed, and use 3D layout options to arrange views in the window.

  • Apply the shadow(color:radius:x:y:) or visualEffect(_:) modifier to a view.

  • Lift or highlight a view when someone looks at it using the hoverEffect(_:isEnabled:) modifier.

  • Use a ZStack to lay out views.

  • Animate view-related changes with transform3DEffect(_:).

  • Rotate a view using the rotation3DEffect(_:axis:anchor:anchorZ:perspective:) modifier.
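A minimal sketch combining a few of these modifiers on a simple card; the view content itself is illustrative:

```swift
import SwiftUI

struct DepthCardView: View {
    var body: some View {
        Text("Hello, visionOS")
            .padding()
            .background(.regularMaterial)
            // A soft shadow hints at depth behind the card.
            .shadow(color: .black.opacity(0.4), radius: 8, x: 0, y: 4)
            // Tilt the card slightly around the x-axis.
            .rotation3DEffect(.degrees(10), axis: (x: 1, y: 0, z: 0))
            // Lift the card when someone looks at or points to it.
            .hoverEffect()
    }
}
```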

In addition to giving 2D views more depth, you can add static 3D models to your 2D windows. The Model3D view loads a USDZ file or other supported asset and displays it in the window at its intrinsic size. Use it where your app already has model data, or to display a model downloaded from the web. For example, a shopping app might use this type of view to display 3D versions of its products.
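A sketch of that shopping-app idea: a view that loads a bundled model, with a placeholder while it loads. The asset name "toy_car" is hypothetical:

```swift
import SwiftUI
import RealityKit

struct ProductView: View {
    var body: some View {
        // "toy_car" is a hypothetical USDZ asset in the app bundle.
        Model3D(named: "toy_car") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            // Shown while the asset loads.
            ProgressView()
        }
    }
}
```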

Display dynamic 3D scenes using RealityKit

RealityKit is Apple's technology for creating 3D models and scenes that you can dynamically update on your screen. In visionOS, RealityKit and SwiftUI are used together to seamlessly couple an application’s 2D and 3D content. Load existing USDZ assets or create scenes in Reality Composer Pro to incorporate animation, physics, lighting, sounds and custom behaviors for your content. To use a Reality Composer Pro project in your app, add the Swift package to your Xcode project and import its module in your Swift file.

Use RealityView when you are ready to display 3D content in your interface. This SwiftUI view serves as a container for your RealityKit content and allows you to update the content using familiar SwiftUI techniques.

The following example shows a view using RealityView to display a 3D sphere. The code in the view closure creates a RealityKit entity for the sphere, applies a texture to the sphere's surface, and adds the sphere to the view's contents.

struct SphereView: View {
    var body: some View {
        RealityView { content in
            let model = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .white, isMetallic: true)])
            content.add(model)
        }
    }
}

When SwiftUI displays your RealityView, it executes your code once to create entities and other content. Since creating entities is relatively expensive, the view only runs the creation code once. When you want to update an entity's state, change the view's state and use an update closure to apply those changes to the content. The following example uses an update closure to change the size of the sphere when the value of the scale property changes:

struct SphereView: View {
    var scale = false

    var body: some View {
        RealityView { content in
            let model = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .white, isMetallic: true)])
            content.add(model)
        } update: { content in
            if let model = content.entities.first {
                model.transform.scale = scale ? [1.2, 1.2, 1.2] : [1.0, 1.0, 1.0]
            }
        }
    }
}

Respond to interactions with RealityKit content

To handle interactions with the entities in a RealityKit scene:

  • Attach a gesture recognizer to your RealityView and add the targetedToAnyEntity() modifier to it.

  • Add an InputTargetComponent to the entity or one of its parent entities.

  • Add collision shapes to the RealityKit entities that support interaction.

The targetedToAnyEntity() modifier provides a bridge between the gesture recognizer and the RealityKit content. For example, to detect when someone drags an entity, specify a DragGesture and add the modifier to it. SwiftUI executes the provided closure when the specified gesture occurs on the entity.

The following example adds a tap gesture recognizer to the sphere view from the previous example. The code also adds the InputTargetComponent and CollisionComponent components to the entity to allow interactions to occur. If you omit these components, the view doesn't detect interactions with the entity.

struct SphereView: View {
    @State private var scale = false

    var body: some View {
        RealityView { content in
            let model = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .white, isMetallic: true)])

            // Enable interactions on the entity.
            model.components.set(InputTargetComponent())
            model.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
            content.add(model)
        } update: { content in
            if let model = content.entities.first {
                model.transform.scale = scale ? [1.2, 1.2, 1.2] : [1.0, 1.0, 1.0]
            }
        }
        .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
            scale.toggle()
        })
    }
}

Display 3D content in a volume

A volume is a type of window that grows in three dimensions to match the size of the content it contains. Windows and volumes can both hold 2D and 3D content and are similar in many ways. However, windows clip 3D content that extends too far from the window's surface, so a volume is the better choice for content that is primarily 3D.

To create a volume, add a WindowGroup scene to your app and set its style to volumetric. This style tells SwiftUI to create a window for your 3D content. Include any 2D or 3D views you want in the volume. You can also add a RealityView to build your content with RealityKit. The following example creates a volume containing a static 3D model of some balloons, stored in the app's bundle:

struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            Model3D(named: "balloons")
        }.windowStyle(.volumetric)
    }
}

Windows and volumes are convenient ways to display limited 2D and 3D content, but your app can't control the position of the content around people. The system sets the initial position of each window and volume at display time. The system also adds a window bar that allows users to reposition or resize windows.

Display 3D content in a person's surroundings

When you need more control over the placement of your app's content, add an ImmersiveSpace. An immersive space provides an unbounded area for your content, and you control the size and position of the content within the space. With the person's permission, you can also use ARKit in an immersive space to integrate your content with their surroundings. For example, you can use ARKit scene reconstruction to obtain a mesh of nearby furniture and objects and have your content interact with that mesh.

ImmersiveSpace is a scene type that you can create alongside the other scenes in your app. The following example shows an app that contains an immersive space and a window:

@main
struct MyImmersiveApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "solarSystem") {
            SolarSystemView()
        }
    }
}

If you don't add a style modifier to your ImmersiveSpace declaration, the system creates the space with the mixed style. This style displays your content alongside passthrough, which shows the person's surroundings. Other styles let you hide passthrough to varying degrees. Use the immersionStyle(selection:in:) modifier to specify the styles the space supports. If you specify multiple styles, you can switch between them using the modifier's selection argument.
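For instance, a space could support both the mixed and full styles and switch between them through a state binding. This is a sketch; the scene and view names are illustrative:

```swift
import SwiftUI

struct SolarSystemSpace: Scene {
    // Tracks the currently active style; the space starts in mixed.
    @State private var style: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "solarSystem") {
            SolarSystemView()
        }
        // The space supports mixed and full; assigning to `style`
        // switches between them at runtime.
        .immersionStyle(selection: $style, in: .mixed, .full)
    }
}
```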

Be careful about how much content you include in an immersive space that uses the mixed style. Content that occupies a large portion of the field of view, even partially transparent content, can prevent people from seeing potential hazards in their surroundings. If you want people to be fully immersed in your content, configure your space with the full style. For more information, see Creating fully immersive experiences in your app.

Remember to set the position of the items you place in an ImmersiveSpace. Use modifiers to position SwiftUI views, and transform components to position RealityKit entities. SwiftUI initially places the space's origin at the person's feet, but the system can move that origin in response to other events. For example, the system might move the origin to accommodate a SharePlay activity that displays content with spatial Personas. If you need to position a SwiftUI view relative to a RealityKit entity, use the methods of the RealityView's content parameter to perform the necessary coordinate conversions.
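As a sketch of entity positioning, the following places a sphere one meter in front of the space's origin at roughly eye height; the view name and dimensions are illustrative:

```swift
import SwiftUI
import RealityKit

struct SolarSystemView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.2),
                materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            // Coordinates are in meters, relative to the space's origin
            // at the person's feet: x = centered, y = 1.5 m up
            // (roughly eye height), z = -1 m (one meter in front).
            sphere.position = [0, 1.5, -1]
            content.add(sphere)
        }
    }
}
```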

To display your ImmersiveSpace scene, open it using the openImmersiveSpace action, which you get from the SwiftUI environment. The action runs asynchronously and uses the provided information to find and initialize your scene. The following example shows a button that opens the space with the solarSystem identifier:

Button("Show Solar System") {
    Task {
        let result = await openImmersiveSpace(id: "solarSystem")
        if case .error = result {
            print("An error occurred")
        }
    }
}

When an app presents an ImmersiveSpace, the system hides the content of other apps to prevent visual conflicts. Your space stays visible until you dismiss it, and the other apps' content returns when you do. If your app defines multiple spaces, you must dismiss the currently visible space before displaying a different one. If you don't dismiss the visible space first, the system issues a runtime warning when you try to open another one.
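Switching spaces can therefore look like this sketch, which uses the dismissImmersiveSpace environment action before opening the next space; the "galaxy" identifier is hypothetical:

```swift
import SwiftUI

struct SpaceSwitcherView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Button("Switch to Galaxy") {
            Task {
                // Dismiss the currently visible space first...
                await dismissImmersiveSpace()
                // ...then open the other one ("galaxy" is a hypothetical id).
                _ = await openImmersiveSpace(id: "galaxy")
            }
        }
    }
}
```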


Origin blog.csdn.net/weixin_43233219/article/details/133778314