A Senior iOS Developer's Predictions for WWDC 2020

WWDC 2020 is almost here, and it’s Apple’s first online-only developer event given the current health situation. We’ll surely miss the applause from a live audience, and it’ll be interesting to see how Apple recreates that experience in its keynote and videos.

But what’s more exciting is the new goodies and updates that Apple will present to the developer community and world this year.

WWDC 2019 was unarguably the biggest Apple event in recent years, as it unveiled groundbreaking changes: system-level dark mode; a new iPadOS 13 with multi-window support; Project Catalyst, which lets you port iOS apps to macOS in a single click; and SwiftUI, Apple's shiny new declarative UI framework.

This sets the bar high for “Dub Dub” 2020, and at a general level, one would expect enhancements and improvements across the developer ecosystem rather than a slew of massive new features.

As a developer of macOS, tvOS, and iPadOS, I’m excited and have a wish list of improvements and features I’d like to see from the Apple conference.

CoreML, CreateML, and Vision Framework were bolstered with huge updates last year while the RealityKit, SwiftUI, and Combine frameworks made their debuts and stole the show. I’d expect these six frameworks to be the key players this time around as well, with others revolving around them.

CoreML: Support for More Layers

CoreML introduced on-device transfer learning at WWDC 19, allowing us to retrain models on our iOS devices. Currently, it supports only k-nearest-neighbor and CNN classifiers. One would expect Apple to expand the set of supported layers by including recurrent neural network (RNN) layers such as LSTM, providing more built-in optimizers (currently just Adam and SGD), and offering a way to create custom loss functions.

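To ground this, here's a minimal sketch of what today's on-device personalization API looks like with MLUpdateTask. The model URL and batch provider are hypothetical placeholders; the model itself would need to be exported as updatable (e.g., a k-NN classifier from CreateML or coremltools).

```swift
import CoreML

// Hedged sketch: retrain an updatable model (e.g., a k-NN classifier)
// on-device. `modelURL` and `batchProvider` are assumed to be supplied
// by the caller; the compiled .mlmodelc must be marked updatable.
func personalize(modelURL: URL, batchProvider: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,          // compiled model on disk
        trainingData: batchProvider,   // user-collected training examples
        configuration: nil,            // default MLModelConfiguration
        completionHandler: { context in
            // Persist the personalized model for future predictions.
            try? context.model.write(to: modelURL)
        }
    )
    task.resume()
}
```

Expanded layer support would slot into this same flow: the update task stays the same, only the set of trainable layers and optimizers inside the model grows.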
The ability to train RNN models should help developers easily build interesting mobile machine-learning apps, like stock-price predictions (though only for educational or entertainment purposes, not investment advice).

CreateML: Training Custom Image-Segmentation Models and More

CreateML finally got its own standalone application, outside of Playgrounds, last time. It now allows non-machine-learning developers to quickly train models with an amazing drag-and-drop interface, which boasts built-in model trainers for image classification, object detection, and recommendation systems, among many others.

Expanding the training possibilities to include image segmentation and pose estimation is at the top of my wish list. A built-in annotation tool, ideally one that lets you select objects from a video, would make training object-detection models a whole lot easier and possible to do completely in-house; today you need to supply your annotation.json file from outside the tool.

A More Powerful Vision Framework

Vision is Apple’s deep-learning framework that provides out-of-the-box implementations of complex computer-vision algorithms. It also helps drive CoreML models. WWDC 19 introduced pet-animal classifiers, a built-in image-classification tool, image-similarity computation, and bolstered face technology with capture-quality requests.

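As a reference point for that last item, here's a hedged sketch of the WWDC 19-era face-capture-quality request, which scores how well a face is captured (lighting, blur, pose). The input image is assumed to be provided by the caller.

```swift
import Vision

// Sketch: score face-capture quality in an image using the request
// Vision gained at WWDC 19. `cgImage` is an assumed input.
func faceCaptureQualityScores(in cgImage: CGImage) throws -> [Float] {
    let request = VNDetectFaceCaptureQualityRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    // Each face observation carries an optional quality score in [0, 1].
    return (request.results as? [VNFaceObservation])?
        .compactMap { $0.faceCaptureQuality } ?? []
}
```

A generic image-quality request would presumably follow the same handler/request shape, just without being tied to detected faces.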
This year, Apple could push the computer-vision envelope even further by:

今年,苹果可以通过以下方式进一步推动计算机视觉领域的发展:

  • Expanding face-capture quality into a more generic image-quality Vision request
  • Introducing movement tracking and a built-in way to determine objects’ distance from the camera, opening possibilities for smarter applications
  • Bringing image registration and a built-in image-segmentation request to enable better digital-image processing, which could be useful for medical use cases

These are some of the things I’d wish to see in the Vision framework updates this year.

Huge Updates in RealityKit and Reality Composer

With the introduction of RealityKit, a 3D engine, Apple rebooted its augmented-reality (AR) strategy, which had earlier relied on SceneKit.

But it suffered from scarce documentation. Apple will surely look to work on that in order to bring developers onto the AR train. A new AR app in iOS 14, possibly showcasing object occlusion, should be a good start. Out-of-the-box support for hand-gesture tracking would help open the door for exciting AR apps.

Reality Composer is an AR scene editor that lets you create, import, and customize 3D content. More flexibility in blending shapes with pivots, and a way to convert GLTF and other model file types to USDZ, would help.

RealityKit will play a huge role in the coming years given Apple’s big aspirations with AR glasses. We can expect huge updates this year, and it could just be curtains for SceneKit in AR.

More Combine Publishers

Combine, Apple’s own declarative reactive-programming framework, was possibly the most polished framework on debut last year. Several Foundation types already expose publisher functionality through it.

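For context, a small sketch of two Foundation-level publishers Combine already ships with, URLSession's data-task publisher and NotificationCenter's publisher (the URL here is a placeholder):

```swift
import Combine
import Foundation
import UIKit

// Sketch of existing Foundation/UIKit integration with Combine.
var cancellables = Set<AnyCancellable>()

// Fetch data reactively instead of via completion handlers.
URLSession.shared
    .dataTaskPublisher(for: URL(string: "https://example.com")!)
    .map(\.data)
    .replaceError(with: Data())
    .sink { data in
        print("Received \(data.count) bytes")
    }
    .store(in: &cancellables)

// Observe notifications as a stream of values.
NotificationCenter.default
    .publisher(for: UIApplication.didBecomeActiveNotification)
    .sink { _ in print("App became active") }
    .store(in: &cancellables)
```

Built-in publishers for MapKit, PencilKit, CoreLocation, or CoreML would presumably follow this same pattern, replacing delegate callbacks with value streams.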
This year, Apple could introduce more built-in publishers for MapKit, PencilKit, CoreLocation, and CoreML and provide new merging operators.

Machine Learning–Powered PencilKit

The PencilKit framework also made its debut last year, but the reception it got was fairly low-key, as others stole the limelight. Currently, it lets you integrate only the canvas and ink tools.

Apple could propel it forward this year by allowing seamless integration of images and assets in the PKCanvas. Boosting the drawing framework with machine learning, either by incorporating built-in handwriting text recognition or point detection, could create a stir and let developers build even more exciting machine-learning applications.

SwiftUI Might Be the Show Stealer Once Again

SwiftUI created a storm last time around, and this year may be no different. The state-driven framework is still a little rough around the edges, and the lack of good documentation coupled with missing UIKit functionality has already set high expectations for SwiftUI 2.0. No wonder it’ll be the most talked-about feature during and after the WWDC 2020 week.

With lots riding on its shoulders, here’s my SwiftUI wish list and improvements I’d like to see.

Collection views in SwiftUI

Collection views got an interesting update with the new compositional layouts, but they were missing from SwiftUI altogether. The declarative nature of compositional layouts should bring some kind of collection view to SwiftUI this year, letting developers build complex grid-based user interfaces.

Bring the missing UI views

The first version of SwiftUI was devoid of activity indicators, search bars, multiline text views, and refresh controls. We had to leverage either the UIViewRepresentable protocol and UIKit interoperability, or GeometryReader and shapes (for the progress bar), to build equivalent SwiftUI wrapper views.

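The workaround described above looks roughly like this in practice, a UIKit spinner wrapped for SwiftUI 1.0 (the type name is my own; any conforming struct works):

```swift
import SwiftUI
import UIKit

// Sketch of the UIViewRepresentable workaround: wrapping
// UIActivityIndicatorView so SwiftUI gets an activity indicator.
struct ActivityIndicator: UIViewRepresentable {
    @Binding var isAnimating: Bool

    func makeUIView(context: Context) -> UIActivityIndicatorView {
        UIActivityIndicatorView(style: .large)
    }

    func updateUIView(_ uiView: UIActivityIndicatorView, context: Context) {
        // Keep UIKit state in sync with the SwiftUI binding.
        isAnimating ? uiView.startAnimating() : uiView.stopAnimating()
    }
}
```

It works, but every missing control needs its own wrapper like this, which is exactly the boilerplate native SwiftUI views would eliminate.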
Improvements in tab views and navigation views

While the basic tab view and navigation view seem to work, there have been some grave underlying issues. For instance, SwiftUI’s tab view has been a cause of concern when switching tabs. Neither the scroll nor the navigation position is preserved, and you’d have to fall back on UITabBarController.

On the navigation front, the biggest issue was the handling of destination views in NavigationLink: they were created before you even tapped the navigation link. We’d expect consistently lazy loading of destination views (Apple fixed it in Xcode 11.4.1, but it’s still inconsistent).

We’d expect Apple to fix these issues and probably come up with a better means of navigation for SwiftUI.

Scroll view’s current position

The ability to access the current scroll-view position in SwiftUI is missing. We can expect Apple to introduce a binding property for the position offset.

Easier integration with other frameworks

WKWebView, MapKit, PencilKit, and ARKit views, along with the share sheet, currently all require UIViewRepresentable. Better interoperability with SwiftUI would accelerate workflows and reduce boilerplate code.

Overhaul CoreData

Just like SwiftUI changed the way we construct user interfaces, it’s high time Apple introduced something like SwiftData that’s built in pure Swift rather than Objective-C.

SwiftUI storyboards

While SwiftUI presents real-time canvas previews, having an app-level view that shows how the different SwiftUI views are connected — like a storyboard — would be a good addition and would help developers use SwiftUI in production.

Closing Thoughts

While this comprises my primary wish list, an Xcode for iPad would be a dream, considering we got Playgrounds last time.

A TestFlight for macOS and visual, po-like debugging in Xcode can’t be ruled out either.

One can expect considerable changes in the AVFoundation API (considering we now have triple-camera systems or more), as well as in the Location and Bluetooth APIs.

Like the last time, SwiftUI and RealityKit would be the front runners during WWDC 20 unless Apple surprises us, once again.

Thanks for reading. I’m looking forward to WWDC 2020 and beyond!

Translated from: https://medium.com/better-programming/a-senior-ios-developers-predictions-for-wwdc-2020-fb727a4b45e6
