Redux Essentials, Part 8: RTK Query Advanced Patterns
How to use tags with IDs to manage cache invalidation and refetching
How to work with the RTK Query cache outside of React
Techniques for manipulating response data
Implementing optimistic updates and streaming updates
Introduction
In Part 7: RTK Query Basics, we saw how to set up and use the RTK Query API to handle data fetching and caching in our application. We added an "API slice" to our Redux store, defined "query" endpoints to fetch posts data, and a "mutation" endpoint to add a new post.
In this section, we'll continue migrating our example app to use RTK Query for the other data types, and see how to use some of its advanced features to simplify the codebase and improve user experience.
Some of the changes in this section aren't strictly necessary - they're included to demonstrate RTK Query's features and show some of the things you can do, so you can see how to use these features if you need them.
Editing Posts
We've already added a mutation endpoint to save new Post entries to the server, and used that in our <AddPostForm>. Next, we need to handle updating the <EditPostForm> to let us edit an existing post.
Updating the Edit Post Form
As with adding posts, the first step is to define a new mutation endpoint in our API slice. This will look much like the mutation for adding a post, but the endpoint needs to include the post ID in the URL and use an HTTP PATCH request to indicate that it's updating some of the fields.
export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
  tagTypes: ['Post'],
  endpoints: builder => ({
    getPosts: builder.query({
      query: () => '/posts',
      providesTags: ['Post']
    }),
    getPost: builder.query({
      query: postId => `/posts/${postId}`
    }),
    addNewPost: builder.mutation({
      query: initialPost => ({
        url: '/posts',
        method: 'POST',
        body: initialPost
      }),
      invalidatesTags: ['Post']
    }),
    editPost: builder.mutation({
      query: post => ({
        url: `/posts/${post.id}`,
        method: 'PATCH',
        body: post
      })
    })
  })
})

export const {
  useGetPostsQuery,
  useGetPostQuery,
  useAddNewPostMutation,
  useEditPostMutation
} = apiSlice
Once that's added, we can update the <EditPostForm>. It needs to read the original Post entry from the store, use that to initialize the component state to edit the fields, and then send the updated changes to the server. Currently, we're reading the Post entry with selectPostById, and manually dispatching a postUpdated thunk for the request.
We can use the same useGetPostQuery hook that we used in <SinglePostPage> to read the Post entry from the cache in the store, and we'll use the new useEditPostMutation hook to handle saving the changes.
import React, { useState } from 'react'
import { useHistory } from 'react-router-dom'

import { Spinner } from '../../components/Spinner'
import { useGetPostQuery, useEditPostMutation } from '../api/apiSlice'

export const EditPostForm = ({ match }) => {
  const { postId } = match.params

  const { data: post } = useGetPostQuery(postId)
  const [updatePost, { isLoading }] = useEditPostMutation()

  const [title, setTitle] = useState(post.title)
  const [content, setContent] = useState(post.content)

  const history = useHistory()

  const onTitleChanged = e => setTitle(e.target.value)
  const onContentChanged = e => setContent(e.target.value)

  const onSavePostClicked = async () => {
    if (title && content) {
      await updatePost({ id: postId, title, content })
      history.push(`/posts/${postId}`)
    }
  }

  // omit rendering logic
}
Cache Data Subscription Lifetimes
Let's try this out and see what happens. Open up your browser's DevTools, go to the Network tab, and refresh the main page. You should see a GET request to /posts as we fetch the initial data. When you click on a "View Post" button, you should see a second request to /posts/:postId that returns that single post entry.
Now click "Edit Post" inside the single post page. The UI switches over to show <EditPostForm>, but this time there's no network request for the individual post. Why not?
RTK Query allows multiple components to subscribe to the same data, and will ensure that each unique set of data is only fetched once. Internally, RTK Query keeps a reference counter of active "subscriptions" to each endpoint + cache key combination. If Component A calls useGetPostQuery(42), that data will be fetched. If Component B then mounts and also calls useGetPostQuery(42), it's the exact same data being requested. The two hook usages will return the exact same results, including fetched data and loading status flags.
When the number of active subscriptions goes down to 0, RTK Query starts an internal timer. If the timer expires before any new subscriptions for the data are added, RTK Query will remove that data from the cache automatically, because the app no longer needs the data. However, if a new subscription is added before the timer expires, the timer is canceled, and the already-cached data is used without needing to refetch it.
In this case, our <SinglePostPage> mounted and requested that individual Post by ID. When we clicked on "Edit Post", the <SinglePostPage> component was unmounted by the router, and the active subscription was removed due to unmounting. RTK Query immediately started a "remove this post data" timer. But, the <EditPostPage> component mounted right away and subscribed to the same Post data with the same cache key. So, RTK Query canceled the timer and kept using the same cached data instead of fetching it from the server.
By default, unused data is removed from the cache after 60 seconds, but this can be configured in either the root API slice definition or overridden in the individual endpoint definitions using the keepUnusedDataFor flag, which specifies a cache lifetime in seconds.
Invalidating Specific Items
Our <EditPostForm> component can now save the edited post to the server, but we have a problem. If we click "Save Post" while editing, it returns us to the <SinglePostPage>, but it's still showing the old data without the edits. The <SinglePostPage> is still using the cached Post entry that was fetched earlier. For that matter, if we return to the main page and look at the <PostsList>, it's also showing the old data. We need a way to force a refetch of both the individual Post entry, and the entire list of posts.
Earlier, we saw how we can use "tags" to invalidate parts of our cached data. We declared that the getPosts query endpoint provides a 'Post' tag, and that the addNewPost mutation endpoint invalidates that same 'Post' tag. That way, every time we add a new post, we force RTK Query to refetch the entire list of posts from the getPosts endpoint.
We could add a 'Post' tag to both the getPost query and the editPost mutation, but that would force all the other individual posts to be refetched as well. Fortunately, RTK Query lets us define specific tags, which let us be more selective in invalidating data. These specific tags look like {type: 'Post', id: 123}.
Our getPosts query defines a providesTags field that is an array of strings. The providesTags field can also accept a callback function that receives the result and arg, and returns an array. This allows us to create tag entries based on IDs of data that is being fetched. Similarly, invalidatesTags can be a callback as well.
In order to get the right behavior, we need to set up each endpoint with the right tags:
- getPosts: provides a general 'Post' tag for the whole list, as well as a specific {type: 'Post', id} tag for each received post object
- getPost: provides a specific {type: 'Post', id} object for the individual post object
- addNewPost: invalidates the general 'Post' tag, to refetch the whole list
- editPost: invalidates the specific {type: 'Post', id} tag. This will force a refetch of both the individual post from getPost, as well as the entire list of posts from getPosts, because they both provide a tag that matches that {type, id} value.
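The matching rule behind this list can be pictured with a small helper. This is a sketch of our own, not RTK Query's actual code: a general tag like 'Post' invalidates every provided tag of that type, while a specific {type, id} tag only invalidates a provided tag with the same type and id.

```javascript
// Sketch of tag matching semantics (not RTK Query's real implementation).
// A tag is either a plain string ('Post') or an object ({ type: 'Post', id: 5 }).
const normalizeTag = tag => (typeof tag === 'string' ? { type: tag } : tag)

function tagInvalidates(invalidatedTag, providedTag) {
  const a = normalizeTag(invalidatedTag)
  const b = normalizeTag(providedTag)
  if (a.type !== b.type) return false
  // A general tag (no id) invalidates every tag of that type;
  // a specific tag only invalidates the matching id.
  return a.id === undefined || a.id === b.id
}

// A cached endpoint result refetches if any of its provided tags is invalidated
const shouldRefetch = (invalidatedTags, providedTags) =>
  providedTags.some(p => invalidatedTags.some(i => tagInvalidates(i, p)))
```

Under this rule, editPost invalidating [{ type: 'Post', id: 5 }] hits both the getPost(5) entry and the getPosts entry, because the list provides a matching specific tag for each post it received.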
export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
  tagTypes: ['Post'],
  endpoints: builder => ({
    getPosts: builder.query({
      query: () => '/posts',
      providesTags: (result = [], error, arg) => [
        'Post',
        ...result.map(({ id }) => ({ type: 'Post', id }))
      ]
    }),
    getPost: builder.query({
      query: postId => `/posts/${postId}`,
      providesTags: (result, error, arg) => [{ type: 'Post', id: arg }]
    }),
    addNewPost: builder.mutation({
      query: initialPost => ({
        url: '/posts',
        method: 'POST',
        body: initialPost
      }),
      invalidatesTags: ['Post']
    }),
    editPost: builder.mutation({
      query: post => ({
        url: `/posts/${post.id}`,
        method: 'PATCH',
        body: post
      }),
      invalidatesTags: (result, error, arg) => [{ type: 'Post', id: arg.id }]
    })
  })
})
It's possible for the result argument in these callbacks to be undefined if the response has no data or there's an error, so we have to handle that safely. For getPosts we can do that by using a default argument array value to map over, and for getPost we're already returning a single-item array based on the argument ID. For editPost, we know the ID of the post from the partial post object that was passed into the trigger function, so we can read it from there.
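Because these callbacks are plain functions, we can exercise them in isolation to confirm the undefined-result case is handled. The standalone names below are ours; the function bodies mirror the endpoint options shown above.

```javascript
// The providesTags/invalidatesTags callbacks, extracted as plain functions.
// The default `result = []` keeps the map safe when the response failed.
const getPostsProvidesTags = (result = [], error, arg) => [
  'Post',
  ...result.map(({ id }) => ({ type: 'Post', id }))
]

const getPostProvidesTags = (result, error, arg) => [{ type: 'Post', id: arg }]

const editPostInvalidatesTags = (result, error, arg) => [
  { type: 'Post', id: arg.id }
]
```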
With those changes in place, let's go back and try editing a post again, with the Network tab open in the browser DevTools.
When we save the edited post this time, we should see two requests happen back-to-back:
- The PATCH /posts/:postId from the editPost mutation
- A GET /posts/:postId as the getPost query is refetched
Then, if we click back to the main "Posts" tab, we should also see:
- A GET /posts as the getPosts query is refetched
Because we provided the relationships between the endpoints using tags, RTK Query knew that it needed to refetch the individual post and the list of posts when we made that edit and the specific tag with that ID was invalidated - no further changes needed! Meanwhile, as we were editing the post, the cache removal timer for the getPosts data expired, so it was removed from the cache. When we opened the <PostsList> component again, RTK Query saw that it did not have the data in cache and refetched it.
There is one caveat here. By specifying a plain 'Post' tag in getPosts and invalidating it in addNewPost, we actually end up forcing a refetch of all individual posts as well. If we really want to just refetch the list of posts for the getPosts endpoint, you can include an additional tag with an arbitrary ID, like {type: 'Post', id: 'LIST'}, and invalidate that tag instead. The RTK Query docs have a table that describes what will happen if certain general/specific tag combinations are invalidated.
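The 'LIST' convention can be sketched as plain tag arrays. The helper names here are ours, not RTK Query's API; the point is that addNewPost invalidating only the LIST tag refetches the list without touching already-cached individual posts.

```javascript
// Sketch of the { type: 'Post', id: 'LIST' } convention (helper names are ours).
// getPosts provides the LIST tag plus one specific tag per received post:
const providePostListTags = (result = []) => [
  { type: 'Post', id: 'LIST' },
  ...result.map(({ id }) => ({ type: 'Post', id }))
]

// addNewPost only invalidates the LIST tag:
const addNewPostInvalidates = () => [{ type: 'Post', id: 'LIST' }]

// Minimal matcher: specific tags only match the same type + id
const invalidatesEntry = (invalidatedTags, providedTags) =>
  providedTags.some(p =>
    invalidatedTags.some(i => i.type === p.type && i.id === p.id)
  )
```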
RTK Query has many other options for controlling when and how to refetch data, including "conditional fetching", "lazy queries", and "prefetching", and query definitions can be customized in a variety of ways. See the RTK Query usage guide docs for more details on using these features.
Managing Users Data
We've finished converting our posts data management over to use RTK Query. Next up, we'll convert the list of users.
Since we've already seen how to use the RTK Query hooks for fetching and reading data, for this section we're going to try a different approach. RTK Query's core API is UI-agnostic and can be used with any UI layer, not just React. Normally you should stick with using the hooks, but here we're going to work with the user data using just the RTK Query core API so you can see how to use it.
Fetching Users Manually
We're currently defining a fetchUsers async thunk in usersSlice.js, and dispatching that thunk manually in index.js so that the list of users is available as soon as possible. We can do that same process using RTK Query.
We'll start by defining a getUsers query endpoint in apiSlice.js, similar to our existing endpoints. We'll export the useGetUsersQuery hook just for consistency, but for now we're not going to use it.
export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
  tagTypes: ['Post'],
  endpoints: builder => ({
    // omit other endpoints

    getUsers: builder.query({
      query: () => '/users'
    })
  })
})

export const {
  useGetPostsQuery,
  useGetPostQuery,
  useGetUsersQuery,
  useAddNewPostMutation,
  useEditPostMutation
} = apiSlice
If we inspect the API slice object, it includes an endpoints field, with one endpoint object inside for each endpoint we've defined.
Each endpoint object contains:
- The same primary query/mutation hook that we exported from the root API slice object, but named as useQuery or useMutation
- For query endpoints, an additional set of query hooks for scenarios like "lazy queries" or partial subscriptions
- A set of "matcher" utilities to check for the pending/fulfilled/rejected actions dispatched by requests for this endpoint
- An initiate thunk that triggers a request for this endpoint
- A select function that creates memoized selectors that can retrieve the cached result data + status entries for this endpoint
If we want to fetch the list of users outside of React, we can dispatch the getUsers.initiate() thunk in our index file:
// omit other imports
import { apiSlice } from './features/api/apiSlice'

async function main() {
  // Start our mock API server
  await worker.start({ onUnhandledRequest: 'bypass' })

  store.dispatch(apiSlice.endpoints.getUsers.initiate())

  ReactDOM.render(
    <React.StrictMode>
      <Provider store={store}>
        <App />
      </Provider>
    </React.StrictMode>,
    document.getElementById('root')
  )
}
main()
This dispatch happens automatically inside the query hooks, but we can start it manually if needed.
Manually dispatching an RTKQ request thunk will create a subscription entry, but it's then up to you to unsubscribe from that data later - otherwise the data stays in the cache permanently. In this case, we always need user data, so we can skip unsubscribing.
Selecting Users Data
We currently have selectors like selectAllUsers and selectUserById that are generated by our createEntityAdapter users adapter, and are reading from state.users. If we reload the page, all of our user-related display is broken because the state.users slice has no data. Now that we're fetching data for RTK Query's cache, we should replace those selectors with equivalents that read from the cache instead.
The endpoint.select() function in the API slice endpoints will create a new memoized selector function every time we call it. select() takes a cache key as its argument, and this must be the same cache key that you pass as an argument to either the query hooks or the initiate() thunk. The generated selector uses that cache key to know exactly which cached result it should return from the cache state in the store.
In this case, our getUsers endpoint doesn't need any parameters - we always fetch the entire list of users. So, we can create a cache selector with no argument, and the cache key becomes undefined.
import {
  createSlice,
  createEntityAdapter,
  createSelector
} from '@reduxjs/toolkit'
import { apiSlice } from '../api/apiSlice'

/* Temporarily ignore adapter - we'll use this again shortly
const usersAdapter = createEntityAdapter()
const initialState = usersAdapter.getInitialState()
*/

// Calling `someEndpoint.select(someArg)` generates a new selector that will return
// the query result object for a query with those parameters.
// To generate a selector for a specific query argument, call `select(theQueryArg)`.
// In this case, the users query has no params, so we don't pass anything to select()
export const selectUsersResult = apiSlice.endpoints.getUsers.select()

const emptyUsers = []

export const selectAllUsers = createSelector(
  selectUsersResult,
  usersResult => usersResult?.data ?? emptyUsers
)

export const selectUserById = createSelector(
  selectAllUsers,
  (state, userId) => userId,
  (users, userId) => users.find(user => user.id === userId)
)

/* Temporarily ignore selectors - we'll come back to this later
export const {
  selectAll: selectAllUsers,
  selectById: selectUserById,
} = usersAdapter.getSelectors((state) => state.users)
*/
Once we have that initial selectUsersResult selector, we can replace the existing selectAllUsers selector with one that returns the array of users from the cache result, and then replace selectUserById with one that finds the right user from that array.
For now we're going to comment out those selectors from the usersAdapter - we're going to make another change later that switches back to using those.
Our components are already importing selectAllUsers and selectUserById, so this change should just work! Try refreshing the page and clicking through the posts list and single post view. The correct user names should appear in each displayed post, and in the dropdown in the <AddPostForm>.
Since the usersSlice is no longer even being used at all, we can go ahead and delete the createSlice call from this file, and remove users: usersReducer from our store setup. We've still got a couple bits of code that reference postsSlice, so we can't quite remove that yet - we'll get to that shortly.
Injecting Endpoints
It's common for larger applications to "code-split" features into separate bundles, and then "lazy load" them on demand as the feature is used for the first time. We said that RTK Query normally has a single "API slice" per application, and so far we've defined all of our endpoints directly in apiSlice.js. What happens if we want to code-split some of our endpoint definitions, or move them into another file to keep the API slice file from getting too big?
RTK Query supports splitting out endpoint definitions with apiSlice.injectEndpoints(). That way, we can still have a single API slice with a single middleware and cache reducer, but we can move the definition of some endpoints to other files. This enables code-splitting scenarios, as well as co-locating some endpoints alongside feature folders if desired.
To illustrate this process, let's switch the getUsers endpoint to be injected in usersSlice.js, instead of defined in apiSlice.js.
We're already importing apiSlice into usersSlice.js so that we can access the getUsers endpoint, so we can switch to calling apiSlice.injectEndpoints() here instead.
import { apiSlice } from '../api/apiSlice'

export const extendedApiSlice = apiSlice.injectEndpoints({
  endpoints: builder => ({
    getUsers: builder.query({
      query: () => '/users'
    })
  })
})

export const { useGetUsersQuery } = extendedApiSlice

export const selectUsersResult = extendedApiSlice.endpoints.getUsers.select()
injectEndpoints() mutates the original API slice object to add the additional endpoint definitions, and then returns it. The actual caching reducer and middleware that we originally added to the store still work okay as-is. At this point, apiSlice and extendedApiSlice are the same object, but it can be helpful to refer to the extendedApiSlice object instead of apiSlice here as a reminder to ourselves. (This is more important if you're using TypeScript, because only the extendedApiSlice value has the added types for the new endpoints.)
At the moment, the only file that references the getUsers endpoint is our index file, which is dispatching the initiate thunk. We need to update that to import the extended API slice instead:
// omit other imports
- import { apiSlice } from './features/api/apiSlice'
+ import { extendedApiSlice } from './features/users/usersSlice'

async function main() {
  // Start our mock API server
  await worker.start({ onUnhandledRequest: 'bypass' })

-  store.dispatch(apiSlice.endpoints.getUsers.initiate())
+  store.dispatch(extendedApiSlice.endpoints.getUsers.initiate())

  ReactDOM.render(
    <React.StrictMode>
      <Provider store={store}>
        <App />
      </Provider>
    </React.StrictMode>,
    document.getElementById('root')
  )
}
main()
Alternately, you could just export the specific endpoints themselves from the slice file.
Manipulating Response Data
So far, all of our query endpoints have simply stored the response data from the server exactly as it was received in the body. getPosts and getUsers both expect the server to return an array, and getPost expects the individual Post object as the body.
It's common for clients to need to extract pieces of data from the server response, or to transform the data in some way before caching it. For example, what if the /getPost request returns a body like {post: {id}}, with the data nested?
There's a couple ways that we could handle this conceptually. One option would be to extract the responseData.post field and store that in the cache, instead of the entire body. Another would be to store the entire response data in the cache, but have our components specify just a specific piece of that cached data that they need.
Transforming Responses
Endpoints can define a transformResponse handler that can extract or modify the data received from the server before it's cached. For the getPost example, we could have transformResponse: (responseData) => responseData.post, and it would cache just the actual Post object instead of the entire body of the response.
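Since transformResponse is just a function from the raw response body to the value that gets cached, the nested-body example can be shown directly. The sample body here is hypothetical, matching the {post: {id}} shape described above.

```javascript
// transformResponse maps the raw response body to the value RTK Query caches.
// For a hypothetical nested body like { post: { ... } }:
const transformResponse = responseData => responseData.post

const rawBody = { post: { id: 1, title: 'First post' } }

// This is the value that would end up in the cache: just the inner Post object
const cached = transformResponse(rawBody)
```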
In Part 6: Performance and Normalization, we discussed reasons why it's useful to store data in a normalized structure. In particular, it lets us look up and update items based on an ID, rather than having to loop over an array to find the right item.
Our selectUserById selector currently has to loop over the cached array of users to find the right User object. If we were to transform the response data to be stored using a normalized approach, we could simplify that to directly find the user by ID.
We were previously using createEntityAdapter in usersSlice to manage normalized users data. We can integrate createEntityAdapter into our extendedApiSlice, and actually use createEntityAdapter to transform the data before it's cached. We'll uncomment the usersAdapter lines we originally had, and use its update functions and selectors again.
import { createEntityAdapter, createSelector } from '@reduxjs/toolkit'
import { apiSlice } from '../api/apiSlice'

const usersAdapter = createEntityAdapter()

const initialState = usersAdapter.getInitialState()

export const extendedApiSlice = apiSlice.injectEndpoints({
  endpoints: builder => ({
    getUsers: builder.query({
      query: () => '/users',
      transformResponse: responseData => {
        return usersAdapter.setAll(initialState, responseData)
      }
    })
  })
})

export const { useGetUsersQuery } = extendedApiSlice

// Calling `someEndpoint.select(someArg)` generates a new selector that will return
// the query result object for a query with those parameters.
// To generate a selector for a specific query argument, call `select(theQueryArg)`.
// In this case, the users query has no params, so we don't pass anything to select()
export const selectUsersResult = extendedApiSlice.endpoints.getUsers.select()

const selectUsersData = createSelector(
  selectUsersResult,
  usersResult => usersResult.data
)

export const { selectAll: selectAllUsers, selectById: selectUserById } =
  usersAdapter.getSelectors(state => selectUsersData(state) ?? initialState)
We've added a transformResponse option to the getUsers endpoint. It receives the entire response data body as its argument, and should return the actual data to be cached. By calling usersAdapter.setAll(initialState, responseData), it will return the standard {ids: [], entities: {}} normalized data structure containing all of the received items.
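What setAll produces can be sketched in plain JavaScript. This is a simplified stand-in for createEntityAdapter's real setAll, written out so the {ids: [], entities: {}} shape and the O(1) lookup it enables are concrete.

```javascript
// Plain-JS sketch of what an entity adapter's setAll produces:
// the { ids: [], entities: {} } normalized shape.
function setAll(state, items) {
  const ids = items.map(item => item.id)
  const entities = {}
  for (const item of items) {
    entities[item.id] = item
  }
  return { ...state, ids, entities }
}

const initialState = { ids: [], entities: {} }

const normalized = setAll(initialState, [
  { id: 'a', name: 'Aaron' },
  { id: 'b', name: 'Bella' }
])
```

With this shape, a selector can read `entities[userId]` directly instead of looping over an array.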
The adapter.getSelectors() function needs to be given an "input selector" so it knows where to find that normalized data. In this case, the data is nested down inside the RTK Query cache reducer, so we select the right field out of the cache state.
Normalized vs Document Caches
It's worth stepping back for a minute to discuss what we just did further.
You may have heard the term "normalized cache" in relation to other data fetching libraries like Apollo. It's important to understand that RTK Query uses a "document cache" approach, not a "normalized cache".
A fully normalized cache tries to deduplicate similar items across all queries, based on item type and ID. As an example, say that we have an API slice with getTodos and getTodo endpoints, and our components make the following queries:
getTodos()
getTodos({filter: 'odd'})
getTodo({id: 1})
Each of these query results would include a Todo object that looks like {id: 1}.
在完全规范化的数据去重缓存中,只会存储此 Todo 对象的单个副本。然而,RTK Query 将每个查询结果独立保存在缓存中。因此,这将导致该 Todo 的三个独立副本缓存在 Redux 存储中。但是,如果所有端点始终提供相同的标签(例如 {type: 'Todo', id: 1}
),则使该标签无效将强制所有匹配的端点重新获取其数据以保持一致性。
¥In a fully normalized de-duplicating cache, only a single copy of this Todo object would be stored. However, RTK Query saves each query result independently in the cache. So, this would result in three separate copies of this Todo being cached in the Redux store. However, if all the endpoints are consistently providing the same tags (such as {type: 'Todo', id: 1}
), then invalidating that tag will force all the matching endpoints to refetch their data for consistency.
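举例来说,要让各个端点一致地提供同一个标签,providesTags 回调通常会为收到的每个条目生成一个带 ID 的标签。下面的草图遵循本教程的标签约定,但其中的 getTodos 端点本身只是假设的示例。
¥As an example, to have the endpoints consistently provide the same tag, a providesTags callback would typically emit one ID-based tag per received item. The sketch below follows this tutorial's tag conventions, but the getTodos endpoint itself is a hypothetical example.

```javascript
// Sketch of a providesTags callback that emits one tag per received item,
// plus a list-level tag. Tag shapes follow RTKQ conventions; the endpoint
// using it is hypothetical.
const provideTodoTags = (result = []) => [
  { type: 'Todo', id: 'LIST' },
  ...result.map(({ id }) => ({ type: 'Todo', id }))
]

// Would be used inside an endpoint definition like:
//   getTodos: builder.query({
//     query: () => '/todos',
//     providesTags: provideTodoTags
//   })

console.log(provideTodoTags([{ id: 1 }, { id: 2 }]))
// [{type: 'Todo', id: 'LIST'}, {type: 'Todo', id: 1}, {type: 'Todo', id: 2}]
```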
RTK 查询故意不实现可跨多个请求删除重复项的缓存。有几个原因:
¥RTK Query deliberately does not implement a cache that would deduplicate identical items across multiple requests. There are several reasons for this:
完全标准化的跨查询共享缓存是一个很难解决的问题
¥A fully normalized shared-across-queries cache is a hard problem to solve
我们现在没有时间、资源或兴趣来尝试解决这个问题
¥We don't have the time, resources, or interest in trying to solve that right now
在许多情况下,当数据失效时简单地重新获取数据效果很好并且更容易理解
¥In many cases, simply re-fetching data when it's invalidated works well and is easier to understand
至少,RTKQ 可以帮助解决 "获取一些数据" 的一般用例,这是很多人的一大痛点
¥At a minimum, RTKQ can help solve the general use case of "fetch some data", which is a big pain point for a lot of people
相比之下,我们只是标准化了 getUsers
端点的响应数据,因为它被存储为 {[id]: value}
查找表。然而,这和 "标准化缓存" 不一样 - 我们只是改变了这一响应的存储方式,而不是跨端点或请求删除重复的结果。
¥In comparison, we just normalized the response data for the getUsers
endpoint, in that it's being stored as an {[id]: value}
lookup table. However, this is not the same thing as a "normalized cache" - we only transformed how this one response is stored rather than deduplicating results across endpoints or requests.
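为直观起见,下面用纯 JavaScript 勾勒出这种 "单个响应的规范化" 大致做了什么(实际应用代码里是由 createEntityAdapter 的 setAll 完成的;这里的 normalize 函数与数据仅为示意)。
¥For a concrete picture, here's a plain-JavaScript sketch of roughly what that "normalize one response" step does (in the actual app code, createEntityAdapter's setAll does this for us; the normalize function and data here are illustrative).

```javascript
// Plain-JS sketch of the normalization transformResponse performs here:
// an array of items becomes the standard {ids: [], entities: {}} shape.
const normalize = items => ({
  ids: items.map(item => item.id),
  entities: Object.fromEntries(items.map(item => [item.id, item]))
})

const users = [
  { id: 'u1', name: 'Ada' },
  { id: 'u2', name: 'Ben' }
]

console.log(normalize(users))
// { ids: ['u1', 'u2'], entities: { u1: {...}, u2: {...} } }
```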
从结果中选择值
¥Selecting Values from Results
从旧 postsSlice
读取的最后一个组件是 <UserPage>
,它根据当前用户过滤帖子列表。我们已经看到,我们可以使用 useGetPostsQuery()
获取整个帖子列表,然后在组件中对其进行转换,例如在 useMemo
内部进行排序。查询钩子还使我们能够通过提供 selectFromResult
选项来选择缓存状态的片段,并且仅在所选片段发生更改时才重新渲染。
¥The last component that is reading from the old postsSlice
is <UserPage>
, which filters the list of posts based on the current user. We've already seen that we can get the entire list of posts with useGetPostsQuery()
and then transform it in the component, such as sorting inside of a useMemo
. The query hooks also give us the ability to select pieces of the cached state by providing a selectFromResult
option, and only re-render when the selected pieces change.
我们可以使用 selectFromResult
让 <UserPage>
从缓存中读取经过过滤的帖子列表。然而,为了让 selectFromResult
避免不必要的重新渲染,我们需要确保我们提取的任何数据都被正确记忆。为此,我们应该创建一个新的选择器实例,<UsersPage>
组件可以在每次渲染时重用该实例,以便选择器根据其输入记住结果。
¥We can use selectFromResult
to have <UserPage>
read just a filtered list of posts from the cache. However, in order for selectFromResult
to avoid unnecessary re-renders, we need to ensure that whatever data we extract is memoized correctly. To do this, we should create a new selector instance that the <UsersPage>
component can reuse every time it renders, so that the selector memoizes the result based on its inputs.
import React, { useMemo } from 'react'
import { useSelector } from 'react-redux'

import { createSelector } from '@reduxjs/toolkit'
import { selectUserById } from '../users/usersSlice'
import { useGetPostsQuery } from '../api/apiSlice'
export const UserPage = ({ match }) => {
const { userId } = match.params
const user = useSelector(state => selectUserById(state, userId))
const selectPostsForUser = useMemo(() => {
const emptyArray = []
// Return a unique selector instance for this page so that
// the filtered results are correctly memoized
return createSelector(
res => res.data,
(res, userId) => userId,
(data, userId) => data?.filter(post => post.user === userId) ?? emptyArray
)
}, [])
// Use the same posts query, but extract only part of its data
const { postsForUser } = useGetPostsQuery(undefined, {
selectFromResult: result => ({
// We can optionally include the other metadata fields from the result here
...result,
// Include a field called `postsForUser` in the hook result object,
// which will be a filtered list of posts
postsForUser: selectPostsForUser(result, userId)
})
})
// omit rendering logic
}
我们在这里创建的记忆选择器函数有一个关键的不同之处。通常,选择器期望整个 Redux state
作为他们的第一个参数,并从 state
中提取或导出一个值。然而,在这种情况下,我们只处理保存在缓存中的 "result" 值。结果对象内部有一个 data
字段,其中包含我们需要的实际值,以及一些请求元数据字段。
¥There's a key difference with the memoized selector function we've created here. Normally, selectors expect the entire Redux state
as their first argument, and extract or derive a value from state
. However, in this case we're only dealing with the "result" value that is kept in the cache. The result object has a data
field inside with the actual values we need, as well as some of the request metadata fields.
我们的 selectFromResult
回调从服务器接收包含原始请求元数据的 result
对象和 data
,并且应该返回一些提取或派生的值。由于查询钩子会向此处返回的任何内容添加额外的 refetch
方法,因此最好始终从 selectFromResult
返回一个对象,其中包含你需要的字段。
¥Our selectFromResult
callback receives the result
object containing the original request metadata and the data
from the server, and should return some extracted or derived values. Because query hooks add an additional refetch
method to whatever is returned here, it's preferable to always return an object from selectFromResult
with the fields inside that you need.
由于 result
保存在 Redux 存储中,因此我们无法更改它 - 我们需要返回一个新对象。查询钩子将对此返回的对象进行 "shallow" 比较,并且仅在其中一个字段发生更改时重新渲染组件。我们可以通过仅返回该组件所需的特定字段来优化重新渲染 - 如果我们不需要其余的元数据标志,我们可以完全省略它们。如果你确实需要它们,你可以扩展原始 result
值以将它们包含在输出中。
¥Since result
is being kept in the Redux store, we can't mutate it - we need to return a new object. The query hook will do a "shallow" comparison on this returned object, and only re-render the component if one of the fields has changed. We can optimize re-renders by only returning the specific fields needed by this component - if we don't need the rest of the metadata flags, we could omit them entirely. If you do need them, you can spread the original result
value to include them in the output.
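这里所说的 "shallow" 比较大致如下面的草图所示(仅为示意,并非 RTKQ 的真实实现)。它也解释了为什么要记忆过滤后的数组:每次返回新的数组引用都会导致浅比较失败,从而触发重新渲染。
¥The "shallow" comparison described here works roughly like the sketch below (illustrative only, not RTKQ's actual implementation). It also shows why memoizing the filtered array matters: returning a new array reference each time would fail the shallow check and trigger a re-render.

```javascript
// Illustrative sketch of a shallow equality check like the one the query
// hook performs on the object returned from selectFromResult.
const shallowEqual = (a, b) => {
  const aKeys = Object.keys(a)
  const bKeys = Object.keys(b)
  return (
    aKeys.length === bKeys.length &&
    aKeys.every(key => Object.is(a[key], b[key]))
  )
}

const posts = [{ id: 1 }]

// Same field references: no re-render needed
console.log(shallowEqual({ postsForUser: posts }, { postsForUser: posts })) // true

// A new array reference (even with equal contents) would trigger a re-render,
// which is why the memoized selector reuses the same filtered array
console.log(shallowEqual({ postsForUser: posts }, { postsForUser: [{ id: 1 }] })) // false
```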
在本例中,我们将调用字段 postsForUser
,并且可以从钩子结果中解构该新字段。通过每次调用 selectPostsForUser(result, userId)
,它都会记住过滤后的数组,并且只有在获取的数据或用户 ID 发生变化时才重新计算它。
¥In this case, we'll call the field postsForUser
, and we can destructure that new field from the hook result. By calling selectPostsForUser(result, userId)
every time, it will memoize the filtered array and only recalculate it if the fetched data or the user ID changes.
比较转换方法
¥Comparing Transformation Approaches
我们现在已经看到了管理转变响应的三种不同方法:
¥We've now seen three different ways that we can manage transforming responses:
将原始响应保留在缓存中,读取组件中的完整结果并导出值
¥Keep original response in cache, read full result in component and derive values
将原始响应保留在缓存中,使用
selectFromResult
读取派生结果¥Keep original response in cache, read derived result with
selectFromResult
在存储到缓存之前转换响应
¥Transform response before storing in cache
这些方法中的每一种都可以在不同的情况下发挥作用。以下是一些关于何时应考虑使用它们的建议:
¥Each of these approaches can be useful in different situations. Here's some suggestions for when you should consider using them:
transformResponse
:端点的所有使用者都需要特定的格式,例如标准化响应以实现按 ID 更快的查找¥
transformResponse
: all consumers of the endpoint want a specific format, such as normalizing the response to enable faster lookups by IDselectFromResult
:端点的某些消费者只需要部分数据,例如过滤后的列表¥
selectFromResult
: some consumers of the endpoint only need partial data, such as a filtered list每个组件/
useMemo
:当只有某些特定组件需要转换缓存数据时¥per-component /
useMemo
: when only some specific components need to transform the cached data
高级缓存更新
¥Advanced Cache Updates
我们已经完成了帖子和用户数据的更新,所以剩下的就是处理反应和通知。将这些切换为使用 RTK 查询将使我们有机会尝试一些可用于处理 RTK 查询的缓存数据的高级技术,并使我们能够为用户提供更好的体验。
¥We've completed updating our posts and users data, so all that's left is working with reactions and notifications. Switching these to use RTK Query will give us a chance to try out some of the advanced techniques available for working with RTK Query's cached data, and allow us to provide a better experience for our users.
持久化反应
¥Persisting Reactions
最初,我们只跟踪客户端的反应,并没有将它们持久化到服务器。让我们添加一个新的 addReaction
突变,并在用户每次单击反应按钮时使用它来更新服务器上相应的 Post
。
¥Originally, we only tracked reactions on the client side and did not persist them to the server. Let's add a new addReaction
mutation and use that to update the corresponding Post
on the server every time the user clicks a reaction button.
export const apiSlice = createApi({
reducerPath: 'api',
baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
tagTypes: ['Post'],
endpoints: builder => ({
// omit other endpoints
addReaction: builder.mutation({
query: ({ postId, reaction }) => ({
url: `posts/${postId}/reactions`,
method: 'POST',
// In a real app, we'd probably need to base this on user ID somehow
// so that a user can't do the same reaction more than once
body: { reaction }
}),
invalidatesTags: (result, error, arg) => [
{ type: 'Post', id: arg.postId }
]
})
})
})
export const {
useGetPostsQuery,
useGetPostQuery,
useAddNewPostMutation,
useEditPostMutation,
useAddReactionMutation
} = apiSlice
与我们的其他突变类似,我们采用一些参数并向服务器发出请求,并在请求正文中包含一些数据。由于这个示例应用很小,我们将只给出反应的名称,并让服务器在这篇文章中增加该反应类型的计数器。
¥Similar to our other mutations, we take some parameters and make a request to the server, with some data in the body of the request. Since this example app is small, we'll just give the name of the reaction, and let the server increment the counter for that reaction type on this post.
我们已经知道我们需要重新获取这篇文章才能看到客户端上的任何数据更改,因此我们可以根据其 ID 使该特定 Post
条目无效。
¥We already know that we need to refetch this post in order to see any of the data change on the client, so we can invalidate this specific Post
entry based on its ID.
完成后,让我们更新 <ReactionButtons>
以使用此突变。
¥With that in place, let's update <ReactionButtons>
to use this mutation.
import React from 'react'
import { useAddReactionMutation } from '../api/apiSlice'
const reactionEmoji = {
thumbsUp: '👍',
hooray: '🎉',
heart: '❤️',
rocket: '🚀',
eyes: '👀'
}
export const ReactionButtons = ({ post }) => {
const [addReaction] = useAddReactionMutation()
const reactionButtons = Object.entries(reactionEmoji).map(
([reactionName, emoji]) => {
return (
<button
key={reactionName}
type="button"
className="muted-button reaction-button"
onClick={() => {
addReaction({ postId: post.id, reaction: reactionName })
}}
>
{emoji} {post.reactions[reactionName]}
</button>
)
}
)
return <div>{reactionButtons}</div>
}
让我们看看实际效果!转到主 <PostsList>
,然后单击其中一个反应,看看会发生什么。
¥Let's see this in action! Go to the main <PostsList>
, and click one of the reactions to see what happens.
呃哦。整个 <PostsList>
组件都变灰了,因为我们仅仅为了响应一篇帖子的更新,就重新获取了整个帖子列表。由于我们的模拟 API 服务器被设置为在响应前延迟 2 秒,这个问题被刻意放大了,但即使响应更快,这仍然不是良好的用户体验。
¥Uh-oh. The entire <PostsList>
component was grayed out, because we just refetched the entire list of posts in response to that one post being updated. This is deliberately more visible because our mock API server is set to have a 2-second delay before responding, but even if the response is faster, this still isn't a good user experience.
实现乐观更新
¥Implementing Optimistic Updates
对于添加反应之类的小更新,我们可能不需要重新获取整个帖子列表。相反,我们可以尝试只更新客户端上已缓存的数据,以匹配我们期望在服务器上发生的情况。此外,如果我们立即更新缓存,用户在单击按钮时会得到即时反馈,而不必等待响应返回。这种立即更新客户端状态的方法称为 "乐观更新",它是 Web 应用中的常见模式。
¥For a small update like adding a reaction, we probably don't need to re-fetch the entire list of posts. Instead, we could try just updating the already-cached data on the client to match what we expect to have happen on the server. Also, if we update the cache immediately, the user gets instant feedback when they click the button instead of having to wait for the response to come back. This approach of updating client state right away is called an "optimistic update", and it's a common pattern in web apps.
RTK 查询允许你基于 "请求生命周期" 处理程序来修改客户端缓存,从而实现乐观更新。端点可以定义一个 onQueryStarted
函数,该函数将在请求开始时调用,并且我们可以在该处理程序中运行其他逻辑。
¥RTK Query lets you implement optimistic updates by modifying the client-side cache based on "request lifecycle" handlers. Endpoints can define an onQueryStarted
function that will be called when a request starts, and we can run additional logic in that handler.
export const apiSlice = createApi({
reducerPath: 'api',
baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
tagTypes: ['Post'],
endpoints: builder => ({
// omit other endpoints
addReaction: builder.mutation({
query: ({ postId, reaction }) => ({
url: `posts/${postId}/reactions`,
method: 'POST',
// In a real app, we'd probably need to base this on user ID somehow
// so that a user can't do the same reaction more than once
body: { reaction }
}),
async onQueryStarted({ postId, reaction }, { dispatch, queryFulfilled }) {
// `updateQueryData` requires the endpoint name and cache key arguments,
// so it knows which piece of cache state to update
const patchResult = dispatch(
apiSlice.util.updateQueryData('getPosts', undefined, draft => {
// The `draft` is Immer-wrapped and can be "mutated" like in createSlice
const post = draft.find(post => post.id === postId)
if (post) {
post.reactions[reaction]++
}
})
)
try {
await queryFulfilled
} catch {
patchResult.undo()
}
}
})
})
})
onQueryStarted
处理程序接收两个参数。第一个是请求开始时传递的缓存键 arg
。第二个是一个对象,其中包含一些与 createAsyncThunk
({dispatch, getState, extra, requestId}
) 中的 thunkApi
相同的字段,但也包含一个称为 queryFulfilled
的 Promise
。该 Promise
将在请求返回时解析,并根据请求执行或拒绝。
¥The onQueryStarted
handler receives two parameters. The first is the cache key arg
that was passed when the request started. The second is an object that contains some of the same fields as the thunkApi
in createAsyncThunk
( {dispatch, getState, extra, requestId}
), but also a Promise
called queryFulfilled
. This Promise
will resolve when the request returns, and either fulfill or reject based on the request.
API 切片对象包含一个 updateQueryData
util 函数,可让我们更新缓存的值。它需要三个参数:要更新的端点的名称、用于标识特定缓存数据的相同缓存键值以及更新缓存数据的回调。updateQueryData
使用 Immer,因此你可以像在 createSlice
中一样对草稿缓存数据进行 "mutate"。
¥The API slice object includes a updateQueryData
util function that lets us update cached values. It takes three arguments: the name of the endpoint to update, the same cache key value used to identify the specific cached data, and a callback that updates the cached data. updateQueryData
uses Immer, so you can "mutate" the drafted cache data the same way you would in createSlice
.
我们可以通过在 getPosts
缓存中查找特定的 Post
条目来实现乐观更新,并对其进行 "mutating" 以增加反应计数器。
¥We can implement the optimistic update by finding the specific Post
entry in the getPosts
cache, and "mutating" it to increment the reaction counter.
updateQueryData
生成一个操作对象,其中包含我们所做更改的补丁差异。当我们调度该操作时,返回值是 patchResult
对象。如果我们调用 patchResult.undo()
,它会自动调度一个操作来反转补丁差异更改。
¥updateQueryData
generates an action object with a patch diff of the changes we made. When we dispatch that action, the return value is a patchResult
object. If we call patchResult.undo()
, it automatically dispatches an action that reverses the patch diff changes.
默认情况下,我们预计请求会成功。如果请求失败,我们可以 await queryFulfilled
、捕获失败并撤消补丁更改以恢复乐观更新。
¥By default, we expect that the request will succeed. In case the request fails, we can await queryFulfilled
, catch a failure, and undo the patch changes to revert the optimistic update.
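这种 "先改、失败再回滚" 的模式可以用纯 JavaScript 概括如下(仅为示意,并非 RTKQ 内部实现):立即应用预期的变更,同时保留一个 undo 函数,以便请求失败时恢复。
¥This "apply now, roll back on failure" pattern can be summed up in plain JavaScript like the sketch below (illustrative only, not RTKQ's internals): apply the expected change immediately, while keeping an undo function to revert it if the request fails.

```javascript
// Tiny plain-JS sketch of the optimistic-update pattern.
// Apply the expected change right away and return an undo function,
// analogous to the patchResult returned by updateQueryData.
function applyOptimisticReaction(cachedPosts, postId, reaction) {
  const post = cachedPosts.find(p => p.id === postId)
  if (!post) return { undo: () => {} }
  post.reactions[reaction] += 1
  return {
    undo: () => {
      post.reactions[reaction] -= 1
    }
  }
}

const cache = [{ id: 'p1', reactions: { thumbsUp: 0 } }]

const patch = applyOptimisticReaction(cache, 'p1', 'thumbsUp')
console.log(cache[0].reactions.thumbsUp) // 1 - UI updates immediately

// If the server request later fails, roll the change back:
patch.undo()
console.log(cache[0].reactions.thumbsUp) // 0
```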
对于这种情况,我们还删除了刚刚添加的 invalidatesTags
行,因为我们不想在单击反应按钮时重新获取帖子。
¥For this case, we've also removed the invalidatesTags
line we'd just added, since we don't want to refetch the posts when we click a reaction button.
现在,如果我们快速单击反应按钮几次,我们应该每次都会在 UI 中看到数字增量。如果我们查看“网络”选项卡,我们还会看到每个单独的请求也发送到服务器。
¥Now, if we click several times on a reaction button quickly, we should see the number increment in the UI each time. If we look at the Network tab, we'll also see each individual request go out to the server as well.
流式缓存更新
¥Streaming Cache Updates
我们的最后一个功能是通知选项卡。当我们最初在 第 6 部分 中构建此功能时,我们说 "在真实的应用中,每次发生事情时服务器都会向我们的客户端推送更新"。我们最初通过添加 "刷新通知" 按钮来伪造该功能,并让它发出 HTTP GET
请求以获取更多通知条目。
¥Our final feature is the notifications tab. When we originally built this feature in Part 6, we said that "in a real app, the server would push updates to our client every time something happens". We initially faked that feature by adding a "Refresh Notifications" button, and having it make an HTTP GET
request for more notifications entries.
应用通常会发出初始请求以从服务器获取数据,然后打开 Websocket 连接以随着时间的推移接收其他更新。RTK Query 提供了 onCacheEntryAdded
端点生命周期处理程序,让我们可以对缓存数据实现 "流式更新"。我们将使用该功能来实现更现实的通知管理方法。
¥It's common for apps to make an initial request to fetch data from the server, and then open up a Websocket connection to receive additional updates over time. RTK Query provides an onCacheEntryAdded
endpoint lifecycle handler that lets us implement "streaming updates" to cached data. We'll use that capability to implement a more realistic approach to managing notifications.
我们的 src/api/server.js
文件已经配置了一个模拟 Websocket 服务器,类似于模拟 HTTP 服务器。我们将编写一个新的 getNotifications
端点来获取初始通知列表,然后建立 Websocket 连接以监听未来的更新。我们仍然需要手动告诉模拟服务器何时发送新通知,因此我们将继续通过单击按钮强制更新来伪造这一点。
¥Our src/api/server.js
file has a mock Websocket server already configured, similar to the mock HTTP server. We'll write a new getNotifications
endpoint that fetches the initial list of notifications, and then establishes the Websocket connection to listen for future updates. We still need to manually tell the mock server when to send new notifications, so we'll continue faking that by having a button we click to force the update.
我们将像对 getUsers
所做的那样,在 notificationsSlice
中注入 getNotifications
端点,只是为了表明这是可能的。
¥We'll inject the getNotifications
endpoint in notificationsSlice
like we did with getUsers
, just to show it's possible.
import { createSelector } from '@reduxjs/toolkit'
import { forceGenerateNotifications } from '../../api/server'
import { apiSlice } from '../api/apiSlice'
export const extendedApi = apiSlice.injectEndpoints({
endpoints: builder => ({
getNotifications: builder.query({
query: () => '/notifications',
async onCacheEntryAdded(
arg,
{ updateCachedData, cacheDataLoaded, cacheEntryRemoved }
) {
// create a websocket connection when the cache subscription starts
const ws = new WebSocket('ws://localhost')
try {
// wait for the initial query to resolve before proceeding
await cacheDataLoaded
// when data is received from the socket connection to the server,
// update our query result with the received message
const listener = event => {
const message = JSON.parse(event.data)
switch (message.type) {
case 'notifications': {
updateCachedData(draft => {
// Insert all received notifications from the websocket
// into the existing RTKQ cache array
draft.push(...message.payload)
draft.sort((a, b) => b.date.localeCompare(a.date))
})
break
}
default:
break
}
}
ws.addEventListener('message', listener)
} catch {
// no-op in case `cacheEntryRemoved` resolves before `cacheDataLoaded`,
// in which case `cacheDataLoaded` will throw
}
// cacheEntryRemoved will resolve when the cache subscription is no longer active
await cacheEntryRemoved
// perform cleanup steps once the `cacheEntryRemoved` promise resolves
ws.close()
}
})
})
})
export const { useGetNotificationsQuery } = extendedApi
const emptyNotifications = []
export const selectNotificationsResult =
extendedApi.endpoints.getNotifications.select()
const selectNotificationsData = createSelector(
selectNotificationsResult,
notificationsResult => notificationsResult.data ?? emptyNotifications
)
export const fetchNotificationsWebsocket = () => (dispatch, getState) => {
const allNotifications = selectNotificationsData(getState())
const [latestNotification] = allNotifications
const latestTimestamp = latestNotification?.date ?? ''
// Hardcode a call to the mock server to simulate a server push scenario over websockets
forceGenerateNotifications(latestTimestamp)
}
// omit existing slice code
与 onQueryStarted
一样,onCacheEntryAdded
生命周期处理程序接收 arg
缓存键作为其第一个参数,并接收带有 thunkApi
值的选项对象作为第二个参数。options 对象还包含一个 updateCachedData
util 函数和两个生命周期 Promise
- cacheDataLoaded
和 cacheEntryRemoved
。当此订阅的初始数据添加到存储中时,cacheDataLoaded
会解析。当添加此端点 + 缓存密钥的第一个订阅时,会发生这种情况。只要数据的 1+ 个订阅者仍然处于活动状态,缓存条目就会保持活动状态。当订阅者数量变为 0 并且缓存生存期计时器到期时,缓存条目将被删除,并且 cacheEntryRemoved
将解析。通常,使用模式是:
¥Like with onQueryStarted
, the onCacheEntryAdded
lifecycle handler receives the arg
cache key as its first parameter, and an options object with the thunkApi
values as the second parameter. The options object also contains an updateCachedData
util function, and two lifecycle Promise
s - cacheDataLoaded
and cacheEntryRemoved
. cacheDataLoaded
resolves when the initial data for this subscription is added to the store. This happens when the first subscription for this endpoint + cache key is added. As long as 1+ subscribers for the data are still active, the cache entry is kept alive. When the number of subscribers goes to 0 and the cache lifetime timer expires, the cache entry will be removed, and cacheEntryRemoved
will resolve. Typically, the usage pattern is:
立即
await cacheDataLoaded
¥
await cacheDataLoaded
right away创建像 Websocket 一样的服务器端数据订阅
¥Create a server-side data subscription like a Websocket
当收到更新时,根据更新使用
updateCachedData
到 "mutate" 的缓存值¥When an update is received, use
updateCachedData
to "mutate" the cached values based on the update最后是
await cacheEntryRemoved
¥
await cacheEntryRemoved
at the end之后清理订阅
¥Clean up subscriptions afterwards
我们的模拟 Websocket 服务器文件公开了 forceGenerateNotifications
方法来模拟将数据推送到客户端。这取决于了解最新的通知时间戳,因此我们添加一个可以调度的 thunk,它从缓存状态读取最新的时间戳并告诉模拟服务器生成更新的通知。
¥Our mock Websocket server file exposes a forceGenerateNotifications
method to mimic pushing data out to the client. That depends on knowing the most recent notification timestamp, so we add a thunk we can dispatch that reads the latest timestamp from the cache state and tells the mock server to generate newer notifications.
在 onCacheEntryAdded
内部,我们创建了到 localhost
的真实 Websocket
连接。在真实的应用中,这可能是你接收持续更新所需的任何类型的外部订阅或轮询连接。每当模拟服务器向我们发送更新时,我们都会将所有收到的通知推送到缓存中并重新排序。
¥Inside of onCacheEntryAdded
, we create a real Websocket
connection to localhost
. In a real app, this could be any kind of external subscription or polling connection you need to receive ongoing updates. Whenever the mock server sends us an update, we push all of the received notifications into the cache and re-sort it.
当缓存条目被删除时,我们会清理 Websocket 订阅。在此应用中,通知缓存条目永远不会被删除,因为我们永远不会取消订阅数据,但重要的是要了解清理如何适用于真正的应用。
¥When the cache entry is removed, we clean up the Websocket subscription. In this app, the notifications cache entry will never be removed because we never unsubscribe from the data, but it's important to see how the cleanup would work for a real app.
跟踪客户端状态
¥Tracking Client-Side State
我们需要进行最后一组更新。我们的 <Navbar>
组件必须启动通知的获取,而 <NotificationsList>
需要显示具有正确的已读/未读状态的通知条目。然而,我们之前在收到条目时在 notificationsSlice
reducer 中的客户端添加了已读/未读字段,现在通知条目保存在 RTK 查询缓存中。
¥We need to make one final set of updates. Our <Navbar>
component has to initiate the fetching of notifications, and <NotificationsList>
needs to show the notification entries with the correct read/unread status. However, we were previously adding the read/unread fields on the client side in our notificationsSlice
reducer when we received the entries, and now the notification entries are being kept in the RTK Query cache.
我们可以重写 notificationsSlice
,以便它监听任何收到的通知,并跟踪客户端上每个通知条目的一些附加状态。
¥We can rewrite notificationsSlice
so that it listens for any received notifications, and tracks some additional state on the client side for each notification entry.
收到新的通知条目有两种情况:当我们通过 HTTP 获取初始列表时,以及当我们收到通过 Websocket 连接推送的更新时。理想情况下,我们希望使用相同的逻辑来应对这两种情况。我们可以使用 RTK 的 "匹配实用程序" 编写一个案例 reducer,该 reducer 响应多种操作类型而运行。
¥There's two cases when new notification entries are received: when we fetch the initial list over HTTP, and when we receive an update pushed over the Websocket connection. Ideally, we want to use the same logic in response to both cases. We can use RTK's "matching utilities" to write one case reducer that runs in response to multiple action types.
让我们看看添加这个逻辑后 notificationsSlice
是什么样子。
¥Let's see what notificationsSlice
looks like after we add this logic.
import {
createAction,
createSlice,
createEntityAdapter,
createSelector,
isAnyOf
} from '@reduxjs/toolkit'
import { forceGenerateNotifications } from '../../api/server'
import { apiSlice } from '../api/apiSlice'
const notificationsReceived = createAction(
'notifications/notificationsReceived'
)
export const extendedApi = apiSlice.injectEndpoints({
endpoints: builder => ({
getNotifications: builder.query({
query: () => '/notifications',
async onCacheEntryAdded(
arg,
{ updateCachedData, cacheDataLoaded, cacheEntryRemoved, dispatch }
) {
// create a websocket connection when the cache subscription starts
const ws = new WebSocket('ws://localhost')
try {
// wait for the initial query to resolve before proceeding
await cacheDataLoaded
// when data is received from the socket connection to the server,
// update our query result with the received message
const listener = event => {
const message = JSON.parse(event.data)
switch (message.type) {
case 'notifications': {
updateCachedData(draft => {
// Insert all received notifications from the websocket
// into the existing RTKQ cache array
draft.push(...message.payload)
draft.sort((a, b) => b.date.localeCompare(a.date))
})
// Dispatch an additional action so we can track "read" state
dispatch(notificationsReceived(message.payload))
break
}
default:
break
}
}
ws.addEventListener('message', listener)
} catch {
// no-op in case `cacheEntryRemoved` resolves before `cacheDataLoaded`,
// in which case `cacheDataLoaded` will throw
}
// cacheEntryRemoved will resolve when the cache subscription is no longer active
await cacheEntryRemoved
// perform cleanup steps once the `cacheEntryRemoved` promise resolves
ws.close()
}
})
})
})
export const { useGetNotificationsQuery } = extendedApi
// omit selectors and websocket thunk
const notificationsAdapter = createEntityAdapter()
const matchNotificationsReceived = isAnyOf(
notificationsReceived,
extendedApi.endpoints.getNotifications.matchFulfilled
)
const notificationsSlice = createSlice({
name: 'notifications',
initialState: notificationsAdapter.getInitialState(),
reducers: {
allNotificationsRead(state, action) {
Object.values(state.entities).forEach(notification => {
notification.read = true
})
}
},
extraReducers(builder) {
builder.addMatcher(matchNotificationsReceived, (state, action) => {
// Add client-side metadata for tracking new notifications
const notificationsMetadata = action.payload.map(notification => ({
id: notification.id,
read: false,
isNew: true
}))
Object.values(state.entities).forEach(notification => {
// Any notifications we've read are no longer new
notification.isNew = !notification.read
})
notificationsAdapter.upsertMany(state, notificationsMetadata)
})
}
})
export const { allNotificationsRead } = notificationsSlice.actions
export default notificationsSlice.reducer
export const {
selectAll: selectNotificationsMetadata,
selectEntities: selectMetadataEntities
} = notificationsAdapter.getSelectors(state => state.notifications)
发生了很多事情,但让我们逐一分解这些变化。
¥There's a lot going on, but let's break down the changes one at a time.
目前还没有一个好的方法可以让 notificationsSlice
reducer 知道我们何时通过 Websocket 收到了新通知的更新列表。因此,我们将导入 createAction
,专门为 "收到一些通知" 情况定义一个新的操作类型,并在更新缓存状态后分派该操作。
¥There isn't currently a good way for the notificationsSlice
reducer to know when we've received an updated list of new notifications via the Websocket. So, we'll import createAction
, define a new action type specifically for the "received some notifications" case, and dispatch that action after updating the cache state.
我们希望对“fulfilled getNotifications
”操作和 "从 Websocket 收到" 操作运行相同的 "添加读取/新元数据" 逻辑。我们可以通过调用 isAnyOf()
并传入每个动作创建者来创建一个新的 "matcher" 函数。如果当前操作与这些类型中的任何一个匹配,则 matchNotificationsReceived
匹配器函数将返回 true。
¥We want to run the same "add read/new metadata" logic for both the "fulfilled getNotifications
" action and the "received from Websocket" action. We can create a new "matcher" function by calling isAnyOf()
and passing in each of those action creators. The matchNotificationsReceived
matcher function will return true if the current action matches either of those types.
以前,我们有一个针对所有通知的规范化查找表,并且 UI 将它们选择为单个排序数组。我们将重新调整此切片的用途,以存储描述已读/未读状态的 "metadata" 对象。
¥Previously, we had a normalized lookup table for all of our notifications, and the UI selected those as a single sorted array. We're going to repurpose this slice to instead store "metadata" objects that describe the read/unread status.
我们可以使用 extraReducers
内部的 builder.addMatcher()
API 添加一个 case 缩减程序,只要我们匹配这两种操作类型之一,该程序就会运行。在其中,我们添加一个新的 "已读/是新的" 元数据条目,该条目通过 ID 对应于每个通知,并将其存储在 notificationsSlice
中。
¥We can use the builder.addMatcher()
API inside of extraReducers
to add a case reducer that runs whenever we match one of those two action types. Inside of there, we add a new "read/isNew" metadata entry that corresponds to each notification by ID, and store that inside of notificationsSlice
.
最后,我们需要更改从此切片导出的选择器。我们不会将 selectAll
导出为 selectAllNotifications
,而是将其导出为 selectNotificationsMetadata
。它仍然返回规范化状态的值数组,但我们正在更改名称,因为项目本身已更改。我们还将导出 selectEntities
选择器,它返回查找表对象本身,作为 selectMetadataEntities
。当我们尝试在 UI 中使用这些数据时,这将很有用。
¥Finally, we need to change the selectors we're exporting from this slice. Instead of exporting selectAll
as selectAllNotifications
, we're going to export it as selectNotificationsMetadata
. It still returns an array of the values from the normalized state, but we're changing the name since the items themselves have changed. We're also going to export the selectEntities
selector, which returns the lookup table object itself, as selectMetadataEntities
. That will be useful when we try to use this data in the UI.
完成这些更改后,我们可以更新 UI 组件以获取和显示通知。
¥With those changes in place, we can update our UI components to fetch and display notifications.
import React from 'react'
import { useDispatch, useSelector } from 'react-redux'
import { Link } from 'react-router-dom'
import {
fetchNotificationsWebsocket,
selectNotificationsMetadata,
useGetNotificationsQuery
} from '../features/notifications/notificationsSlice'
export const Navbar = () => {
const dispatch = useDispatch()
// Trigger initial fetch of notifications and keep the websocket open to receive updates
useGetNotificationsQuery()
const notificationsMetadata = useSelector(selectNotificationsMetadata)
const numUnreadNotifications = notificationsMetadata.filter(
n => !n.read
).length
const fetchNewNotifications = () => {
dispatch(fetchNotificationsWebsocket())
}
let unreadNotificationsBadge
if (numUnreadNotifications > 0) {
unreadNotificationsBadge = (
<span className="badge">{numUnreadNotifications}</span>
)
}
// omit rendering logic
}
在 <NavBar>
中,我们使用 useGetNotificationsQuery()
触发初始通知获取,并切换到从 state.notificationsSlice
读取元数据对象。现在,单击 "刷新" 按钮会触发模拟 Websocket 服务器推送另一组通知。
¥In <NavBar>
, we trigger the initial notifications fetch with useGetNotificationsQuery()
, and switch to reading the metadata objects from state.notificationsSlice
. Clicking the "Refresh" button now triggers the mock Websocket server to push out another set of notifications.
我们的 <NotificationsList>
同样切换到读取缓存的数据和元数据。
¥Our <NotificationsList>
similarly switches over to reading the cached data and metadata.
import React, { useLayoutEffect } from 'react'
import { useDispatch, useSelector } from 'react-redux'
import classnames from 'classnames'
import { formatDistanceToNow, parseISO } from 'date-fns'

import { selectAllUsers } from '../users/usersSlice'

import {
  useGetNotificationsQuery,
  allNotificationsRead,
  selectMetadataEntities
} from './notificationsSlice'
export const NotificationsList = () => {
const dispatch = useDispatch()
const { data: notifications = [] } = useGetNotificationsQuery()
const notificationsMetadata = useSelector(selectMetadataEntities)
const users = useSelector(selectAllUsers)
useLayoutEffect(() => {
dispatch(allNotificationsRead())
})
const renderedNotifications = notifications.map((notification) => {
const date = parseISO(notification.date)
const timeAgo = formatDistanceToNow(date)
const user = users.find((user) => user.id === notification.user) || {
name: 'Unknown User',
}
const metadata = notificationsMetadata[notification.id]
const notificationClassname = classnames('notification', {
new: metadata.isNew,
})
// omit rendering logic
}
我们从缓存中读取通知列表,并从 notificationsSlice 中读取新的元数据条目,然后继续以与以前相同的方式显示它们。
¥We read the list of notifications from cache and the new metadata entries from the notificationsSlice, and continue displaying them the same way as before.
作为最后一步,我们可以在这里进行一些额外的清理 - postsSlice
不再使用,因此可以完全删除。
¥As a final step, we can do some additional cleanup here - the postsSlice
is no longer being used, so that can be removed entirely.
这样,我们就完成了将应用转换为使用 RTK 查询!所有数据获取均已切换为使用 RTKQ,并且我们通过添加乐观更新和流式更新来改善用户体验。
¥With that, we've finished converting our application over to use RTK Query! All of the data fetching has been switched over to use RTKQ, and we've improved the user experience by adding optimistic updates and streaming updates.
你学到了什么
¥What You've Learned
正如我们所看到的,RTK 查询包含一些强大的选项,用于控制我们管理缓存数据的方式。虽然你可能不会立即需要所有这些选项,但它们提供了灵活性和关键功能来帮助实现特定的应用行为。
¥As we've seen, RTK Query includes some powerful options for controlling how we manage cached data. While you may not need all of these options right away, they provide flexibility and key capabilities to help implement specific application behaviors.
Let's take one last look at the whole application in action:
- Specific cache tags can be used for finer-grained cache invalidation
  - Cache tags can be either 'Post' or {type: 'Post', id}
  - Endpoints can provide or invalidate cache tags based on results and arg cache keys
- RTK Query's APIs are UI-agnostic and can be used outside of React
  - Endpoint objects include functions for initiating requests, generating result selectors, and matching request action objects
- Responses can be transformed in different ways as needed
  - Endpoints can define a transformResponse callback to modify the data before caching
  - Hooks can be given a selectFromResult option to extract/transform data
  - Components can read an entire value and transform with useMemo
- RTK Query has advanced options for manipulating cached data for better user experience
  - The onQueryStarted lifecycle can be used for optimistic updates by updating the cache immediately, before a request returns
  - The onCacheEntryAdded lifecycle can be used for streaming updates by updating the cache over time based on server push connections
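As a quick illustration of the per-ID tag pattern summarized above, a providesTags callback can be written as a plain function. This is a minimal sketch: the helper name providePostTags is hypothetical, while the 'Post' tag type and the 'LIST' sentinel ID follow the convention used in the RTK Query docs.

```javascript
// Hypothetical helper illustrating the { type, id } tag shape.
// Given a fetched array of posts, it produces one tag per post plus a
// 'LIST' sentinel tag, so both single-item edits and list-level changes
// (like adding a post) can trigger the right refetches.
function providePostTags(result) {
  const listTag = { type: 'Post', id: 'LIST' }
  if (!result) {
    // Query errored or has no data yet: still provide the list tag so a
    // later invalidation can refetch this endpoint.
    return [listTag]
  }
  return [listTag, ...result.map(({ id }) => ({ type: 'Post', id }))]
}

// In an endpoint definition this would be used roughly as:
//   getPosts: builder.query({
//     query: () => '/posts',
//     providesTags: (result) => providePostTags(result),
//   })

console.log(providePostTags([{ id: 'a' }, { id: 'b' }]))
```

A matching mutation would then invalidate either a single {type: 'Post', id} tag (editing one post) or the {type: 'Post', id: 'LIST'} tag (adding a post), leaving other cached posts untouched.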
What's Next?
Congratulations, you've completed the Redux Essentials tutorial! You should now have a solid understanding of what Redux Toolkit and React-Redux are, how to write and organize Redux logic, Redux data flow and usage with React, and how to use APIs like configureStore and createSlice. You should also see how RTK Query can simplify the process of fetching and using cached data.
The "What's Next?" section in Part 6 has links to additional resources for app ideas, tutorials, and documentation.
For more details on using RTK Query, see the RTK Query usage guide docs and API reference.
If you're looking for help with Redux questions, come join the #redux channel in the Reactiflux server on Discord.
Thanks for reading through this tutorial, and we hope you enjoy building applications with Redux!